Exploring Local LLM-based Copilot Capabilities for Enhanced Vehicle Functionalities

This study aims to enhance vehicle functionality by deploying large language model (LLM)-based copilots on local GPUs. While advanced vehicle voice assistants such as Alexa struggle with complex multi-step commands, the GPUs already present in electric vehicles offer an opportunity to generate API calls with locally loaded LLMs. We examine how effectively local LLMs can understand and execute complex commands directly on these GPUs, avoiding the latency, cost, and internet dependence of cloud-based solutions. This approach may significantly enhance vehicle functionality by leveraging the strong text-to-code and task decomposition capabilities of modern LLMs. More details are available here.
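The decomposition-and-dispatch pipeline described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the vehicle API names, the prompt format, and the JSON plan schema are all assumptions chosen for clarity, and the on-GPU model call is replaced by a stubbed JSON response.

```python
import json

# Hypothetical vehicle API surface; these function names are illustrative
# assumptions, not the actual interface used in the study.
VEHICLE_API = {
    "set_cabin_temperature": lambda celsius: f"cabin set to {celsius}C",
    "open_window": lambda side: f"{side} window opened",
    "start_navigation": lambda destination: f"navigating to {destination}",
}

# Assumed prompt shape asking the locally loaded LLM to decompose a
# multi-step command into a JSON list of API calls.
PROMPT_TEMPLATE = (
    "Decompose the user's command into a JSON list of vehicle API calls.\n"
    "Available functions: {functions}\n"
    'Respond only with JSON like [{{"fn": "...", "args": {{...}}}}].\n'
    "Command: {command}"
)

def build_prompt(command: str) -> str:
    """Render the task-decomposition prompt for the local LLM."""
    return PROMPT_TEMPLATE.format(functions=", ".join(VEHICLE_API), command=command)

def execute_plan(llm_output: str) -> list[str]:
    """Parse the model's JSON plan and dispatch each step to the vehicle API."""
    results = []
    for step in json.loads(llm_output):
        fn = VEHICLE_API[step["fn"]]  # an unknown function name raises KeyError
        results.append(fn(**step["args"]))
    return results

# Stubbed model response standing in for the on-GPU LLM's generation,
# e.g. for "set it to 21 degrees and take me home":
mock_llm_output = json.dumps([
    {"fn": "set_cabin_temperature", "args": {"celsius": 21}},
    {"fn": "start_navigation", "args": {"destination": "home"}},
])
print(execute_plan(mock_llm_output))
```

Constraining the model to a fixed JSON schema over a small, enumerated function set is one common way to make local-model outputs reliably machine-executable without a cloud round trip.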