On January 3, Cerence (CRNC), a U.S.-based in-vehicle AI voice technology company, announced that it has deepened its collaboration with NVIDIA to improve the performance of language models in its in-vehicle systems, and plans to achieve profitability in fiscal 2025.
After the announcement, Cerence shares rose more than 32% in pre-market trading and more than 94% in early trading, bringing the stock's cumulative gain over the past six months to about 165%.
Cerence's "Cloud + Edge" strategy
With the increasing variety of smart car functions, consumers' requirements for in-car voice assistants are also increasing. They expect voice assistants to not only understand instructions quickly and accurately, but also to operate stably in a variety of complex scenarios and strictly protect personal privacy. However, traditional single cloud or edge computing solutions are not able to fully meet these needs.
The cloud computing solution uses powerful cloud servers to perform complex calculations, with unparalleled data processing power and flexibility. Instructions from the user are uploaded to the cloud and analyzed by the deep learning model to return the results. This approach enables voice assistants to perform highly complex tasks such as natural language understanding and semantic analysis. However, this method relies on a stable network connection, and the voice assistant's response speed will be significantly reduced in the event of poor network conditions. In addition, there is a risk of privacy leakage during data transmission.
In contrast, edge computing completes tasks directly inside the car: simple commands such as "open the car window" can be handled immediately, without uploading data to the cloud, which ensures faster responses and better privacy protection. However, because on-board hardware has limited computing resources, edge computing struggles to support complex AI models, which limits the scalability and intelligence of its features.
By combining cloud and edge computing, Cerence addresses both sets of shortcomings: it meets the market's diverse needs while mitigating the risks of each approach's technical limitations. This two-pronged strategy makes Cerence's platform more flexible and more attractive to automakers.
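The hybrid approach described above can be sketched as a simple routing policy: latency-sensitive or privacy-sensitive commands stay on the vehicle, while complex queries go to the cloud when connectivity allows. This is a minimal illustration of the concept; the command set, thresholds, and fallback behavior are assumptions for the sketch, not Cerence's actual implementation.

```python
from dataclasses import dataclass

# Illustrative set of simple commands the on-device (edge) model can handle.
EDGE_COMMANDS = {"open window", "close window", "turn on ac", "volume up"}

@dataclass
class RouteDecision:
    target: str   # "edge" or "cloud"
    reason: str

def route(utterance: str, network_ok: bool) -> RouteDecision:
    """Send simple, latency-sensitive commands to the on-device model;
    fall back to the edge whenever the network is unavailable."""
    normalized = utterance.strip().lower()
    if normalized in EDGE_COMMANDS:
        return RouteDecision("edge", "simple command, handled locally")
    if not network_ok:
        return RouteDecision("edge", "no connectivity, degraded local handling")
    return RouteDecision("cloud", "complex query, needs the larger model")
```

Under this policy, "open window" never leaves the car (fast and private), while a free-form request like route planning uses the cloud model when the network is up, and degrades gracefully to local handling when it is not.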
Cerence and NVIDIA join forces
Cerence's partnership with NVIDIA marks a critical step in the deployment of automotive AI. By leveraging NVIDIA AI Enterprise software and the DRIVE AGX Orin hardware platform, Cerence was able to further optimize its cloud language model, CaLLM, and its edge language model, CaLLM Edge, significantly improving the performance of its in-car voice assistants.
Efficiently run complex AI models
Designed for high-performance AI computing, NVIDIA's platforms can run complex AI models efficiently, ensuring that voice assistants perform well not only in ideal conditions but also in resource-constrained environments. In particular, through the DRIVE AGX Orin in-vehicle hardware, Cerence delivers localized AI capabilities – that is, AI algorithms run directly in the vehicle. This matters most in driving scenarios where the network connection is unstable or unavailable, such as remote areas or underground parking lots, where local execution ensures fast response times and high reliability.
Solve the three major problems of in-vehicle AI assistants
To overcome the three common challenges of in-vehicle AI assistants – latency, insufficient performance, and high resource consumption – NVIDIA provided Cerence with the TensorRT-LLM and NeMo frameworks. These tools help solve the following problems:
Reduced reaction time: In driving scenarios, millisecond reaction times are critical and can have a direct impact on safety. TensorRT-LLM ensures that the voice assistant can respond accurately in the shortest possible time through deep optimization of the model.
Improve performance with limited resources: Computing resources in an in-vehicle environment are nowhere near as abundant as in a data center. TensorRT-LLM's optimizations let the AI model extract maximum performance from limited hardware, improving overall system performance.
Reduced power consumption and resource usage: Through in-depth optimization of hardware and models, the power consumption and computing resource occupation are reduced, enabling advanced AI technologies to run smoothly in resource-constrained in-vehicle devices.
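One of the core techniques behind shrinking a model for constrained in-vehicle hardware is post-training weight quantization: storing weights in low-precision integers instead of 32-bit floats. The toy sketch below shows the idea with a single symmetric int8 scale; real frameworks such as TensorRT-LLM apply far more sophisticated, per-channel and kernel-level optimizations, so treat this purely as an illustration.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights into the int8 range [-127, 127] with one symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.89]   # toy example values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage uses 1 byte per weight instead of 4 (fp32) – a 4x memory
# reduction, at the cost of a bounded rounding error per weight.
```

Smaller weights mean less memory traffic and lower power draw per inference, which is exactly the trade-off that makes advanced models viable on in-vehicle devices.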
Enhance security and reliability
In addition, Cerence has introduced NVIDIA's NeMo Guardrails technology, which adds a "safety net" to its AI assistants. The technology not only filters out incorrect or potentially dangerous operating instructions, but also blocks malicious input, such as attempts to trick the assistant into generating harmful content. This greatly enhances the safety of the driver and passengers, ensuring that every interaction is secure.
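The guardrail idea – screening both the user's input and the model's output before anything reaches the driver – can be illustrated with a deliberately simple keyword filter. NeMo Guardrails is a full framework with configurable, model-driven rails; this toy sketch (with made-up blocklists and a stand-in `model` callable) only demonstrates the two checkpoints it places around an LLM call.

```python
# Hypothetical blocklists for illustration only.
BLOCKED_INPUT = ("ignore previous instructions", "disable the brakes")
BLOCKED_OUTPUT = ("disable airbags",)

def input_allowed(utterance: str) -> bool:
    """Reject prompts that match known malicious or dangerous patterns."""
    text = utterance.lower()
    return not any(pattern in text for pattern in BLOCKED_INPUT)

def output_allowed(response: str) -> bool:
    """Reject model responses containing unsafe instructions."""
    text = response.lower()
    return not any(pattern in text for pattern in BLOCKED_OUTPUT)

def guarded_reply(utterance: str, model) -> str:
    """Wrap a model call with input and output rails."""
    if not input_allowed(utterance):
        return "Sorry, I can't help with that."
    response = model(utterance)
    if not output_allowed(response):
        return "Sorry, I can't help with that."
    return response
```

The key design point is that checks run on both sides of the model: a clean prompt can still produce an unsafe completion, so filtering input alone is not enough.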
Focus on generative AI and strive to return to profitability in FY2025
Over the past 12 months, Cerence has demonstrated strong financial performance, with revenue of $331.5 million and a gross margin of 73.7%. In the most recent fourth quarter, the company achieved revenue of $54.8 million, exceeding market expectations; adjusted EBITDA of negative $1.9 million was also better than expected. Going forward, Cerence expects free cash flow to reach $25 million in fiscal 2025 and plans to return to profitability through its shift to generative AI.
Nils Schanz, Executive Vice President of AI Products & Technologies at Cerence, said: "By optimizing the performance of the CaLLM family of language models, we have not only saved our customers money, but also significantly improved system performance. These improvements help automakers quickly deploy generative AI solutions that provide drivers with a faster, more reliable interactive experience, enhancing their safety, fun, and productivity on the road. As we advance the next generation of our CaLLM-based platform, these advanced capabilities will bring unprecedented convenience to our users."
Rishi Dhall, Vice President of Automotive at NVIDIA, commented: "Large language models (LLMs) open up new user experience possibilities, but their scale and deployment complexity often present challenges for developers. By expanding our collaboration with Cerence, we will leverage NVIDIA's leading AI and accelerated computing technologies to optimize the development and deployment of these large language models. This enables Cerence to deliver AI-driven solutions into the hands of end users more efficiently, driving advances in the intelligent driving experience."
Cerence is at an inflection point, committed to leading the automotive industry into a smarter era through technological innovation and strategic partnerships. As the company progresses toward its goal of returning to profitability in fiscal 2025 and continues to evolve its next-generation AI platform, Cerence will keep delivering superior in-vehicle voice assistant services to customers around the world, while creating greater value for its partners.