More than just computing power! Cabin-driver integration + AI empowerment, the era of smart cockpit 4.0 is here

Publisher: SparklingSoul | Latest update: 2024-07-18 | Source: 高工智能汽车

The new generation of smart cockpits is undergoing profound changes, and the era of smart cockpit 4.0 is accelerating.


On the one hand, cabin-driver integrated solutions are about to enter the mass-production cycle. Smart cockpit hardware platforms, software architectures, application development models, and organizational structures will all change, with a major impact on automakers and on the smart driving and smart cockpit supply chains.


Notably, the trend toward cabin-driver integration places higher and more comprehensive demands on system suppliers: they must understand both smart cockpits and smart driving, and must upgrade their architecture design, software and hardware, and AI optimization capabilities.


On the other hand, AI large models are being introduced into smart cockpit systems at an accelerating pace, and are beginning to redefine human-computer interaction and the user experience.


The auto market is fiercely competitive: intelligent driving configurations keep being upgraded even as prices keep falling. Against a backdrop of heavily homogenized smart cockpits, the AI smart cockpit delivers a multimodal, humanized, and differentiated human-computer interaction experience through functional optimization and application innovation. That raises the product's emotional value, supports higher premiums and repurchase rates, and will be an effective way for major automakers to break out of the pack.


Since the beginning of this year, flagship models from major automakers have been competing on in-cockpit "end-side small models" and rolling out a variety of generative AI applications. By fusing voice, gesture, touch, and other modalities, they are creating a smarter, multimodal, differentiated user experience and accelerating the move toward the "AI smart cockpit" era.

01


Cockpit SoC platform iteration is accelerating


Cabin-driver integration and the deployment of large models in vehicles will first place new demands on the cockpit computing platform.

In 2021, Qualcomm's 8155 platform entered large-scale mass production. In the second half of 2023, the 8295 began shipping in vehicles, and the 8775 is expected to reach mass production in 2025. The iteration of cockpit hardware platforms has clearly accelerated.

According to public information, a number of suppliers, including Huayang, Beidou Zhilian, Chelian Tianxia, Hangsheng, and Thundersoft, have launched cabin-driver integration solutions based on the Qualcomm 8775 platform.

At this year's Beijing Auto Show, Nezha Auto, Qualcomm, and Autolink released a cabin-driver fusion platform based on the Snapdragon Ride Flex (SA8775P). The solution uses the Qualcomm 8775 single-chip platform, and Nezha Auto plans to put it on mass-produced vehicles in Q2 2025.

In addition, automotive giants Toyota and Volkswagen have essentially finalized design wins on the Qualcomm 8775, and Tier 1s Desay SV and Bosch are expected to be deeply involved in those projects.

The single-chip cabin-driver solution replaces the two independent chips, Qualcomm 8155 + NVIDIA Orin (smart cockpit + smart driving), integrating software and applications while reducing cost, and is becoming the next mainstream design choice for OEMs and Tier 1s.

Summarizing major suppliers' solutions, the 8775 single-chip platform can, beyond rich smart cockpit functions, also integrate DMS/OMS/CMS/AVM and deploy AI voice large models. On the intelligent driving side, it can support up to urban memory-route driving or commuting NOA, and parking up to HPA.


People from several automakers and Tier 1s said that cabin-driver integration and large-model applications are driving rapid iteration of cockpit chip platforms.


On current cockpit chip platforms, functional integration is already saturated, making it difficult to support large AI model deployment. In particular, AI models are moving from cloud-only deployment to a vehicle-side + cloud architecture, and current platforms clearly fall short in supporting higher-level end-side capabilities such as real-time reasoning and task orchestration.
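The vehicle-side + cloud split described above can be sketched as a simple dispatcher: latency-critical commands stay on the end-side model, while open-ended queries fall back to the cloud. This is a minimal illustrative sketch; all function names, the intent categories, and the confidence threshold are assumptions, not any vendor's real API.

```python
# Hypothetical end-side + cloud hybrid dispatch for an in-cabin assistant.
# run_local_model / call_cloud_model are stand-ins for real models.

LATENCY_CRITICAL_INTENTS = {"vehicle_control", "navigation", "media"}

def run_local_model(utterance: str) -> dict:
    # Stand-in for a small on-device model: a trivial keyword rule
    # classifies the intent so the sketch runs offline.
    if "window" in utterance or "air conditioning" in utterance:
        return {"intent": "vehicle_control", "confidence": 0.9}
    return {"intent": "open_chat", "confidence": 0.4}

def call_cloud_model(utterance: str) -> dict:
    # Stand-in for a large cloud model handling open-ended queries.
    return {"intent": "open_chat", "answer": f"(cloud answer to: {utterance})"}

def dispatch(utterance: str) -> dict:
    """Route latency-critical commands to the end-side model and
    fall back to the cloud for open-ended conversation."""
    local = run_local_model(utterance)
    if local["intent"] in LATENCY_CRITICAL_INTENTS and local["confidence"] >= 0.8:
        return {"route": "device", **local}
    return {"route": "cloud", **call_cloud_model(utterance)}
```

The design choice the sketch illustrates is exactly the one the article names: vehicle control must not wait on a network round trip, while encyclopedic chat can.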


Currently, major automakers are all planning next-generation AI smart cockpit models, and deployment of next-generation cockpit platforms is accelerating. Chip suppliers and major system vendors alike therefore need to bring high-performance products and solutions to market quickly.


Among them, local cockpit chip supplier Xinchi Technology has taken the lead in rolling out a full product line covering cabin-parking integration, cabin-driver integration, and the AI cockpit.


According to the company, the X9 series of cockpit processors integrates high-performance CPUs, GPUs, AI accelerators, and video processors, covering cockpit processor needs from entry-level to flagship application scenarios, and Xinchi has been actively driving the development of AI cockpit products.


As early as 2023, it released its first-generation AI cockpit product, the X9SP, enabling local deployment and acceleration of AI algorithms.

According to the company, the X9SP supports in-cabin multimodal perception and interaction with cloud-based large models. An AI cockpit built on the X9SP can smoothly implement intelligent functions such as in-car user emotion recognition, gesture interaction, smart navigation, proactive recommendations, and automatic call summaries.


At its spring launch event in April 2024, it introduced the new-generation X9CC, targeting the central-computing + zonal-control electronic and electrical architecture. The chip has a built-in high-performance AI unit and supports AI applications such as OMS/DMS and voice recognition; it supports hybrid local + cloud large-model deployment, meets local multimodal perception needs well, and can simultaneously host high-level intelligent driving (driving + parking) algorithms.


Next, Xinchi will launch the AI cockpit processor X10, which will support the Transformer architecture more efficiently and enable pure end-side deployment of large models, bringing users a safer, more efficient, and more personalized AI cockpit experience.


02


Innovative applications of AI cockpits are accelerating


AI large models have begun to fully empower the new generation of smart cockpits in voice, multimodal interaction, personalized scenarios, and other innovative intelligent applications.


Since 2024, many car companies have used in-vehicle voice assistants as a breakthrough point to promote the application of large models in smart cockpits.


Through deep learning and natural language processing, large models can better understand and parse users' voice commands. With richer knowledge reserves and stronger semantic understanding, voice interaction becomes more humanlike, smarter, and more engaging, and a variety of innovative scenario applications can be built on these capabilities.
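The parsing step described above amounts to turning a free-form utterance into a structured cockpit command. The sketch below illustrates the idea under stated assumptions: in production an LLM would be prompted to emit the structured output, while here a regex stands in so the example runs offline, and the {action, target, value} schema is hypothetical.

```python
import re

def parse_command(utterance: str) -> dict:
    """Map a free-form voice utterance to a structured cockpit command.

    A production system would have a large model emit this structure;
    a regex stands in here so the sketch is self-contained."""
    m = re.search(r"set (?:the )?temperature to (\d+)", utterance.lower())
    if m:
        return {"action": "set", "target": "temperature", "value": int(m.group(1))}
    # Anything unrecognized falls through to open conversation.
    return {"action": "chat", "target": None, "value": utterance}
```

For example, `parse_command("Please set the temperature to 22")` yields a `set`/`temperature` command that downstream vehicle services can execute directly.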


In addition, the multimodal nature of large-model technology opens up visual, auditory, and tactile applications, driving interaction from single-mode voice/visual interaction toward a multimodal stage.


For example, after the official launch of NIO's NOMI GPT end-cloud multimodal large model, NOMI gained stronger understanding and generation capabilities: it can set up AI scenes from a single sentence, hold anthropomorphic casual chats, and answer all kinds of offbeat questions via its large-model encyclopedia.


Based on Ideal Auto's fully self-developed multimodal cognitive model Mind GPT, the cockpit AI voice assistant "Ideal Classmate" has evolved into a car assistant, travel assistant, entertainment assistant, and real-time connected encyclopedia tutor.


It also enables multimodal interaction combining voice and visual perception, including executing vehicle-control commands via hand gestures and remotely controlling the rear screen through gesture interaction.


Li Juan, senior director of Ideal Auto's Intelligent Space, noted that the next stage will build on the large model's perception-understanding-decision-execution pipeline, aided by domain-specific models, to deliver proactive scene recommendations to users in real time. "This will be a higher level of human-computer interaction."
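The perception-understanding-decision-execution pipeline mentioned above can be sketched as a four-stage loop. This is purely illustrative: every rule, threshold, sensor field, and action name below is an assumption standing in for the models a real system would use.

```python
# Hypothetical perception -> understanding -> decision -> execution loop
# for proactive scene recommendation in a cockpit.

def perceive(sensors: dict) -> dict:
    # Collect raw cabin/vehicle signals into a state snapshot.
    return {"cabin_temp": sensors["cabin_temp"],
            "driver_yawning": sensors["yawns"] >= 3}

def understand(state: dict) -> str:
    # Interpret the state into a situation label (a model's job in practice).
    if state["driver_yawning"]:
        return "driver_fatigued"
    if state["cabin_temp"] > 28:
        return "cabin_hot"
    return "normal"

def decide(situation: str) -> list:
    # Map situations to proactive recommendations.
    playbook = {
        "driver_fatigued": ["suggest_rest_stop", "play_upbeat_music"],
        "cabin_hot": ["lower_ac_setpoint"],
    }
    return playbook.get(situation, [])

def execute(actions: list) -> list:
    # Dispatch actions to vehicle services; here we just echo them.
    return [f"executed:{a}" for a in actions]

def run_loop(sensors: dict) -> list:
    return execute(decide(understand(perceive(sensors))))
```

The point of the staged structure is that each stage can be swapped independently: the rule-based `understand` and `decide` above are exactly where a perception model and a recommendation model would slot in.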


Zhao Henry, chief designer of BAIC's intelligent cockpit, said BAIC is also rapidly deploying large-model-based cockpit applications, including a chat mode built on natural voice interaction; a personality mode in which stylized replies plus a speaker's timbre yield distinctive voice responses; an in-car task master; and proactively generated scenario modes. BAIC can also use deep learning and large language models to drive an intelligent scene-scheduling engine, creating cockpit interaction that is "dialogue-first, touch-assisted."


At the same time, the deployment of large models will accelerate upgrades to smart cockpit hardware and, especially, software capabilities, and smart cockpits will rapidly evolve toward proactive human-computer interaction.




Copyright © 2005-2024 EEWORLD.com.cn, Inc. All rights reserved 京ICP证060456号 京ICP备10001474号-1 电信业务审批[2006]字第258号函 京公网安备 11010802033920号