Smart cockpits are undergoing profound change, and the era of Smart Cockpit 4.0 is arriving at an accelerating pace.
On the one hand, cockpit-driving integrated solutions are about to enter the mass-production cycle. The smart cockpit hardware platform, software architecture, application development model, and organizational structure will all change, with a major impact on automakers and on the smart driving and smart cockpit supply chains.
It is particularly worth noting that the trend toward cockpit-driving integration places higher, more comprehensive demands on system suppliers: they must understand both smart cockpits and smart driving, and must strengthen their architecture design, hardware and software, and AI optimization capabilities.
On the other hand, large AI models are being introduced into smart cockpit systems at an accelerating pace and are beginning to redefine human-machine interaction and the user experience.
The auto market is fiercely competitive: intelligent driving configurations keep being upgraded even as prices fall. Against a backdrop of serious cockpit homogenization, an AI smart cockpit delivers a multimodal, humanized, and differentiated interaction experience through feature optimization and application innovation. That raises a product's emotional value, supports higher premiums and repurchase rates, and will be an effective way for major automakers to break out of the pack.
Since the beginning of this year, the flagship models of major automakers have begun competing on in-cockpit "end-side small models" and rolling out a range of generative AI applications, creating a smarter, multimodal, differentiated user experience by combining voice, gesture, touch, and other input methods, and accelerating the move toward the "AI smart cockpit" era.
Cockpit SoC platform iteration is accelerating
Sources at automakers and Tier 1 suppliers say that cockpit-driving integration and the application of large models are driving rapid iteration of cockpit chip platforms.
On current cockpit chip platforms, functional integration is already saturated, making it difficult to support the deployment of large AI models. In particular, AI models are transitioning from pure cloud deployment to a vehicle-side + cloud architecture, and support for higher-level capabilities such as real-time reasoning and task orchestration in end-side models is clearly insufficient.
Currently, major automakers are all planning next-generation AI smart cockpit models, and deployment of next-generation cockpit platforms is accelerating as well. Chip suppliers and major system vendors therefore need to bring high-performance products and matching solutions to market quickly.
Among them, domestic cockpit chip supplier Xinchi Technology has taken the lead with a complete product line covering cockpit-driving-parking integration and AI cockpit scenarios.
According to available information, the X9 series of cockpit processors integrates high-performance CPUs, GPUs, AI accelerators, and video processors, covering cockpit processor needs from entry-level to flagship application scenarios, and the company has been actively leading the development of AI cockpit products.
As early as 2023, it released the X9SP, its first-generation AI cockpit product, enabling local deployment and acceleration of AI algorithms.
At its spring launch event in April 2024, it introduced the new-generation X9CC, targeting the central computing + zonal control electronic/electrical architecture. The chip has a built-in high-performance AI unit and supports the deployment of AI applications such as OMS/DMS and voice recognition. It supports hybrid local + cloud deployment of large models, meeting the needs of local multimodal perception, and can simultaneously support high-level intelligent driving (driving + parking) algorithms.
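The "local + cloud hybrid deployment" pattern described above can be sketched in a few lines: latency-sensitive vehicle-control intents are served by the on-device (end-side) model, while open-ended queries fall back to a cloud model when connectivity allows. This is a minimal, hypothetical sketch; all names (`LOCAL_INTENTS`, `route`, the stub models) are illustrative and not part of any real cockpit SDK.

```python
# Hypothetical sketch of end-side + cloud hybrid large-model routing.
# The two model functions are stubs standing in for a quantized on-device
# model and a cloud LLM call, respectively.
from dataclasses import dataclass
from typing import Optional

# Intents safe and fast enough to handle fully offline (illustrative set).
LOCAL_INTENTS = {"open_window", "set_temperature", "play_music"}

@dataclass
class Reply:
    text: str
    served_by: str  # "local" or "cloud"

def local_model(intent: str) -> Reply:
    # Stand-in for an end-side model: low latency, limited vocabulary.
    return Reply(text=f"OK, executing {intent}.", served_by="local")

def cloud_model(query: str) -> Reply:
    # Stand-in for a cloud LLM: richer answers, needs connectivity.
    return Reply(text=f"[cloud answer to: {query}]", served_by="cloud")

def route(query: str, intent: Optional[str], online: bool) -> Reply:
    """Serve vehicle-control intents locally; defer open questions to the cloud."""
    if intent in LOCAL_INTENTS:
        return local_model(intent)
    if online:
        return cloud_model(query)
    # Degraded offline mode: stay responsive without the cloud answer.
    return Reply(text="I can't reach the network right now.", served_by="local")
```

For example, `route("open the window", "open_window", online=True)` is served locally, while `route("what's the weather like", None, online=True)` goes to the cloud; with `online=False` the router degrades gracefully instead of hanging on a network call.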
Next, Xinchi will launch the X10 AI cockpit processor, which will support the Transformer architecture more efficiently and allow pure end-side deployment of large models, giving users a safer, more efficient, and more personalized AI cockpit experience.
Innovative AI cockpit applications are accelerating
Since 2024, many automakers have used the in-vehicle voice assistant as the breakthrough point for bringing large models into the smart cockpit.
Through deep learning and natural language processing, large models can better understand and parse user voice commands. With richer knowledge reserves and stronger semantic understanding, voice interaction becomes more human, smarter, and more engaging, and a variety of innovative scenario applications can be built on these capabilities.
In addition, the multimodal nature of large-model technology opens up visual, auditory, and tactile applications, driving the interaction model from single-mode voice or visual interaction toward a fully multimodal stage.
For example, after the official launch of NIO's NOMI GPT end-cloud multimodal large model, NOMI gained stronger understanding and generation capabilities: it can set up AI scenes with a single sentence, hold anthropomorphic "fun chats", and answer all manner of offbeat questions based on its large-model encyclopedia.
Based on Ideal Auto's fully self-developed multimodal cognitive model Mind GPT, the cockpit AI voice assistant "Ideal Classmate" has evolved into a car assistant, travel assistant, entertainment assistant, and real-time connected encyclopedia.
It also enables multimodal interaction combining voice with visual perception, including executing vehicle-control commands via hand gestures and remotely controlling the rear screen through gesture interaction.
Li Juan, senior director of Ideal Auto's Intelligent Space division, pointed out that the next stage will build on the large model's perception-understanding-decision-execution loop, using specialized models to proactively recommend scenarios to users in real time. "This will be a higher level of human-computer interaction."
Zhao Henry, chief designer of BAIC's intelligent cockpit, said BAIC is also rapidly deploying large-model-based cockpit applications, including a chat mode built on natural voice interaction, a personality mode that pairs stylized replies with a distinctive speaker timbre, an in-car task master, and proactively generated scenario modes. It can also use deep learning and large language models to drive an intelligent scene-scheduling engine, creating cockpit interaction that is "dialogue-first, touch-assisted".
At the same time, the rollout of large models will accelerate the upgrade of smart cockpit hardware and, especially, software capabilities, and smart cockpits will evolve rapidly toward proactive human-computer interaction.