Smart cockpits will evolve into intelligent agents

Publisher: MindfulBeing | Latest update: 2024-08-19 | Source: Apollo智能驾驶

With the rapid development of AI, smart-cockpit interaction has evolved several times: from single touch interaction, to simple command-based interaction, to complex voice interaction. User engagement has risen accordingly, from fewer than 10 interactions per day to more than 60.


"But are more daily interactions necessarily better?" On August 9, at a high-level seminar on the development path of the smart-car industry ecosystem hosted by China EV100, Li Tao, general manager of Baidu Apollo's Intelligent Cockpit Business Unit, pointed out: "This may mean the in-car system does not really meet user needs. For example, if a user talks to the car 100 times in a day, it may mean the system is easy to use, or it may mean the system is too clumsy to understand or anticipate the user's needs, so the user can only repeat the request many times."



Li Tao also noted that car systems are increasingly being turned into tablets. "Put an iPad or Android tablet in the center console and install every application on it. Do users really need all of them?" Following the Pareto distribution (the 80/20 principle), he argued, 80% of the functions are rarely used. Too many applications not only increase users' cognitive burden and occupy scarce in-car computing resources, but also impose substantial costs on car manufacturers.



So in which direction should the smart cockpit develop? What will the strong semantic understanding, text generation, logical reasoning, and multimodal capabilities of large models bring to the smart cockpit? What has Baidu Apollo already built? Li Tao addressed these questions one by one in his speech.



Li Tao, General Manager of Baidu Apollo Intelligent Cabin Business Department


The following is a transcript of his speech:



We are in an era of deep integration between AI and automobiles. First, the current state of smart-vehicle development, in a few figures: among new energy vehicles, the installation rate of L2 driver assistance is about 40%, and the installation rate of smart cockpits has exceeded 60% and is expected to pass 70% this year. At the same time, consumer satisfaction with cockpit interaction has declined, which may indicate that the in-car system does not really meet user needs.



Cockpit control has evolved over the past few years. Initially it relied on precise commands: say A and you get A, say B and you get B, with no generalization. Later, preliminary generalized command interaction appeared. Average daily interactions used to be fewer than ten; on some models, average daily voice interactions now exceed 60. The industry has begun to ask whether this number will keep rising as cockpits evolve toward natural-language dialogue.



But are more daily interactions necessarily better? If a user talks to the car 100 times in a day, it may mean the system is easy to use, but it may also mean the system is too clumsy to understand or anticipate the user's needs, so the user has to repeat the request many times.



In addition, the trend toward "pad-ification" is serious: the common approach is to mount an iPad or Android tablet in the center console and install every application on it. Do users really need all of these programs? Too many applications increase users' cognitive burden, occupy scarce in-car computing resources, and impose substantial costs on car manufacturers. Usage follows the Pareto distribution, the 80/20 principle: 80% of the functions are in fact rarely used.
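The 80/20 claim can be illustrated with a small simulation. The sketch below is not real usage data: it draws a hypothetical launch log for 50 in-car apps from a Zipf-like power law (the exponent 1.3 is an assumption chosen for illustration) and measures what share of launches the top 20% of apps account for.

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical usage log for 50 in-car apps whose launch frequency follows
# a Zipf-like power law -- an illustration of the 80/20 claim, not real data.
apps = [f"app_{i}" for i in range(50)]
weights = [(rank + 1) ** -1.3 for rank in range(50)]  # assumed exponent
launches = random.choices(apps, weights=weights, k=10_000)

counts = Counter(launches)
ranked = [n for _, n in counts.most_common()]
top_20pct = ranked[: len(apps) // 5]        # the 10 most-launched apps
share = sum(top_20pct) / len(launches)

print(f"Top 20% of apps account for {share:.0%} of all launches")
```

Under this assumed distribution, a small head of apps dominates the log, mirroring the argument that most installed applications are rarely touched.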




Given this, we need to consider where the smart cockpit will ultimately go. From the perspective of large models, we believe it will become an intelligent agent: one that understands scene information, naturally understands user needs, generates scenario-based solutions, and completes execution. To that end, we launched the Apollo Super Cockpit product line, an agent that achieves full-sensory integration, global planning, and global execution.
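The "understand scene, understand intent, generate a plan, execute" loop described above can be sketched as a minimal agent. Everything here is a hypothetical illustration: the class names, intents, and action strings are invented for this sketch and are not Baidu Apollo APIs; a real system would use a large model for the understanding and planning steps rather than string matching.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    speed_kmh: float
    windows_open: bool
    utterance: str

@dataclass
class CockpitAgent:
    log: list = field(default_factory=list)

    def understand(self, scene: Scene) -> str:
        # Toy intent recognition; a large model would handle this step.
        if "cold" in scene.utterance:
            return "raise_temperature"
        if "music" in scene.utterance:
            return "play_music"
        return "unknown"

    def plan(self, intent: str, scene: Scene) -> list:
        # Generate a scenario-specific action sequence, not a fixed command.
        if intent == "raise_temperature":
            actions = ["set_hvac(+2C)"]
            if scene.windows_open:
                actions.insert(0, "close_windows()")  # context-aware step
            return actions
        if intent == "play_music":
            return ["open_media()", "play(favorites)"]
        return []

    def execute(self, actions: list) -> None:
        for a in actions:
            self.log.append(a)  # stand-in for dispatching vehicle functions

agent = CockpitAgent()
scene = Scene(speed_kmh=100, windows_open=True, utterance="I'm a bit cold")
agent.execute(agent.plan(agent.understand(scene), scene))
print(agent.log)  # the plan closes the windows before heating
```

The point of the sketch is the planning step: because the agent sees the whole scene, it inserts a window-closing action the user never asked for, rather than mapping one command to one fixed function.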



Users want the vehicle to understand their needs, remember their habits, and provide the most suitable in-car environment or application configuration for the current scenario. This is exactly what large models are best at: understanding and memory, logic and generation. On top of this we have specialized model-analysis architectures, within which the smart cockpit can automatically understand scenarios and generate the corresponding models. For vehicle manufacturers, this greatly reduces the cost of adapting to individual scenarios and ultimately enables full-domain execution. Underlying it all is a foundation model that can genuinely dispatch the capabilities of the whole vehicle, understand user needs, and execute proactively to give users a better experience.




We have made breakthroughs in several scenarios, for example by combining the in-car DMS and OMS with the cockpit's voice stack, and by collecting external signals for scenarios such as speed-bump prediction, music playback at highway speeds, and multi-person detection. At highway speed with the windows open, for instance, the human voice is drowned in noise, making reliable voice detection difficult. With audio-visual speech enhancement, the voice-detection success rate in this scenario can be raised to 99%, better than an ordinary vehicle with the windows closed.
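Why audio alone breaks down in this scenario can be shown with a toy energy-based voice activity detector on synthetic signals. This is a deliberately simplified sketch, not Apollo's method: the sample rate, amplitudes, and threshold are all assumptions, a sine tone stands in for speech, and Gaussian noise stands in for cabin noise. When the noise is loud enough, every frame exceeds the energy threshold, so speech can no longer be separated from noise by audio energy alone; this is the gap that a second modality (e.g. cabin-camera lip motion) is meant to close.

```python
import math
import random

random.seed(0)
SR = 8000      # sample rate in Hz -- an assumption for this toy example
FRAME = 160    # 20 ms frames

def tone(freq, seconds, amp):
    # Stand-in for a voice segment: a pure sine wave.
    return [amp * math.sin(2 * math.pi * freq * t / SR)
            for t in range(int(seconds * SR))]

def noise(seconds, amp):
    # Stand-in for cabin noise: white Gaussian noise.
    return [random.gauss(0, amp) for _ in range(int(seconds * SR))]

def mix(a, b):
    return [x + y for x, y in zip(a, b)]

def energy_vad(signal, threshold=0.01):
    # Flag a frame as "speech" when its mean energy exceeds the threshold.
    flags = []
    for i in range(0, len(signal) - FRAME + 1, FRAME):
        e = sum(s * s for s in signal[i:i + FRAME]) / FRAME
        flags.append(e > threshold)
    return flags

def demo(noise_amp):
    # First half: cabin noise only.  Second half: speech mixed with noise.
    sig = noise(0.5, noise_amp) + mix(tone(200, 0.5, 0.2),
                                      noise(0.5, noise_amp))
    flags = energy_vad(sig)
    half = len(flags) // 2
    hit_rate = sum(flags[half:]) / (len(flags) - half)
    false_alarms = sum(flags[:half]) / half
    return hit_rate, false_alarms

quiet_hits, quiet_fa = demo(noise_amp=0.05)   # windows closed
loud_hits, loud_fa = demo(noise_amp=0.30)     # highway speed, windows open
print(f"quiet: hits={quiet_hits:.2f} false_alarms={quiet_fa:.2f}")
print(f"loud:  hits={loud_hits:.2f} false_alarms={loud_fa:.2f}")
```

In the quiet case the detector separates speech from silence cleanly; in the loud case the noise-only frames also trip the threshold, so the audio-only detector flags everything and carries no information about where the speech actually is.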




The above are some of our thoughts on the generative cockpit. In the future, we will continue to move forward in this direction. Thank you.

