Development Trend of Human-Machine Interaction in Intelligent Cockpit


1. Introduction

Cars have changed not only in power source, driving mode, and driving experience; the cockpit, too, has bid farewell to the traditional, purely mechanical and electronic space. Its level of intelligence has risen sharply, making it the "third space" of people's lives after the home and the office. Through technologies such as face and fingerprint recognition, voice and gesture interaction, and multi-screen linkage, today's automotive smart cockpit has significantly stronger capabilities in environmental perception and in information collection and processing, becoming a "smart assistant" for the driver. One significant sign that the smart cockpit has moved beyond simple electronics into the smart-assistant stage is that interaction between people and the cockpit has changed from passive to active. "Passive" and "active" here are defined from the cockpit's point of view: in the past, information exchange was initiated mainly by people, whereas now either side can start it. The level of human-machine interaction has become an important benchmark for grading smart cockpit products.


2. Background of Human-Computer Interaction Development

The history of computers and mobile phones reflects how interaction between machines and people has developed: from complex to simple and concise, and from abstract operations to natural interaction. The most important trend for future human-machine interaction is to move the machine from passive response to active interaction. Extending this trend, the ultimate goal of human-machine interaction is to anthropomorphize the machine, so that interaction between human and machine is as natural and smooth as communication between people. In other words, the history of human-machine interaction is the history of moving from humans adapting to machines toward machines adapting to humans. The smart cockpit has followed a similar path. With advances in electronic technology and rising expectations from car owners, there are more and more electronic signals and functions inside and outside the car, and interaction must let drivers spend less of their limited attention, reducing driving fatigue. As a result, in-car interaction has evolved through stages: physical knobs and buttons, then digital touch screens, then voice control, and finally natural interaction.


2.1 Natural interaction is the ideal model for the next generation of human-computer interaction


2.1.1 What is natural interaction?

In short, it is communication through actions, eye tracking, speech, and similar channels. The perception model here resembles human "perception": a mixture of multiple senses, corresponding to the five human senses of vision, hearing, touch, smell, and taste. The corresponding information media are the various sensors, carrying sound, video, text, infrared, pressure, and radar signals. A smart car is essentially a passenger-carrying robot whose two most critical capabilities are controlling itself and interacting with people; lacking either, it cannot work efficiently for people. An intelligent human-machine interaction system is therefore essential. A sketch of how such a system might organize its sensing channels follows.
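To make the sense-to-sensor mapping concrete, here is a minimal Python sketch of a modality-based event router. All names (`Modality`, `SensorEvent`, `InteractionManager`, the sensor IDs) are hypothetical illustrations under the article's framing, not parts of any production cockpit stack.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, List
import time

class Modality(Enum):
    """Human-like perception channels a cockpit can emulate."""
    VISION = auto()    # cabin cameras, infrared
    HEARING = auto()   # microphone array
    TOUCH = auto()     # pressure sensors, touch screen
    # smell and taste have no common automotive sensor equivalent yet

@dataclass
class SensorEvent:
    modality: Modality
    source: str    # e.g. "cabin_camera_1" (illustrative ID)
    payload: str   # simplified; real systems carry frames/tensors
    timestamp: float = field(default_factory=time.time)

class InteractionManager:
    """Routes incoming sensor events to per-modality handlers."""
    def __init__(self) -> None:
        self._handlers: Dict[Modality, List[Callable[[SensorEvent], None]]] = {}

    def on(self, modality: Modality,
           handler: Callable[[SensorEvent], None]) -> None:
        self._handlers.setdefault(modality, []).append(handler)

    def dispatch(self, event: SensorEvent) -> None:
        for handler in self._handlers.get(event.modality, []):
            handler(event)

# Demo: a voice command arrives on the hearing channel
mgr = InteractionManager()
mgr.on(Modality.HEARING, lambda e: print(f"voice from {e.source}: {e.payload}"))
mgr.dispatch(SensorEvent(Modality.HEARING, "mic_array", "open the sunroof"))
```

The point of the structure is that each perception channel can gain or lose sensors independently while the interaction logic stays unchanged.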


2.1.2 Ways to implement natural interaction

More and more sensors are being integrated into the cockpit, improving in form-factor diversity, data richness, and accuracy. On the one hand this makes computing-power demand in the cockpit leap upward; on the other, it provides better perceptual support. The trend makes richer cockpit scenario innovation and a better interactive experience possible.

Among these capabilities, visual processing is the key to cockpit human-machine interaction, and fusion is what really matters. Consider speech recognition under noisy conditions: microphones alone are not enough. A person can selectively listen to someone speaking not only with the ears but also with the eyes. By visually locating the sound source and reading lips, a system can obtain better results than speech recognition alone. If the sensors are a person's five senses, then the computing platform is the brain of the interactive system. AI algorithms that combine vision and voice can, through multiple cognitive channels, identify signals such as faces, actions, postures, and speech. This enables more intelligent human-machine interaction, including eye tracking, speech recognition, lip-reading linkage, and driver fatigue detection. A minimal fusion sketch follows this section.

Cockpit interaction is usually implemented with edge computing rather than cloud computing, for three reasons: safety, real-time performance, and privacy. Cloud computing relies on the network, and for a smart car a wireless connection cannot be guaranteed to be reliable; data-transmission delays are uncontrollable and cannot guarantee smooth interaction. To preserve the user experience and the safety domain of autonomous operation, the solution lies in edge computing. Personal information security is the remaining concern: the private space of the cabin demands particularly strong protection. Today's personalized speech recognition is mainly implemented in the cloud, and private biometric information such as voiceprints can readily expose a person's identity. By running edge AI on the vehicle side, private biometric data such as images and sounds can be converted into semantic information before being uploaded to the cloud, effectively protecting the occupants' personal information.
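The following is a minimal Python sketch of the two ideas above: late fusion of acoustic and visual speech hypotheses, and edge-side conversion of the result into a semantic intent so that no biometric data (audio, voiceprint) leaves the vehicle. All functions, weights, and intents are illustrative assumptions, not a real recognizer.

```python
from dataclasses import dataclass

@dataclass
class AsrHypothesis:
    text: str
    confidence: float  # acoustic confidence from a speech recognizer

@dataclass
class LipReading:
    text: str
    confidence: float  # visual confidence from a lip-reading model

def fuse(asr: AsrHypothesis, lips: LipReading, w_audio: float = 0.7) -> str:
    """Late fusion: weight the acoustic and visual hypotheses.
    In a noisy cabin, lower w_audio so lip reading dominates."""
    if asr.text == lips.text:
        return asr.text
    score_audio = w_audio * asr.confidence
    score_visual = (1.0 - w_audio) * lips.confidence
    return asr.text if score_audio >= score_visual else lips.text

def to_semantic_intent(utterance: str) -> dict:
    """Edge-side step: map the recognized utterance to a semantic
    intent so only non-biometric data is uploaded to the cloud."""
    if "air conditioning" in utterance:
        return {"intent": "hvac.on"}
    return {"intent": "unknown", "text": utterance}

# Demo: noisy audio disagrees with the lip-reading result
asr = AsrHypothesis("turn on the radio", confidence=0.40)
lips = LipReading("turn on the air conditioning", confidence=0.80)
command = fuse(asr, lips, w_audio=0.5)  # noise detected, trust vision more
print(to_semantic_intent(command))      # uploaded payload carries no voiceprint
```

In a real system the fusion would operate on phoneme or feature level rather than final strings, but the division of labor is the same: fuse on the edge, upload only semantics.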


2.1.3 In the era of autonomous driving, interactive intelligence must match driving intelligence

For the foreseeable future, human-machine co-driving will remain the norm, and cockpit human-machine interaction will be the primary interface through which people exercise driving authority. At present the field of intelligent driving faces uneven evolution: the level of human-machine interaction lags behind the level of autonomous driving, causing frequent problems and hindering the development of autonomous driving. Human-machine cooperative driving is characterized by keeping the human in the operating loop, so the human-machine interaction function must stay consistent with the autonomous driving function; otherwise serious functional-safety risks arise, and most fatal autonomous driving incidents are related to this. Once the interaction interface can present the driving system's own perception results, the driver can better understand the capability boundary of the autonomous driving system, which greatly helps acceptance of higher-level autonomous driving functions. A simplified sketch of such a capability-boundary display appears below.

The current interaction mode of the smart cockpit is mainly an extension of the mobile-phone Android ecosystem, carried chiefly by the head-unit screen. Displays keep getting bigger, which in practice means low-priority functions occupy the space of high-priority functions, adding signal interference and affecting operational safety. Physical displays will still exist, but they are likely to be supplanted by natural human-machine interaction plus AR-HUD. If intelligent driving reaches L4 or above, people will be freed from boring, tiring driving and the car will become "people's third living space". The positions of the entertainment area and the safety function area (human-machine interaction and autonomous operation) in the future cabin will then change, with the safety area becoming the main control area.

There are three reasons. First, autonomous driving is the interaction between the car and the environment, while human-machine interaction is the interaction between people and the car; integrating the two coordinates people, car, and environment into a complete closed driving loop. Second, a dialogue interface combining natural conversation with AR-HUD is safer: communicating by speech or gesture avoids diverting the driver's gaze, improving driving safety. A large cockpit screen cannot achieve this, but an AR-HUD avoids the problem while displaying the autonomous driving perception signals. Third, natural conversation is an implicit, concise, and emotional interaction style: it occupies little of the precious physical space in the car yet can accompany the occupant anytime, anywhere. The integration of smart driving and smart cockpits is therefore the safer development path, ultimately converging into the car's central system.
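As a thought experiment for the capability-boundary idea, here is a minimal Python sketch of HUD logic that surfaces the autonomous system's perception confidence to the driver and requests a takeover when the boundary is reached. The thresholds and messages are illustrative assumptions, not values from any production ADAS.

```python
from enum import Enum

class HudMessage(Enum):
    NORMAL = "lane and objects tracked"
    DEGRADED = "perception degraded: reduce reliance on automation"
    TAKEOVER = "please take over control now"

def hud_state(perception_confidence: float,
              takeover_threshold: float = 0.4,
              warn_threshold: float = 0.7) -> HudMessage:
    """Expose the driving system's capability boundary to the driver.
    Thresholds are illustrative; real systems use validated ODD checks."""
    if perception_confidence < takeover_threshold:
        return HudMessage.TAKEOVER
    if perception_confidence < warn_threshold:
        return HudMessage.DEGRADED
    return HudMessage.NORMAL

# Demo: confidence falling as conditions worsen
for confidence in (0.9, 0.6, 0.3):
    print(f"{confidence:.1f} -> {hud_state(confidence).value}")
```

The design point is that the HUD shows the system's self-assessment continuously, so a takeover request never arrives as a surprise.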


2.2 Practical principles of human-computer interaction

2.2.1 Touch interaction

In the early days, the center-console screen displayed only radio information, and most of the console area was occupied by a large number of physical buttons; the car communicated with its occupants essentially through tactile interaction. As intelligent interaction developed, large central control screens appeared and physical buttons gradually decreased. The central screens have grown bigger and bigger, occupying ever more prominent positions, while the physical buttons on the console have dwindled to almost nothing. At this stage occupants can no longer interact with the cockpit mainly by touch and instead shift to visual interaction. But relying on vision alone to communicate with the smart cockpit is decidedly inconvenient: while driving, roughly 90% of a person's visual attention must be devoted to observing road conditions, so the driver cannot keep looking at a screen for long stretches to interact with the cockpit.
