Detailed explanation of smart cockpit architecture and function development process

Latest update time: 2021-11-06
The development of smart car cockpits has gone through four main stages: the electronic cockpit stage, the intelligent assistant stage, the human-machine co-driving stage, and the third living space. With the continued progress of AI algorithms and intelligent driving, smart cars have now entered the "human-machine co-driving" stage associated with L3 autonomous driving. At this stage the smart cockpit is characterized by breakthroughs in voice control and gesture control, and by the integration of in-vehicle software and hardware to achieve fine-grained perception of the vehicle. Across the whole usage cycle of getting in, driving, and getting out, the vehicle can proactively provide scenario-based services to drivers and passengers and make autonomous or semi-autonomous decisions. The core value of the AI cockpit lies in scenario-based proactive interaction and services, many of which are also referred to as SOA smart car services.



After various driver-assistance functions were introduced into the traditional cockpit, the driver was required to be proficient in the cockpit's interaction methods, understand the system's capabilities and usage limits, understand its input/output relationships, and on that basis decide how to operate the driver-assistance system. In the next-generation architecture, the smart cockpit will achieve technological breakthroughs in voice and gesture control and, by fusing multiple modes of perception, make perception more precise and proactive.




Intelligent cockpit infrastructure analysis


The entire smart cockpit architecture is a three-layer model. The bottom layer is the hardware layer, including cameras, the microphone array, embedded storage (eMMC), memory (DDR), and so on. The middle layer is the system software layer, including the driving-domain system drivers (Linux/QNX) and the cockpit-domain system drivers (Android/SPI). Above the middle layer sits the functional software layer, including perception software shared with intelligent driving, perception software specific to the cockpit domain, and the functional safety analysis layer. The top layer on the vehicle side is the service layer, which covers camera-based face recognition, automatic speech recognition, data services, the scene gateway, account authentication, and so on.
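To make the layering concrete, the sketch below lists the tiers described above as a simple Python structure. It is purely illustrative; the component names follow the text, not an actual vendor API.

```python
# Purely illustrative: the cockpit software stack from the text, expressed as a
# simple structure. Component names follow the article, not a real vendor API.
cockpit_stack = {
    "hardware layer": ["DMS camera", "OMS camera", "microphone array", "eMMC storage", "DDR memory"],
    "system software layer": ["Linux/QNX drivers (driving domain)", "Android drivers / SPI (cockpit domain)"],
    "functional software layer": ["shared perception", "cockpit-domain perception", "functional safety analysis"],
    "service layer": ["face recognition", "speech recognition", "data services", "scene gateway", "account authentication"],
}

for layer, components in cockpit_stack.items():
    print(f"{layer}: {', '.join(components)}")
```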



The cockpit AI interaction system is an independent system with its own iteration cadence and monthly OTA updates. The overall smart cockpit system architecture can follow the design model below for the corresponding information interaction. Unlike the intelligent driving domain, the cockpit domain is oriented toward the interaction level, i.e. intelligent connectivity, and therefore pays more attention to network communication, data flow, and related information.



From bottom to top, the overall intelligent cockpit system includes the following major control units and applications:


1. Vehicle hardware


The vehicle hardware consists mainly of the raw light-sensitive and sound-sensitive components: the DMS camera that captures the driver's face and hand information, the OMS camera that captures occupant information, and the microphones that capture the occupants' voice input, together with hardware units such as the car audio system and the displays.


2. Image or voice processing chip


The image and voice processing chip here runs applications such as face recognition, emotion recognition, gesture recognition, dangerous-behavior recognition, multi-modal speech, and related functional algorithms.


  • Perception software: including multi-modal perception algorithms, data instrumentation points for the data closed loop, plug-in management, and basic components (a minimal sketch of an instrumentation point follows this list)

  • Functional safety: functional safety analysis and construction at both the hardware and software levels of the chip processing

  • System management: including low-level OTA, configuration components, functional safety, diagnostics, life-cycle control, etc.

  • Public management: management of basic software such as logging, links, and configuration
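As an illustration of the data instrumentation points mentioned above, the sketch below shows how a perception module might log an event for the data closed loop; the function and field names are hypothetical.

```python
# Illustrative sketch of a data instrumentation point feeding the data closed loop;
# names and fields are hypothetical, not part of any real SDK.
import json
import time

def log_event(event_type: str, payload: dict, sink: str = "events.jsonl"):
    """Append a timestamped perception event so it can later be mined,
    screened, and annotated in the data closed loop."""
    record = {"ts": time.time(), "type": event_type, "payload": payload}
    with open(sink, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the perception module reports a low-confidence gesture detection,
# a typical trigger for uploading the clip for re-annotation.
log_event("gesture_low_confidence", {"confidence": 0.41, "camera": "DMS"})
```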


3. System and middleware platform


Similar to intelligent driving, the smart cockpit needs hardware adaptation and driver control at the system platform level, including units such as secure digital I/O, power distribution, codecs, audio output, display, and CAN communication.


4. In-vehicle head unit services


As the core service of the smart cockpit, most capability control relies on the in-vehicle head unit services. These services include system control, body control, data services, OTA, chassis status, body data, and so on.


Specifically, the following functions are implemented:


  • AI chip management: system-level management and coordination on top of the AI chip, including process monitoring, OTA, and HBService

  • Perception data SDK: receives sensor perception results, integrates them with the AI chip algorithms, and provides pack recording of data packets (a sketch of this callback-and-recorder pattern follows this list)

  • Control SDK: provides software life-cycle management, perception algorithm control switches, recording switches, and other functions

  • Application framework: completes related business processes, such as scene definition and multi-modal semantic analysis

  • Business layer: on top of the application framework, completes concrete business flows such as FaceID registration, working-mode definition, OTA, and the data closed loop

  • Data services: data management, data processing, data mining, and data feedback; data metric evaluation and diagnosis management; model training, model testing, and model management; data annotation and annotation management, among other services.
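The following sketch illustrates how the perception data SDK described above might deliver results to the application layer while recording them as a pack; the interface shown is a hypothetical stand-in, not the actual SDK.

```python
# Illustrative sketch of a perception-data callback plus pack recorder;
# the interface is hypothetical.
import json

class PackRecorder:
    """Buffers perception results and flushes them as a 'pack' for replay and debugging."""
    def __init__(self, path: str = "perception.pack"):
        self.path, self.buffer = path, []

    def record(self, result: dict):
        self.buffer.append(result)

    def flush(self):
        with open(self.path, "w") as f:
            json.dump(self.buffer, f)

recorder = PackRecorder()

def on_perception_result(result: dict):
    """Callback registered with the perception SDK: forward results to the
    application layer and keep a copy for the data closed loop."""
    recorder.record(result)
    # ... hand the result to scene understanding / business logic here ...

# Example payload the DMS perception stack might emit.
on_perception_result({"source": "DMS", "event": "eyes_closed", "duration_s": 1.2})
recorder.flush()
```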


5. Decision-making center


The decision-making center builds a scene SDK on top of the perception SDK to construct customized scenes and image/voice perception capabilities.


Multi-modal cockpit interaction technology generally combines voice, gesture, and gaze into an intelligent human-computer interaction system; here we refer to the image and speech perception capabilities collectively as the multi-modal interaction application framework. The processing flow is to define the vehicle-body database and the in-cabin perception database, build the user interaction behavior database, and develop a cloud-side scene recommendation and matching SDK, which is later used for full-scenario joint-debugging service recommendations. In addition, collecting user behavior data in typical scenarios and feeding real user behavior into a personalized configuration engine drives the on-device scene SDK, which ultimately delivers routine service recommendations such as vehicle control, music, and payment.
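As a minimal illustration of scene recommendation matching, the sketch below fuses hypothetical in-cabin perception signals with vehicle state and returns the matching scenario services; the scene definitions and signal names are invented for illustration.

```python
# Illustrative rule-based scene matching: fuse in-cabin perception signals with
# vehicle state and recommend scenario services. Scene definitions are hypothetical.
SCENES = [
    {"name": "fatigue_break",
     "condition": lambda s: s.get("eyes_closed_ratio", 0) > 0.3 and s.get("speed_kph", 0) > 60,
     "action": "suggest rest area + lower cabin temperature"},
    {"name": "phone_call_quiet",
     "condition": lambda s: s.get("on_phone", False),
     "action": "lower media volume + mute notifications"},
]

def recommend(signals: dict):
    """Return the actions of every scene whose condition matches the fused signals."""
    return [scene["action"] for scene in SCENES if scene["condition"](signals)]

print(recommend({"eyes_closed_ratio": 0.4, "speed_kph": 90}))
# -> ['suggest rest area + lower cabin temperature']
```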


6. Interactive applications


The interactive application layer includes body control, system control, third-party app interaction control, voice broadcast, the user interface, and other aspects. It also places certain requirements on third-party applications such as maps, weather, and music.


7. Cloud services


Because a large amount of data must be transmitted and monitored remotely, and because the smart cockpit's compute-heavy algorithm modules rely heavily on cloud management and computing capabilities, smart cockpit cloud services include algorithm model training, online scene simulation, data security, OTA management, data warehousing, account services, and so on.


  • Scene gateway: integrates multiple services, such as driver-monitoring FaceID or speech recognition for scene understanding, and is used for behavior analysis and service push

  • Account authentication: authentication for service access; only authorized accounts are served (see the sketch after this list)

  • FaceID: driver face recognition

  • Data closed-loop management: data access platform, OTA upgrades, etc.
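Below is a minimal sketch of the account-authentication gate in front of the cloud scene gateway; the token scheme is simplified and hypothetical.

```python
# Illustrative sketch of account authentication gating access to cloud scene
# services; token handling here is simplified and hypothetical.
AUTHORIZED_TOKENS = {"token-abc": "driver_account_01"}

def call_scene_gateway(token: str, request: dict):
    """Reject unauthenticated calls before they reach scene understanding services."""
    account = AUTHORIZED_TOKENS.get(token)
    if account is None:
        raise PermissionError("account not authorized for cloud scene services")
    # ... forward `request` to faceID / speech-recognition scene services ...
    return {"account": account, "handled": request["service"]}

print(call_scene_gateway("token-abc", {"service": "faceID"}))
```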


Analysis of smart cockpit algorithm computing power


The rapid development of smart cockpits has driven up both the number of algorithms and the demand for computing power. By 2021, in-cabin cameras could cover the passengers, with IMS detection of up to 5 people and multi-modal voice separation for up to 5 people. By 2022 there will be roughly 150 algorithms driving more than 300 scenario applications; by 2023, once the developer ecosystem is established, third-party perception will grow substantially, and offline multi-modal voice interaction across the whole vehicle will require even more computing power. The in-vehicle intelligent AI system comprises in-vehicle AI scenarios, algorithms, development tools, the computing architecture, and in-vehicle AI chips; the overall smart cockpit AI system integrates vision, voice, and multi-modal perception. By 2023, cockpit AI algorithms are expected to reach the 10,000 level.


In terms of data, overall processing efficiency improves by about 50%; in terms of algorithms, efficient neural network structures balance computation against bandwidth. Computing power will grow from the single digits to the hundreds. Generally speaking, the smart AI cockpit is an independent system with independent iteration and monthly OTA updates.
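The article does not name a specific efficient network structure, but a depthwise separable convolution is one common example; the arithmetic below shows, purely as an illustration, how it reduces multiply-accumulate operations compared with a standard convolution.

```python
# Illustrative arithmetic only: depthwise separable convolution as an example of
# an "efficient network structure" that trades computation against bandwidth.
def conv_macs(h, w, c_in, c_out, k):
    return h * w * c_in * c_out * k * k            # standard convolution

def dw_separable_macs(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k               # per-channel spatial filtering
    pointwise = h * w * c_in * c_out               # 1x1 channel mixing
    return depthwise + pointwise

std = conv_macs(112, 112, 32, 64, 3)
sep = dw_separable_macs(112, 112, 32, 64, 3)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```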


The following shows the smart cockpit's capability allocation in terms of AI algorithm development. In summary, the smart cockpit algorithm modules fall into several main categories:


Driver facial recognition: head recognition, eye recognition, etc.;

Driver action recognition: gesture recognition, body action recognition, lip recognition, etc.;

Cockpit voice recognition: front-row dual sound-zone detection, voiceprint recognition, voice-based gender/age recognition, etc.;

Cockpit light recognition: cockpit ambient lighting, cockpit main background, cockpit interior, etc.


The figure below shows a relatively comprehensive intelligent cockpit algorithm library.



The demand trend for in-vehicle intelligent interaction computing power shows that sensor growth is mainly reflected in the increasing number and resolution of in-cabin sensors, which in turn substantially increases the demand for computing power. In addition, the smart cockpit's microphone configuration has evolved from centralized dual microphones or distributed 4-microphone arrays to distributed 6-8 microphone arrays.





Smart cockpit development process


The smart cockpit development process involves using new scenarios and the scene library to define scenarios; using HMI design tools for UI/UE design (including interface and interaction logic design); and using HMI framework tools to build the overall interaction design platform. Developers then carry out software and hardware development on top of that platform, while testers perform staged unit testing and integration testing throughout development. The test results are deployed and mounted on the vehicle, and the whole process is maintained end to end by the developers and designers.



Breaking the development process down further, the path from the data platform through development platform construction to software and hardware development involves the following:



1. Data development framework


The development data platform is a fully closed loop involving four major data processing stages and ultimately producing an effective model that can be used for training. Refining the closed loop of the data framework, it is easy to see that the whole process from data collection to data model is continuous and uninterrupted, constantly extracting the true value of the data in the corresponding scenario. Data collection is a data mining service fed by the data backend; qualified data selected through data screening is injected into the data annotation module and then used for training. During training, the model must be evaluated and synchronized in parallel to form the first version of the data model, which is then integrated into the software modules as an engineering deliverable. The final and most important step is to keep running regression tests and data classification during the functional evaluation stage and to update the in-vehicle software through online upgrade methods such as OTA.
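The sketch below maps the closed loop described above onto placeholder functions; every stage body is a stub, and the stage names follow the text rather than a real toolchain.

```python
# Illustrative sketch of the data closed loop: collect, screen, annotate, train,
# evaluate, then push the updated model via OTA. All bodies are placeholders.
def collect():          return ["raw_clip_1", "raw_clip_2"]        # data collection / mining
def screen(raw):        return [c for c in raw if c]               # keep qualified data only
def annotate(clips):    return [(c, "label") for c in clips]       # data annotation
def train(samples):     return {"model": "v1", "samples": len(samples)}
def evaluate(model):    return model["samples"] > 0                # regression test / evaluation
def release_ota(model): print(f"OTA release of model {model['model']}")

def closed_loop():
    model = train(annotate(screen(collect())))
    if evaluate(model):
        release_ota(model)

closed_loop()
```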



1) Data defects


In this process, it is first necessary to extract data defects (DATA-Failure) from mass-produced products; data defects include missing data, false data, data that fails verification, and so on.


2) Data collection


To address these data deficiencies, data must be re-collected (DATA-Collection). This includes collection through the data acquisition platform built during the development stage (for example, in-cabin and exterior dashcams, and the surround-view, front-view, or side-view cameras used during actual driving), as well as through data instrumentation points or shadow modes configured in mass-produced models.


3) Data annotation


The collected data is then annotated (DATA-Label). Note that the annotation methods for the smart cockpit and for smart driving differ: the cockpit mainly involves image and voice annotations, while ADAS mainly involves road environment semantics (label types such as lane lines, guardrails, and traffic cones).
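To make the contrast concrete, here are two toy annotation records, one cockpit-style (image/behavior) and one ADAS-style (road semantics); the field names are hypothetical.

```python
# Illustrative, hypothetical annotation records contrasting cockpit and ADAS labels.
cockpit_label = {
    "type": "cockpit_image",
    "frame": "dms_000123.jpg",
    "labels": [{"category": "phone_use", "bbox": [412, 188, 96, 120]}],
}

adas_label = {
    "type": "road_scene",
    "frame": "front_004711.jpg",
    "labels": [
        {"category": "lane_line", "polyline": [[0, 700], [640, 420]]},
        {"category": "cone", "bbox": [512, 400, 30, 60]},
    ],
}
```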


4) Data model


For smart cockpit algorithms, the most important task is training the machine vision algorithms. This involves forming increasingly accurate data templates and using the labeled data for data model training (DATA-Model).
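The toy sketch below stands in for DATA-Model training: a single logistic-regression gradient step on labeled samples, rather than the real vision/voice training pipeline.

```python
# Toy stand-in for data model training on labeled samples.
import numpy as np

def train_step(X, y, w, lr=0.1):
    """One gradient step of logistic regression on labeled cockpit data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probability
    w -= lr * X.T @ (p - y) / len(y)           # gradient of the cross-entropy loss
    return w

X = np.array([[0.9, 0.1], [0.2, 0.8]])         # toy features from annotated frames
y = np.array([1.0, 0.0])                       # toy labels (e.g., "phone use" yes/no)
w = np.zeros(2)
for _ in range(100):
    w = train_step(X, y, w)
print(w)
```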


2. Application development framework


The AI algorithm warehouse is mainly used to train the data models on the data platform effectively. Model training is organized into three progressive development modes: high, medium, and low.


Advanced mode: the training models in this AI algorithm warehouse are complex and require more AI computing power, covering weight detection, key-point detection, image semantic segmentation, skeleton extraction, and so on;


Low-level mode: the algorithm warehouse contains only standardized models, such as recognition of standard items like seat belts and seats. This type of recognition is a standardized process that may not even involve floating-point operations, using integer arithmetic throughout, so the algorithms consume little computing power and run efficiently;


Intermediate mode: the algorithm warehouse is of average complexity but covers many categories; multi-model combinations are embedded for classification, enabling recognition of basic driver behaviors such as smoking and making phone calls. Note that this mode places higher demands on the development team's capability building.


3. Application integration framework


The application integration framework platform uses AI application development middleware to integrate the model framework and build the communication and underlying components. The development and integration process includes model conversion (i.e. floating point to fixed point) and compilation to generate a standardized model; loading the model and its configuration (the configuration can be placed in a fixed location); defining inputs and outputs; writing the process code (including the processing logic and the receiving function framework); defining message types (with automatic serialization and deserialization); and releasing the software. The compiled .so file can then be loaded into the perception pipeline.
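As an illustration of the "floating point to fixed point" conversion step, the sketch below performs symmetric int8 quantization of a weight tensor; real toolchains do this per layer with calibration data, so this only conveys the idea.

```python
# Illustrative float-to-fixed-point conversion: symmetric int8 quantization of weights.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 values plus a scale factor (fixed-point representation)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max reconstruction error:", np.abs(dequantize(q, s) - w).max())
```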



The other parts above can be standardized; developers only need to focus on the process-code part.




Summary


This article has described the development architecture and development process of the new smart cockpit in detail. Cockpit architecture technology will not remain limited to the cockpit domain; it also applies to the intelligent driving domain. Currently, the cockpit domain controller usually participates in body-domain control, such as controlling the air conditioning, doors, and windows. The next generation of smart cockpit products will be more involved in controlling the power and chassis domains, for example operating the vehicle's turn signals or applying the electronic parking brake by voice; of course, this kind of operation has higher safety requirements. As autonomous driving systems continue to integrate the power and chassis domains, and as service-oriented architecture (SOA) becomes widespread in vehicles, complex driving behaviors may be abstracted into individual driving services. By raising its functional safety level, the smart cockpit domain controller can directly call the driving services of the autonomous driving domain to control the vehicle, creating a new form of human-machine co-driving.


After safety upgrades to the cockpit domain's system architecture, software, and hardware, cockpit development will make up for the last shortcoming, "driving control," and move toward an "all-round intelligent cockpit." As the technology matures, this may further consolidate the hardware architecture, accelerate the fusion of the driving domain and the cockpit domain, and eventually lead to an on-board central computer.



