From L3 to L5: the path to advanced autonomous driving


Autonomous driving is, without doubt, the great promise the automotive industry has painted for its users. Being able to take your hands off the steering wheel and turn driving, a tedious and risky chore, into something to enjoy is simply too tempting.

But the reality is that after years of hype, true autonomous driving still seems far away. Talk to an industry insider and they will list difficulties ranging from technology and safety to business models, laws, and regulations to explain why the road to autonomous driving is long and arduous. Yet however many reasons there are, the trend is set. Facing this ultimate goal that everyone is striving toward, the industry has to press ahead where the conditions exist and create the conditions where they do not. How to travel this road, and how to travel it more smoothly, requires a sensible plan.

 

 

In fact, from a technical point of view, realizing autonomous driving has always faced a scalability problem, because the ultimate goal is reached in stages and levels rather than in a single leap. How to build a scalable technical architecture that meets the computing-power and safety requirements of every level of autonomous driving is therefore a central question throughout this long process. Such a scalable architecture also helps vendors form differentiated high-, mid-, and low-end products along the way, serve different user markets, and monetize their technology investment in good time.

 

Levels of autonomous driving


To answer this question properly, we have to go back to the classification of autonomous driving. SAE International (originally the Society of Automotive Engineers) defines six levels of driving automation, L0 through L5; setting aside L0, which means no automation, L1 to L5 correspond to driver assistance, partial automation, conditional automation, high automation, and full automation.

 


Figure: Description of the levels of autonomous driving

 

It is not difficult to see from the figure that the levels are distinguished by who holds control of the driving task: the lower the automation level, the stronger the driver's control over the vehicle. At L1, features such as adaptive cruise control, automatic emergency braking, and lane keeping each automate only one dimension of control at a time, either longitudinal (speeding up and slowing down) or lateral (steering), never both; the driver retains absolute control and must personally observe the environment and make every judgment and decision. At L5, by contrast, the vehicle is fully automated and needs no driver intervention; in most cases the driver does not even have a "say" in how the vehicle drives.
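For illustration only, this split of responsibilities by level could be captured in a small data structure. The following is a minimal Python sketch, not taken from any standard's text; the field names and boolean simplifications are our own:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    """Simplified view of one SAE J3016 automation level."""
    level: int
    name: str
    system_steers_and_accelerates: bool  # system handles lateral AND longitudinal control
    driver_must_supervise: bool          # driver monitors the environment at all times
    driver_is_fallback: bool             # driver must take over when requested

# Illustrative table: who holds control at each level.
SAE_LEVELS = [
    SaeLevel(1, "Driver assistance",      False, True,  True),
    SaeLevel(2, "Partial automation",     True,  True,  True),
    SaeLevel(3, "Conditional automation", True,  False, True),
    SaeLevel(4, "High automation",        True,  False, False),
    SaeLevel(5, "Full automation",        True,  False, False),
]
```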

 

This classification rule also reveals a very high "step" between L3 and L4. From L1 to L3, the system is still a driver-oriented product whose essence remains a human controlling the car; at L4 and L5, the car is essentially a robot, operating autonomously and in most cases cut off from the human altogether. Put another way, from L1 to L3, no matter how mysterious the advertising slogans, the product is still ADAS; only at L4 and L5 does it truly enter the realm of autonomous driving.

 

The span from L1 to L5 makes the scalability of the technical architecture mentioned above even more challenging.

 

Scalable technical architecture


To solve this problem, we first need to simplify it on the basis of a thorough understanding. A fairly mainstream view in the industry today is that the decision-making ("think") part of autonomous driving can be divided into two domains: one is perception and modeling, and the other is safe computing.

 

Specifically, perception and modeling takes the data from the vehicle's sensors and performs feature extraction, classification, recognition, and tracking to determine what each target is, its XYZ position, and the speed and direction of its motion, and outputs the result as an occupancy grid map. That output then serves as the input to the safe computing domain. Safe computing fuses the target grid map with environmental information, plans the best route, and dynamically predicts how the scene may change over the next few seconds; the result is output as two control signals, one for acceleration and braking and one for steering. Repeating this computation cycle produces coherent autonomous driving behavior.
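As a rough illustration of this division of labor, the control loop might look like the sketch below. This is a minimal Python sketch under our own assumptions; all names (OccupancyGrid, perceive_and_model, plan_and_predict, and so on) are invented for illustration and the domain logic is stubbed out:

```python
import time
from dataclasses import dataclass

@dataclass
class OccupancyGrid:
    """Targets on a grid: class label, XYZ position, velocity, heading."""
    targets: list

@dataclass
class ControlCommand:
    acceleration: float    # m/s^2, negative means braking
    steering_angle: float  # radians

def perceive_and_model(sensor_frames) -> OccupancyGrid:
    """Perception & modeling domain: feature extraction, classification,
    recognition, and tracking over camera/radar/lidar frames. Stubbed."""
    return OccupancyGrid(targets=[])

def plan_and_predict(grid: OccupancyGrid, env_map) -> ControlCommand:
    """Safe computing domain: fuse the grid with environment data, plan
    the best route, predict the next few seconds, emit two control
    signals. Stubbed."""
    return ControlCommand(acceleration=0.0, steering_angle=0.0)

def driving_loop(sensors, env_map, actuators, period_s=0.05):
    """Repeat the perceive -> plan -> actuate cycle; the repetition is
    what forms coherent autonomous driving behavior."""
    while True:
        grid = perceive_and_model(sensors.read())
        cmd = plan_and_predict(grid, env_map)
        actuators.apply(cmd)
        time.sleep(period_s)
```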

 

Because the two domains perform different functions, their specific technical requirements also differ, mainly in functional safety and computing efficiency.

 

For perception and modeling, the front-end input comes from multiple sensors, including cameras, millimeter-wave radar, and lidar; to cope with complex scenarios, at least two kinds of sensor are needed for comprehensive and accurate data acquisition. This sensor diversity and redundancy means that each single-sensor perception and modeling channel only needs to meet the ASIL-B functional safety requirement for the system as a whole to achieve ASIL-D, following the decomposition principle of ISO 26262. In terms of computation, fixed-point arithmetic can satisfy most perception and modeling data processing.
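The safety logic behind this redundancy can be illustrated with a simple plausibility cross-check between two independent channels. This is only a schematic sketch of the idea, with invented names, not a certified mechanism:

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float  # position in the vehicle frame, metres
    y: float

def cross_check(camera_targets, radar_targets, max_offset_m=1.5):
    """Plausibility check between two independently developed channels.
    Targets confirmed by both are trusted; disagreements are flagged so
    the safety domain can degrade gracefully. Cross-checks like this are
    part of how two ASIL-B channels can support a higher integrity claim
    for the composed system."""
    confirmed, disputed = [], []
    for cam in camera_targets:
        match = next(
            (r for r in radar_targets
             if abs(r.x - cam.x) < max_offset_m
             and abs(r.y - cam.y) < max_offset_m),
            None,
        )
        (confirmed if match else disputed).append(cam)
    return confirmed, disputed
```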

 

Safe computing is very different. After sensor fusion there is no longer any data diversity or redundancy, so the safe computing processor itself must meet the ASIL-D functional safety requirement. At the same time, its high computational complexity demands both fixed-point and floating-point arithmetic, with floating-point used mainly to accelerate vector and linear-algebra operations. And from a safety perspective, neural networks are a poor fit here because their decisions cannot be traced back and audited; deterministic algorithms must be used instead. Meeting these efficiency requirements calls for a computing architecture designed to match.
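To make "deterministic algorithm" concrete, one common style of planner scores a fixed set of candidate trajectories with explicit, traceable cost terms, so identical inputs always produce the identical plan. A minimal sketch under our own assumptions; the cost weights and names are illustrative:

```python
import math

def trajectory_cost(traj, obstacles, w_clearance=10.0, w_comfort=1.0):
    """Explicit, auditable cost terms: penalise low obstacle clearance
    and harsh motion. The vector math here is floating-point heavy,
    which is one reason the safe computing domain needs floating-point
    support alongside fixed-point."""
    min_clearance = min(
        (math.hypot(px - ox, py - oy)
         for (px, py) in traj for (ox, oy) in obstacles),
        default=float("inf"),
    )
    # Comfort term: squared displacement change between consecutive points.
    comfort = sum(
        (x2 - x1) ** 2 + (y2 - y1) ** 2
        for (x1, y1), (x2, y2) in zip(traj, traj[1:])
    )
    return w_clearance / max(min_clearance, 0.1) + w_comfort * comfort

def pick_trajectory(candidates, obstacles):
    """Deterministic selection: same inputs always yield the same plan,
    and every cost term can be logged and traced after the fact."""
    return min(candidates, key=lambda t: trajectory_cost(t, obstacles))
```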

 

Imagine using a single computing architecture to handle both perception and modeling and safe computing at once: it is clearly uneconomical, and it sacrifices flexibility. For example, to expand the number or type of sensors, you would have to replace the entire processor architecture. One idea for a scalable architecture, therefore, is to design a separate processor chip for each of the two domains, so that subsequent system expansion and upgrades become much easier.
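One way to see why this split eases expansion: if the perception domain programs against a common sensor interface, adding a sensor touches only that domain, while the safe computing side is untouched. A hypothetical sketch, with all class names invented:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Common interface the perception domain programs against. Adding a
    new sensor type means adding one subclass; the safe computing chip
    only ever sees the fused grid map and needs no change."""
    @abstractmethod
    def read_frame(self): ...

class Camera(Sensor):
    def read_frame(self):
        return {"kind": "image", "data": None}  # stub

class MillimeterWaveRadar(Sensor):
    def read_frame(self):
        return {"kind": "radar", "data": None}  # stub

class PerceptionDomain:
    def __init__(self, sensors):
        self.sensors = list(sensors)  # sensor count can grow freely

    def build_grid(self):
        frames = [s.read_frame() for s in self.sensors]
        # ...fuse frames into an occupancy grid map here (stubbed)...
        return frames
```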

 

In this way, one architecture can meet the technical requirements of every autonomous driving level from L1 to L5. Developers can make decisions with confidence, whether they are exploring future technology or building products for today's market. With that understanding and that technical foundation, the industry can walk the road to autonomous driving with a steadier stride.

