Autonomous driving is, without question, the grand promise the automotive industry has painted for its users. Being able to take your hands off the steering wheel and turn driving, a tedious and risky chore, into a form of enjoyment is simply too tempting.
The reality, however, is that after years of hype we still seem quite far from true autonomous driving. Ask an industry insider and they may list a series of difficulties, from technology and safety to business models, laws, and regulations, to explain why the road to autonomous driving is long and arduous. Yet however many reasons there are, the trend is clear. Facing this ultimate goal that everyone is striving toward, we must press ahead where conditions allow, and create the conditions where they do not. How to travel this road, and how to travel it more smoothly, requires a sound plan.
From a technical point of view, realizing autonomous driving has always faced a scalability problem, because the ultimate goal is reached in stages and levels rather than in one step. How to build a scalable technical architecture that meets the computing-power, safety, and other requirements of every level of autonomous driving therefore becomes a crucial proposition over this long process. Moreover, such a scalable architecture also helps form differentiated high-end, mid-range, and entry-level products along the way, adapting to the needs of different user markets and monetizing technology investment in a timely fashion.
Levels of autonomous driving
To answer this question properly, we have to go back to the classification of autonomous driving. According to the definition given by SAE International (formerly the Society of Automotive Engineers), driving automation is divided into six levels, L0 through L5. Setting aside L0 (no automation), the levels L1 to L5 correspond to driver assistance, partial automation, conditional automation, high automation, and full automation.
Description of the levels of autonomous driving
It is not difficult to see from the figure that the levels are distinguished by who holds driving control. The lower the level, the stronger the driver's control over the vehicle. At L1, for example, features such as adaptive cruise control, automatic emergency braking, and lane keeping automate only a single control dimension at a time, either longitudinal (acceleration and braking) or lateral (steering), never both. The driver retains absolute control of the vehicle and must still observe the environment and make the correct judgments and decisions in person. At L5, by contrast, the vehicle is fully automated with no need for driver intervention; in most cases the driver does not even have a "say" in how the vehicle is driven.
This classification rule also reveals a very high "step" between L3 and L4. From L1 to L3, an autonomous driving system is still a driver-oriented product whose core remains a person controlling the car; at L4 and L5, the car is essentially a robot, operating autonomously and, in most cases, decoupled from the human. Put another way, from L1 to L3, however lofty the marketing slogans, the product is still ADAS. Only at L4 and L5 does it truly enter the realm of autonomous driving.
This span from L1 to L5 makes the scalability of the technical architecture mentioned above all the more challenging.
Scalable technical architecture
To solve this problem, we first need to simplify it on the basis of a deep understanding. A fairly mainstream view in the industry today is that autonomous driving decision-making (THINK) can be divided into two parts, or domains: one is perception and modeling, and the other is safe computing.
Specifically, perception and modeling extracts features from the data coming off the vehicle's sensors, then classifies, identifies, and tracks targets, obtaining information such as what each target is, its XYZ coordinate position, and its speed and direction of motion, and outputs an occupancy grid map. The output of the perception and modeling domain then serves as the input to the safe computing domain. Safe computing fuses the target grid map with environmental information, plans the best route, and dynamically predicts possible changes over the next few seconds. The result is output as two control signals, one for acceleration and braking and one for steering. Repeating this computation cycle produces coherent autonomous driving behavior.
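The two-domain THINK cycle described above can be sketched in a few lines of code. This is a hypothetical minimal illustration, not a real stack: the grid dimensions, the braking rule, and the steering convention are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float    # longitudinal position (m) in the vehicle frame
    y: float    # lateral position (m)
    vx: float   # velocity components (m/s)
    vy: float

GRID_SIZE = 20   # cells per side (assumed)
CELL_M = 1.0     # metres per cell (assumed)

def perceive(targets):
    """Perception & modeling domain: rasterize tracked targets into an occupancy grid."""
    grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    for t in targets:
        i = int(t.x / CELL_M)
        j = int(t.y / CELL_M) + GRID_SIZE // 2   # centre the lateral axis
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i][j] = 1
    return grid

def safe_compute(grid, speed):
    """Safe computing domain: a deterministic rule that brakes and nudges
    the steering if the cell just ahead is occupied, otherwise holds course."""
    ahead = grid[2][GRID_SIZE // 2]
    accel = -3.0 if ahead else 0.0   # m/s^2
    steer = 0.1 if ahead else 0.0    # rad (sign convention assumed)
    return accel, steer

# One iteration of the repeated THINK cycle: an obstacle 2 m ahead
targets = [Target(x=2.0, y=0.0, vx=-1.0, vy=0.0)]
accel, steer = safe_compute(perceive(targets), speed=10.0)
```

Note how the occupancy grid is the only data passed between the two functions, mirroring the clean domain boundary the article describes.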
Because the perception and modeling domain and the safe computing domain perform different functions, their specific technical requirements also differ, mainly in functional safety and computing efficiency.
For perception and modeling, the front-end input comes from multiple sensors, including cameras, millimeter-wave radars, and lidars; to cover complex application scenarios, at least two sensor types are needed for comprehensive and accurate data acquisition. This sensor diversity and redundancy means that each single-sensor perception and modeling path only needs to meet the ASIL-B functional safety requirement for the system as a whole to achieve ASIL-D (the principle of ASIL decomposition). In terms of computation, fixed-point arithmetic can satisfy most perception and modeling data processing.
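The redundancy argument rests on cross-checking diverse channels: a fault in one ASIL-B path is caught by disagreement with the other. A minimal sketch of that plausibility check, with an assumed tolerance and made-up distance readings, might look like this:

```python
# Hypothetical cross-check between two independent sensor channels
# (e.g. camera vs. radar range to the same target). Agreement within a
# tolerance yields a fused value; disagreement flags a fault so the
# system can degrade safely. Tolerance and values are assumptions.

def cross_check(camera_dist, radar_dist, tol=0.5):
    """Return (fused_distance, valid). Fused only when channels agree."""
    if abs(camera_dist - radar_dist) <= tol:
        return (camera_dist + radar_dist) / 2, True
    return None, False   # plausibility fault: do not trust either channel

fused, ok = cross_check(24.8, 25.1)    # channels agree
_, fault_ok = cross_check(24.8, 40.0)  # e.g. a radar ghost target
```

Each channel alone only has to be trustworthy to ASIL-B; it is the independent comparison that lifts the combined claim toward ASIL-D.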
Safe computing is very different. After sensor fusion there is no longer any data diversity or redundancy, so the safe computing processor itself must meet the ASIL-D functional safety requirement. At the same time, because of the high computational complexity, fixed-point and floating-point operations must both be used, with floating point mainly accelerating vector and linear-algebra workloads. From a safety perspective, neural networks fare poorly here because their decisions cannot be traced back and verified; deterministic algorithms must be used instead. These computational efficiency requirements call for a computing architecture matched to them.
Imagine using a single computing architecture to handle both perception and modeling and safe computing at the same time: it is clearly uneconomical and sacrifices flexibility. For example, expanding the number or type of sensors would force a replacement of the entire processor design. One approach to a scalable architecture, therefore, is to design a separate processor chip for each of the two domains, making subsequent system expansion and upgrades far easier.
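In software terms, the decoupling argument amounts to hiding each domain behind a small interface, so that adding or swapping sensors touches only the perception side. A hypothetical sketch (all class names and the stub grid are invented for illustration):

```python
from abc import ABC, abstractmethod

class PerceptionDomain(ABC):
    """Interface of the perception-and-modeling chip: emits a grid map."""
    @abstractmethod
    def grid_map(self) -> list: ...

class SafeComputingDomain(ABC):
    """Interface of the safe-computing chip: turns a grid into controls."""
    @abstractmethod
    def plan(self, grid: list) -> tuple: ...

class CameraRadarPerception(PerceptionDomain):
    def grid_map(self):
        return [[0, 1], [0, 0]]   # stub occupancy grid for the example

class DeterministicPlanner(SafeComputingDomain):
    def plan(self, grid):
        occupied = any(any(row) for row in grid)
        return (-2.0, 0.0) if occupied else (0.0, 0.0)   # (accel, steer)

# The planner never sees which sensors produced the grid, so a lidar
# upgrade would only replace the PerceptionDomain implementation.
accel, steer = DeterministicPlanner().plan(CameraRadarPerception().grid_map())
```

The same boundary that separates the two chips in hardware shows up here as the grid-map interface between the two classes.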
In this way, one architecture can meet the technical requirements of every autonomous driving level from L1 to L5. Developers can make decisions with confidence, whether exploring future technology or building products for today's market. With this understanding and technical support, progress along the road to autonomous driving will be all the more assured.