Simply put, sensor fusion means integrating the information detected by the various sensors on the vehicle through a series of algorithms, maximizing the vehicle's ability to perceive its environment and supplying environmental information to the downstream behavioral decision-making and control modules. A single perception sensor usually has inherent limitations due to its detection mechanism, field of view, and other factors. If each individual sensor gives the car a pair of eyes, then sensor fusion lets the car see more clearly and say goodbye to "myopia".
Sensor fusion can be divided into compensatory fusion, redundant fusion, and collaborative fusion according to its function.
1. Compensatory fusion means that each sensor observes the same environment, and a fusion algorithm either selects the superior detection signals from each sensor or combines their detection ranges, yielding more accurate environmental information with wider coverage. Take the currently mainstream forward-looking millimeter-wave radar and camera fusion (RV Fusion) solution as an example: the radar measures range and radial velocity directly and therefore accurately but resolves lateral position coarsely, while the camera classifies targets and localizes them laterally with precision but estimates distance poorly, so the fused output keeps the stronger attribute from each sensor (a sketch of this selection follows this list).
2. Redundant fusion refers to each sensor detecting the same target, with the fusion algorithm integrating all of the detection information about that target, thereby improving confidence in the target's existence and reducing the impact of a single sensor's false detections on the overall system. Redundant fusion is widely used in safety-related functions, such as the popular automatic emergency braking (AEB) function. If a vehicle carries multiple sensors, it is generally required that at least two sensors detect a target simultaneously before the vehicle brakes for it automatically. This guarantees redundancy of the target's existence, reduces the chance that the target was misidentified by a single sensor, and thus lowers the probability of false braking (under current laws and regulations, such false braking is considered quite dangerous and should be avoided as much as possible). Taking the aforementioned radar-camera solution as an example: during millimeter-wave radar detection, interference from environmental noise and the limitations of the signal-processing algorithm cause targets to be misidentified from time to time; targets produced by such misidentification are generally called ghost targets. The camera's detection principle gives it a lower false-recognition rate, so in this system the safety function is generally activated only when the radar and the camera detect the same target simultaneously, reducing the false-braking rate (a minimal gating sketch follows this list).
3. Collaborative fusion refers to integrating the relatively simple, low-dimensional detection information from each sensor to extract deeper, higher-dimensional detection information. For example, the image captured by a camera usually loses the three-dimensional structure of the environment; if the point cloud detected by the radar is fused with the image pixels, an image with depth information (a depth map) can be constructed, and from this depth map the complete three-dimensional environmental information can be extracted. Beyond target perception, it can also provide higher-dimensional information such as the drivable area (free space). The currently popular front fusion belongs to this category, and many intelligent-driving companies are working in this direction and trying to bring it to mass-production projects (a projection sketch follows this list).
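As a concrete illustration of point 1, here is a minimal compensatory-fusion sketch in Python. All structures, field names, and the "who wins which attribute" policy are simplified assumptions for illustration, not any vendor's implementation: for an already-matched radar/camera target pair, the fused track takes range and velocity from the radar and the class label and lateral position from the camera.

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float        # direct, accurate range measurement
    velocity_mps: float   # accurate Doppler radial velocity
    lateral_m: float      # coarse lateral position

@dataclass
class CameraTarget:
    range_m: float        # coarse monocular range estimate
    lateral_m: float      # precise lateral position
    label: str            # object class from the vision detector

@dataclass
class FusedTarget:
    range_m: float
    velocity_mps: float
    lateral_m: float
    label: str

def fuse_compensatory(radar: RadarTarget, camera: CameraTarget) -> FusedTarget:
    """Keep the stronger signal from each sensor for a matched target pair."""
    return FusedTarget(
        range_m=radar.range_m,            # radar wins on longitudinal accuracy
        velocity_mps=radar.velocity_mps,  # radar wins on velocity
        lateral_m=camera.lateral_m,       # camera wins on lateral accuracy
        label=camera.label,               # camera wins on classification
    )
```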
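The redundancy gate described in point 2 can be expressed in a few lines. This is a schematic sketch, not a production AEB design; the confirmation threshold, confidence field, and sensor names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "radar" or "camera"
    confidence: float  # per-sensor detection confidence in [0, 1]

def aeb_may_brake(detections: list[Detection],
                  min_sensors: int = 2,
                  min_confidence: float = 0.5) -> bool:
    """Allow automatic braking only if the same target is confirmed by at
    least `min_sensors` distinct sensor types, which suppresses
    single-sensor ghost targets (e.g. radar clutter)."""
    confirming = {d.sensor for d in detections if d.confidence >= min_confidence}
    return len(confirming) >= min_sensors
```

With this gate, a lone radar ghost such as `[Detection("radar", 0.9)]` is rejected, while a target confirmed by both radar and camera passes, which is exactly the false-braking suppression the text describes.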
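The depth-map construction mentioned in point 3 amounts to projecting 3D points into the image with the camera intrinsics. A minimal sketch under common assumptions (pinhole camera model, points already expressed in the camera frame, nearest-point-wins per pixel; the intrinsic values at the bottom are placeholders, real ones come from calibration):

```python
import numpy as np

def sparse_depth_map(points_cam: np.ndarray, K: np.ndarray,
                     height: int, width: int) -> np.ndarray:
    """Project 3D points (N, 3) in the camera frame onto the image plane,
    keeping the nearest depth per pixel; unfilled pixels stay at 0."""
    depth = np.zeros((height, width), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0.1]  # keep points in front of the camera
    uv = (K @ pts.T).T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, z in zip(u[ok], v[ok], pts[ok, 2]):
        if depth[vi, ui] == 0 or z < depth[vi, ui]:
            depth[vi, ui] = z
    return depth

# Placeholder intrinsics (focal length and principal point).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
```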
Intelligent Fusion Perception
At present, Zhixing Technology is working on fusing visual and ultrasonic perception. Visual perception has clear advantages in judging the existence and category of obstacles: by relying on the closed-loop data link and continuously iterating on visual-perception performance, most common obstacles can be recognized reliably. Ultrasonic sensors perceive close-range obstacles stably and detect obstacles of any type indiscriminately, such as ground locks, flower stands, and limit rods, and are therefore widely used in environmental perception for parking functions. Fusing visual and ultrasonic obstacle detection is a form of compensatory fusion that detects obstacles near parking spaces more stably and robustly, safeguarding the parking function; this solution will also be applied directly in Zhixing Technology's parking function. In addition, the soon-to-launch iDC domain controller will carry four surround-view cameras, and Zhixing Technology will perform FOV-level fusion of their perception results to provide a 360° perception range with no blind spots (a sketch of this idea follows).
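FOV-level fusion of several surround-view cameras can be pictured as transforming each camera's detections into a common vehicle frame and merging duplicates where fields of view overlap. The sketch below illustrates only the idea; the extrinsics format, the 2D detection representation, and the merge radius are all hypothetical, and it says nothing about Zhixing Technology's actual pipeline:

```python
import math
from dataclasses import dataclass

@dataclass
class Extrinsics:
    x: float    # camera position in the vehicle frame (meters)
    y: float
    yaw: float  # camera heading in the vehicle frame (radians)

def to_vehicle_frame(det_xy: tuple[float, float], ext: Extrinsics) -> tuple[float, float]:
    """Rotate and translate a camera-frame (x, y) detection into the vehicle frame."""
    c, s = math.cos(ext.yaw), math.sin(ext.yaw)
    x, y = det_xy
    return (ext.x + c * x - s * y, ext.y + s * x + c * y)

def fuse_fov(per_camera_dets: dict[str, list[tuple[float, float]]],
             extrinsics: dict[str, Extrinsics],
             merge_radius: float = 0.5) -> list[tuple[float, float]]:
    """Merge detections from all cameras into one 360° target list;
    detections closer than merge_radius (overlapping FOVs) are averaged."""
    merged: list[tuple[float, float]] = []
    for cam, dets in per_camera_dets.items():
        for d in dets:
            p = to_vehicle_frame(d, extrinsics[cam])
            for i, q in enumerate(merged):
                if math.dist(p, q) < merge_radius:
                    merged[i] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
                    break
            else:
                merged.append(p)
    return merged
```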
Getting rid of "myopia" in the future
The high-speed automatic navigation function, very popular in recent years, requires the vehicle to have stable, indiscriminate 360° perception, so multi-sensor fusion at the FOV level, together with fusion processing in the areas where sensor FOVs overlap, is an essential part of any automatic navigation solution. Looking toward high-level autonomous driving, the perception results and fusion solutions described above show that today's mainstream perception treats traffic participants as obstacles (possibly along with static obstacles such as traffic lights and cones). The real traffic scene, however, is extremely complex, and traffic participants are only one part of it; a complete picture of the environment cannot be achieved by target-level perception fusion alone.
At present, many researchers are attempting pre-fusion of each sensor's raw information, such as native camera images or pixels, radar point clouds, and even electromagnetic-wave signals, aiming to express the environment as completely as possible without losing sensory information, as a path toward high-level autonomous driving. Whether this approach can eventually reach mass production remains to be seen and deserves continued attention. There is no strict boundary between the fusion methods described above: in an intelligent driving system, the fusion method is usually chosen dynamically according to specific needs, and several fusion methods may be applied to the same sensor at the same time. As intelligent-driving functions move toward higher levels, the types and number of sensors carried by cars have grown significantly, yet a single sensor can never escape its inherent limitations; it remains a pair of "short-sighted" eyes. Through continuous optimization of sensor-fusion algorithms, cars can shed their "short-sightedness", seeing more clearly, traveling farther, and driving more safely.