Zhixing Technology's upcoming iDC domain controller will help cars get rid of "myopia"

Publisher: PeacefulWarrior | Latest update: 2022-06-16 | Source: elecfans

Simply put, sensor fusion means organically integrating the information detected by the various sensors on the vehicle body through a series of algorithms, maximizing the vehicle's ability to perceive its environment and feeding that environmental information to the downstream behavioral decision-making and control modules. A single perception sensor usually has inherent limitations stemming from its detection mechanism, field of view, and other factors. If each independent sensor gives the car a pair of eyes, then sensor fusion lets the car see clearly and say goodbye to "myopia".
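To make the idea concrete, here is a minimal, generic sketch (not Zhixing Technology's implementation): two sensors measure the same distance with different noise levels, and inverse-variance weighting produces an estimate tighter than either sensor alone. The measurement values and variances below are illustrative assumptions.

```python
# Toy fusion of two noisy range measurements by inverse-variance weighting
# (the one-step form of a Kalman update). All values below are made up.
def fuse_measurements(z1: float, var1: float, z2: float, var2: float):
    """Weight each measurement by the inverse of its noise variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always smaller than var1 and var2
    return fused, fused_var

# Radar says 20.4 m with tight noise; camera says 19.1 m with loose noise.
print(fuse_measurements(20.4, 0.25, 19.1, 1.0))  # -> (20.14, 0.2)
```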


By function, sensor fusion can be divided into compensatory fusion, redundant fusion, and collaborative fusion.


1. Compensatory fusion means that each sensor observes the same environment, and a fusion algorithm then selects the superior detection signal from each sensor, or stitches together the sensors' detection ranges, to obtain more accurate environmental information with wider coverage. Take the currently mainstream forward-looking millimeter-wave radar and camera fusion (RV Fusion) solution as an example: the radar measures longitudinal distance and velocity accurately, while the camera excels at classification and lateral localization, so the fused output keeps the stronger signal from each.
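As a hedged illustration of this complementary selection (the field names and the upstream radar-camera matching step are assumptions for demonstration, not Zhixing Technology's actual interface), the sketch below keeps the radar's range and velocity and the camera's lateral position and class for one matched detection pair:

```python
# Minimal sketch of compensatory fusion for one matched radar/camera pair.
from dataclasses import dataclass

@dataclass
class RadarDet:
    range_m: float        # radar measures longitudinal distance accurately
    velocity_mps: float   # and radial velocity via Doppler

@dataclass
class CameraDet:
    lateral_m: float      # vision localizes laterally well and
    label: str            # classifies the object reliably

@dataclass
class FusedDet:
    range_m: float
    velocity_mps: float
    lateral_m: float
    label: str

def compensate(radar: RadarDet, camera: CameraDet) -> FusedDet:
    """Keep each sensor's superior measurement for the same target."""
    return FusedDet(
        range_m=radar.range_m,            # radar wins on distance
        velocity_mps=radar.velocity_mps,  # radar wins on speed
        lateral_m=camera.lateral_m,       # camera wins on lateral position
        label=camera.label,               # camera wins on classification
    )

print(compensate(RadarDet(42.3, -3.1), CameraDet(0.4, "car")))
```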

2. Redundant fusion means that multiple sensors detect the same target, and the fusion algorithm integrates all of their detections, thereby improving confidence in the target's existence and reducing the impact of a single sensor's false detection on the overall system. Redundant fusion is widely used in safety-related functions, such as the popular automatic emergency braking (AEB) function. If the vehicle carries multiple sensors, it is generally required that at least two of them detect the target simultaneously before the vehicle brakes for it. This guarantees redundancy in confirming the target's existence and reduces the chance that the target was misidentified by a single sensor, thus lowering the probability of false braking (under current laws and regulations, such false braking is quite dangerous and should be avoided as much as possible). Taking the aforementioned radar-camera solution as an example: during millimeter-wave radar detection, interference from environmental noise and the limitations of the signal-processing algorithm cause targets to be misidentified from time to time, and a target produced by such misidentification is generally called a ghost target. The camera's detection principle gives it a lower false-recognition rate, so in such a system the safety function is generally activated only when the radar and the camera detect the same target at the same time, which reduces the false-braking rate.
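A minimal sketch of the redundancy gate described above, under the assumption of a simple per-sensor boolean confirmation (the sensor names and the two-sensor threshold are illustrative choices, not a production AEB design):

```python
# AEB only fires when at least two independent sensors confirm the target,
# which filters out single-sensor artifacts such as radar ghost targets.
def aeb_should_brake(detections: dict[str, bool], min_confirmations: int = 2) -> bool:
    """detections maps sensor name -> 'this sensor currently sees the target'."""
    confirmed = sum(detections.values())
    return confirmed >= min_confirmations

# A ghost target seen by the radar alone does not trigger braking...
print(aeb_should_brake({"radar": True, "camera": False}))  # False
# ...but a radar + camera confirmed target does.
print(aeb_should_brake({"radar": True, "camera": True}))   # True
```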


3. Collaborative fusion refers to integrating the relatively simple, low-dimensional detection information from each sensor to extract deeper, higher-dimensional detection information. For example, the image captured by a camera normally loses the environment's three-dimensional information. If the point cloud detected by the radar is fused with the image's pixel information, an image carrying depth information (a depth map) can be constructed, and from this depth map the complete three-dimensional environmental information can be extracted. Beyond target perception, this can also provide higher-dimensional information such as the drivable area (free space). The currently popular front fusion belongs to this category, and many smart-driving companies are working in this direction and trying to apply it to mass-production projects.
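To illustrate the depth-map construction step, the sketch below projects 3D points into the image plane with a pinhole camera model and writes their depths into a sparse depth map. The intrinsic matrix and the sample points are made-up values, not any production calibration:

```python
# Project 3D radar/lidar points (camera frame) into the image to build a
# sparse depth map: each hit pixel now carries metric depth.
import numpy as np

K = np.array([[800.0,   0.0, 640.0],   # assumed pinhole intrinsics for a
              [  0.0, 800.0, 360.0],   # 1280x720 image (fx, fy, cx, cy)
              [  0.0,   0.0,   1.0]])

def sparse_depth_map(points_cam: np.ndarray, h: int = 720, w: int = 1280) -> np.ndarray:
    """points_cam: (N, 3) points in the camera frame (x right, y down, z forward)."""
    depth = np.zeros((h, w), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0]  # keep points in front of the camera
    uv = (K @ pts.T).T                      # pinhole projection (homogeneous)
    uv = uv[:, :2] / uv[:, 2:3]             # normalize by depth -> pixel coords
    for (u, v), z in zip(uv.astype(int), pts[:, 2]):
        if 0 <= v < h and 0 <= u < w:
            depth[v, u] = z                 # this pixel now has 3D depth
    return depth

pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.2, 25.0]])
d = sparse_depth_map(pts)
print(np.argwhere(d > 0), d[d > 0])  # pixels that received depth values
```

In practice the sparse map is densified (e.g. by interpolation or a learned completion network) before free-space extraction, but the projection above is the core of fusing point clouds with pixels.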


Intelligent Fusion Perception

At present, Zhixing Technology is working on fusing visual perception with ultrasonic perception. Visual perception has clear advantages in judging the existence and category of obstacles: by relying on the closed-loop data link and continuously iterating on and optimizing visual-perception performance, the system can recognize most common obstacles well. Ultrasonic sensors, for their part, perceive close-range obstacles stably and detect obstacles of any type indiscriminately, such as ground locks, planters, and limit rods, so they are widely used in environmental perception for parking functions. Fusing visual and ultrasonic obstacle detection is a form of compensatory fusion that can detect obstacles near parking spaces more stably and robustly, safeguarding the parking function; this solution will also be applied directly to Zhixing Technology's parking feature. In addition, the soon-to-be-launched iDC domain controller will carry four surround-view cameras, and Zhixing Technology will fuse the perception results of these four cameras at the FOV level to provide 360° perception coverage with no blind spots.
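Below is a minimal sketch of the vision-ultrasonic compensatory logic for a single parking zone, assuming simplified per-zone outputs (the field names and the "unknown obstacle" fallback are illustrative, not Zhixing Technology's actual design):

```python
# Vision supplies the class when it recognizes the obstacle; ultrasonic
# guarantees close-range distance for any obstacle type, so an unrecognized
# echo is still reported rather than dropped.
from typing import Optional

def fuse_parking_zone(vision_label: Optional[str],
                      ultrasonic_range_m: Optional[float]) -> Optional[dict]:
    """Return an obstacle report for one parking zone, or None if clear."""
    if ultrasonic_range_m is not None:
        # Ultrasonic detects ground locks, planters, limit rods, etc. indiscriminately.
        return {"range_m": ultrasonic_range_m,
                "label": vision_label or "unknown_obstacle"}
    if vision_label is not None:
        # Vision-only detection (beyond ultrasonic range): keep it, range unknown.
        return {"range_m": None, "label": vision_label}
    return None

print(fuse_parking_zone("ground_lock", 1.2))  # both sensors agree
print(fuse_parking_zone(None, 0.8))           # ultrasonic-only: still reported
```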


Getting rid of "myopia" in the future

The highway automatic-navigation function, which has become very popular in recent years, requires the vehicle to have stable, indiscriminate 360° perception, so FOV-level fusion across multiple sensors, together with fusion processing of each sensor's detections in overlapping FOV regions, is an essential part of any automatic-navigation solution. Looking further toward high-level autonomous driving, the perception outputs and fusion solutions described above show that today's mainstream perception output treats traffic participants as the obstacles of interest (possibly along with static obstacles such as traffic lights and cones). Real traffic scenes, however, are extremely complex, and traffic participants are only one part of them; target-level perception fusion alone cannot capture the complete contour of the environment.
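One building block of such FOV-level fusion is merging duplicate tracks where two sensors' fields of view overlap. The sketch below does this with a simple nearest-neighbor gate; the gating distance and the averaging rule are illustrative assumptions, not a production association algorithm:

```python
# Merge tracks from two sensors whose FOVs overlap: pairs closer than the
# gate are treated as the same physical target and fused into one track.
import math

def merge_overlap(tracks_a, tracks_b, gate_m: float = 1.5):
    """Each track is (x, y) in the vehicle frame; merge pairs within gate_m."""
    merged, used_b = [], set()
    for ax, ay in tracks_a:
        match = None
        for j, (bx, by) in enumerate(tracks_b):
            if j not in used_b and math.hypot(ax - bx, ay - by) < gate_m:
                match = j
                break
        if match is not None:
            bx, by = tracks_b[match]
            used_b.add(match)
            merged.append(((ax + bx) / 2, (ay + by) / 2))  # one fused track
        else:
            merged.append((ax, ay))                        # seen by sensor A only
    merged += [t for j, t in enumerate(tracks_b) if j not in used_b]
    return merged

# Front camera and front-left camera both see the car at roughly (10, 2):
print(merge_overlap([(10.0, 2.0)], [(10.4, 1.8), (3.0, -5.0)]))
# -> [(10.2, 1.9), (3.0, -5.0)]  (duplicate merged, unique track kept)
```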


At present, many researchers are attempting pre-fusion of each sensor's raw information, such as native camera images or pixels, radar point clouds, and even the underlying electromagnetic-wave signals, aiming to express the environment as completely as possible without discarding sensory information, and thereby move toward high-level autonomous driving. Whether this approach can ultimately reach mass production still needs time and continued attention.

There is no strict boundary between the fusion methods described above: in an intelligent driving system, the fusion method is usually chosen dynamically according to specific needs, and several fusion methods may be applied to the same sensor at the same time. As intelligent-driving functions advance to higher levels, the types and number of sensors carried by cars have grown significantly, yet a single sensor can never escape its inherent limitations; it remains a pair of "short-sighted" eyes. Through continuous optimization of sensor-fusion algorithms, cars can get rid of this "short-sightedness", see more clearly, travel farther, and drive more safely.

