Analysis of camera and millimeter wave radar fusion
Last updated: 2020-04-03
Source: Wu Jianming
01
The fusion of cameras and radar is one of the key problems many ADAS companies are currently focusing on, because neither a camera nor a radar alone can solve the ranging problem. Beyond ranging, the high-precision maps that may be used in the future will also require camera-radar fusion to be realized.
The best solution currently being pursued is to fuse the two sensors: camera ranging accuracy is low, while radar ranging accuracy is high but a radar point source carries no identity information about the target. The characteristics of radar and camera are compared below.
Table 1. Comparison of radar and camera fusion performance
The camera goes "blind" in rain, fog, and darkness, and cannot work properly in very strong or very weak light. Compared with optical sensors, radar has significantly lower resolution, but it is clearly better at distance and speed measurement and in bad weather.
Although optical sensors are limited in bad weather, they can still recognize color (traffic lights and road signs) and retain the advantage in resolution. Each sensor has its own strengths and weaknesses; to achieve good sensor fusion, the system must accept input from the different sensors and use the combined information to perceive the surrounding environment more accurately. The result is far better than each sensor working alone.
There are two types of fusion algorithms: feature fusion and data fusion. As shown in the following figure:
Figure 1. Feature fusion (left) and data fusion (right)
As shown in Figure 1, the left side is feature-level fusion and the right side is data-level fusion.
In feature-level fusion, each sensor completes target classification and tracking in its own module, and the modules exchange results over the CAN bus. Data-level fusion is performed inside a single module, with no such exchange. Data-level fusion operates at a higher level of integration, but it requires the sensors' low-level raw data, which is currently unavailable; therefore feature-level fusion is used for now.
02
Key technical parameters and performance indicators
The ranging performance currently quoted by major algorithm companies is generally 5% accuracy at 50 meters and 10% accuracy at 100 meters. According to research, algorithm companies usually quote an average error: the actual error at long range can be relatively large, while the short-range error is better. Long-range error has always been a difficult point for the algorithms, so it is reasonable to specify ranging accuracy in segments.
Combined with millimeter-wave radar fusion, the following ranging accuracy targets can be achieved:
1) Within 50 meters: 2%~3%.
2) Within 100 meters: 5%~8%.
3) Output TTC (time to collision) and a warning level, as sketched below.
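To make item 3 concrete, the sketch below shows one way TTC and a warning level could be derived from a fused range and closing speed. The threshold values are illustrative assumptions, not figures from the source.

```python
def ttc_seconds(distance_m: float, closing_speed_mps: float) -> float:
    """Time to collision: distance divided by closing speed.

    closing_speed_mps > 0 means the target is getting closer.
    Returns float('inf') when the gap is not closing.
    """
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps


def warning_level(ttc: float) -> int:
    """Map TTC to a warning level; both thresholds are assumed for illustration."""
    if ttc < 1.4:   # assumed "brake now" threshold
        return 2
    if ttc < 2.7:   # assumed "pre-warning" threshold
        return 1
    return 0


# Example: fused range 50 m, closing at 12 m/s -> TTC ~ 4.2 s, level 0.
ttc = ttc_seconds(50.0, 12.0)
print(ttc, warning_level(ttc))
```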
03
Camera and millimeter-wave radar fusion
Input:
(1) Image and video resolution (integer)
(2) Image and video formats (RGB, YUV, MP4, etc.)
(3) Millimeter-wave radar point cloud information (point coordinates x, y; float)
(4) Camera calibration parameters: principal point (x, y) and 5 distortion coefficients (2 radial, 2 tangential, 1 angular); float
(5) Camera initialization parameters: initial camera position, rotation angles about the three coordinate axes, vehicle width, height, speed, etc.; float
Output:
(1) Point cloud fused from camera and millimeter-wave radar using a Kalman filter (point coordinates x, y; float); see the Kalman filter sketch after this list
(2) Fused image/video (RGB, YUV, MP4, etc.)
(3) Distance between the target object and the vehicle (floating point type)
(4) Target object identification (int)
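Output (1) names a Kalman filter as the fusion mechanism. Below is a minimal one-dimensional sketch of that idea, assuming a constant-velocity state [range, range rate] and treating the camera range (noisy) and the radar range (accurate) as two independent measurements of the same target. All noise values and the frame period are illustrative assumptions.

```python
import numpy as np

dt = 0.05                                  # assumed frame period, s
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # both sensors observe range only
Q = np.diag([0.1, 0.5])                    # assumed process noise

x = np.array([[50.0], [-10.0]])            # state: range (m), range rate (m/s)
P = np.eye(2) * 10.0                       # initial state covariance

def kf_update(x, P, z, r_var):
    """Standard Kalman measurement update for one range observation."""
    S = H @ P @ H.T + r_var                # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain
    x = x + K * (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Predict one step, then fuse a noisy camera range and an accurate radar range.
x, P = F @ x, F @ P @ F.T + Q
x, P = kf_update(x, P, z=51.5, r_var=4.0)   # camera: ~2 m std (assumed)
x, P = kf_update(x, P, z=49.6, r_var=0.04)  # radar: ~0.2 m std (assumed)
print(x.ravel())  # fused range and range rate
```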
1. Functional definition
Timestamp fusion.
The camera's timestamps are inconsistent with the radar's, so the first step is to align the two time bases.
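A minimal sketch of one way to do this, assuming both streams carry timestamps in seconds: pair each camera frame with the nearest radar scan and reject pairs further apart than a tolerance. The 25 ms tolerance is an illustrative assumption.

```python
import bisect

def match_by_timestamp(cam_ts, radar_ts, tol_s=0.025):
    """Pair each camera timestamp with the nearest radar timestamp.

    cam_ts, radar_ts: sorted lists of timestamps in seconds.
    Returns a list of (camera_index, radar_index) pairs within tol_s.
    """
    pairs = []
    for i, t in enumerate(cam_ts):
        j = bisect.bisect_left(radar_ts, t)
        # Consider the radar scans on either side of the insertion point.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(radar_ts)]
        k = min(candidates, key=lambda k: abs(radar_ts[k] - t))
        if abs(radar_ts[k] - t) <= tol_s:
            pairs.append((i, k))
    return pairs

# Camera at ~30 fps, radar at ~20 Hz: only close-enough pairs survive.
print(match_by_timestamp([0.000, 0.033, 0.066], [0.010, 0.060, 0.110]))
```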
Spatial fusion.
Targets detected by the camera are converted into the world coordinate system so they can be fused with the point-wise distance information detected by the radar.
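A common way to realize this step is to project the radar points into the image using the calibrated intrinsics and extrinsics. The sketch below does this with illustrative placeholder matrices; K, R, and t are assumptions for demonstration, not values from the source.

```python
import numpy as np

# Illustrative intrinsics: fx = fy = 1000 px, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])

# Illustrative extrinsics: camera 1.2 m above the radar origin, axes aligned so
# vehicle x-forward/y-left/z-up maps to camera z-forward/x-right/y-down.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.0, 1.2, 0.0])

def project_radar_point(p_vehicle):
    """Map one radar point (x forward, y left, z up, metres) to pixel (u, v)."""
    p_cam = R @ np.asarray(p_vehicle) + t
    if p_cam[2] <= 0:
        return None                      # point is behind the camera
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# A radar return 40 m ahead, 1 m to the left, at ground level.
print(project_radar_point([40.0, 1.0, 0.0]))
```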
Speed fusion.
The radar provides accurate position and speed information, while the camera's speed estimate can be obtained through Kalman filtering. By fusing on speed as well as position, a one-to-one correspondence between camera targets and radar point sources can be found accurately.
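A minimal sketch of that matching idea: a camera track and a radar point are paired only when both their ranges and their range rates agree within gates. The gate sizes and the simple greedy matching are illustrative assumptions; a production system might use a more robust assignment method.

```python
def associate(cam_tracks, radar_points, gate_pos_m=3.0, gate_vel_mps=1.5):
    """Greedy one-to-one matching on (range, range-rate) agreement.

    cam_tracks, radar_points: lists of (range_m, speed_mps) tuples.
    Returns a dict mapping camera track index -> radar point index.
    """
    matches, used = {}, set()
    for i, (rc, vc) in enumerate(cam_tracks):
        best, best_cost = None, float("inf")
        for j, (rr, vr) in enumerate(radar_points):
            if j in used:
                continue
            if abs(rc - rr) > gate_pos_m or abs(vc - vr) > gate_vel_mps:
                continue                          # outside the gates
            cost = abs(rc - rr) + abs(vc - vr)    # simple combined residual
            if cost < best_cost:
                best, best_cost = j, cost
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

# Two camera tracks vs three radar returns; stationary clutter is rejected.
print(associate([(48.0, -10.0), (80.0, 2.0)],
                [(49.1, -9.6), (0.5, 0.0), (79.0, 1.8)]))
```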
Fusion algorithm. The accuracy of the fusion algorithm determines the quality of the fusion result.
Camera distance measurement.
Since radar ranging gives only point-source information, continuous distance across the camera image must still be obtained per pixel. The camera's extrinsic parameters therefore need to be calibrated against the radar, as sketched below.
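The standard flat-ground model makes this concrete: with focal length f (in pixels), camera height h, and horizon row y0, a ground pixel at row y corresponds to distance d = f*h / (y - y0), and matched radar ranges can be used to refine y0. A minimal sketch of this calibration, with all numeric values illustrative:

```python
def ground_distance(y_px, f_px=1000.0, h_m=1.3, y0_px=360.0):
    """Flat-ground monocular range: d = f * h / (y - y0).

    y_px: image row of the target's ground contact point (below the horizon).
    """
    dy = y_px - y0_px
    if dy <= 0:
        return float("inf")        # at or above the horizon
    return f_px * h_m / dy

def calibrate_y0(samples, f_px=1000.0, h_m=1.3):
    """Refine the horizon row from (image_row, radar_range) pairs.

    Inverts d = f*h/(y - y0) per sample and averages the implied y0.
    """
    return sum(y - f_px * h_m / d for y, d in samples) / len(samples)

# Radar says the targets at rows 412 and 386 are 25 m and 50 m away.
y0 = calibrate_y0([(412.0, 25.0), (386.0, 50.0)])
print(y0, ground_distance(412.0, y0_px=y0))   # -> 360.0, 25.0 m
```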
Development of a fusion platform. Before fusion, the detection results of the radar and the camera are obtained separately; this includes filtering, camera ranging, and other algorithms.
2. Technical route plan
Visual cameras and radars each have their own strengths.
Introduction to millimeter wave radar and camera fusion
1) Radar speed measurement
Range: -50 m/s ~ +50 m/s, error: 0.1~0.2 m/s
2) Radar ranging
Range: 120 m~130 m, error: within 2.5% (monocular camera error: 8%~10%)
3) Fusion method: feature fusion, data fusion.
04
The advantages of visual cameras are:
1) Can complete road environment parameter recognition (lane detection, front vehicle detection, pedestrian detection, road sign detection, traffic sign detection)
2) With a binocular (stereo) camera, object distance can be calculated relatively accurately
The disadvantages are:
1) The recognition rate depends on the detection model and algorithm and on external visual conditions (rain, haze, darkness)
2) The identification range is within the line of sight
Solutions or optimization plans:
1) Increase camera resolution. Higher resolution improves detection accuracy and, to a certain extent, ranging accuracy.
2) Adaptive calibration: develop an adaptive calibration algorithm that recalibrates against the road surface under different road conditions to reduce error.
3) Lower the camera position. Reducing the camera's mounting height in the z direction can increase the number of pixels between the vanishing point and the target in the image's y direction, thereby improving ranging accuracy.
The advantages of radar are:
1) Can be used all day long, not affected by light and weather
2) It works at long range, and its detection accuracy for target angle, distance, and relative speed exceeds that of vision
3) A 3D-scanning LiDAR can not only detect targets but also perceive the surrounding environment
The disadvantages are:
1) It is difficult to identify small, non-metallic objects such as pedestrians and bicycles
2) In curved tunnels or scenes with many obstacles, misjudgments caused by radar wave reflections are severe
3) As market demand for detection accuracy keeps rising, a single vision or radar technology is no longer enough to meet the needs of high-precision driving.
In the future, vision and radar ADAS technologies will certainly move toward organic integration, complementing each other's strengths and weaknesses to improve the accuracy of judgments.
Solutions or optimization plans:
Realize the fusion of camera and radar.