Juefei Technical Analysis | Application of LiDAR and Vision Fusion Perception in Autonomous Driving

Publisher: 亚瑟摩根 | Last updated: 2022-04-15

2022 is the window period for intelligent driving to leap from L2 to L3/L4. More and more automakers are beginning to deploy higher-level intelligent driving in mass production, and the era of automotive intelligence has quietly arrived. Improvements in LiDAR hardware, automotive-grade mass production, and falling costs, together with the demands of high-level intelligent driving functions, have pushed LiDAR into mass-produced passenger cars. Many LiDAR-equipped models will be delivered this year, which is why 2022 is also being called the "first year of LiDAR on cars."




LiDAR is a sensor used to accurately obtain the three-dimensional position of an object; it is, in essence, laser detection and ranging. With its excellent performance in target contour measurement and general obstacle detection, it is becoming a core sensor for L4 autonomous driving. However, the ranging range of LiDAR (generally around 200 meters; mass-production models from different manufacturers quote different figures) gives it a much smaller perception range than image sensors. And because its angular resolution (generally 0.1° or 0.2°) is relatively coarse, the resolution of the point cloud is far lower than that of an image sensor: at long range, the points falling on a target may be extremely sparse, or the target may not be imaged at all. In practice, the effective distance at which point-cloud detection algorithms can really use the point cloud is only about 100 meters.
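To make that sparsity concrete, here is a minimal back-of-the-envelope sketch. The 0.1° figure comes from the text above; the pedestrian width in the comment is an illustrative assumption:

```python
import math

def point_spacing(distance_m: float, angular_resolution_deg: float) -> float:
    """Approximate spacing between adjacent LiDAR returns on a target
    at the given distance, for a given angular resolution."""
    return distance_m * math.tan(math.radians(angular_resolution_deg))

# At 200 m with 0.1 deg resolution, adjacent points are ~0.35 m apart,
# so a pedestrian (~0.5 m wide) may receive only one or two returns.
for d in (50, 100, 200):
    print(f"{d:>4} m -> {point_spacing(d, 0.1):.2f} m between points")
```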


Image sensors can capture rich information about the surroundings at high frame rates and high resolutions (2K-4K), and they are inexpensive; multiple cameras with different FOVs and resolutions can be deployed to cover perception at different distances and ranges. However, image sensors are passive sensors with weak depth perception and poor ranging accuracy, and harsh environments make their perception tasks much harder. Strong backlight, low illumination at night, rain, snow, and fog all place high demands on the sensors and algorithms of an intelligent driving system. LiDAR, while insensitive to ambient light, sees its ranging badly degraded by flooded roads, glass walls, and the like. Clearly, LiDAR and image sensors each have their own strengths and weaknesses, which is why most high-level intelligent driving passenger cars fuse different sensors for complementary advantages and redundancy. Such fusion perception solutions have become one of the key technologies for high-level autonomous driving.




The fusion of point clouds and images belongs to the technical field of multi-sensor fusion (MSF). Approaches divide into traditional stochastic methods and deep learning methods and, according to the level of abstraction at which information is processed, into three main levels:


  • Data layer fusion (Early Fusion)


The sensors' observation data are fused first, and features are then extracted from the fused data for recognition. In 3D object detection, PointPainting (CVPR 2020) takes this approach: it first performs semantic segmentation on the image, projects the LiDAR points into the image through the point-to-pixel projection matrix, and appends the per-pixel segmentation scores to each point. The resulting "painted" point cloud is then fed to a 3D point-cloud detector to regress the target boxes.
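A minimal NumPy sketch of this "painting" step, assuming a 4×4 LiDAR-to-camera extrinsic `T_cam_lidar` and a 3×3 intrinsic matrix `K` (both names are illustrative; this shows the general PointPainting idea, not a specific implementation):

```python
import numpy as np

def paint_points(points_xyz, seg_scores, T_cam_lidar, K):
    """Decorate each LiDAR point with the class scores of the image pixel
    it projects to (the 'painting' step of PointPainting)."""
    H, W, C = seg_scores.shape
    # LiDAR frame -> camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Perspective projection onto the image plane.
    z = np.clip(cam[:, 2:3], 1e-6, None)          # avoid division by zero
    uv = (K @ cam.T).T[:, :2] / z
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    # Keep only points in front of the camera and inside the image.
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Append the hit pixel's class scores; points off-image keep zeros.
    painted = np.zeros((len(points_xyz), 3 + C), dtype=np.float32)
    painted[:, :3] = points_xyz
    painted[valid, 3:] = seg_scores[v[valid], u[valid]]
    return painted  # input to a 3D detector that regresses the boxes
```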




  • Feature layer fusion (Deep Fusion)


Features are first extracted from the observation data of each sensor, and these features are then fused for recognition. In deep-learning-based fusion, this method runs a feature extractor on each of the point-cloud and image branches and fuses the two branches' networks at the semantic level during the forward pass, achieving semantic fusion of multi-scale information. Deep-learning-based feature-level fusion places high demands on the spatiotemporal synchronization between sensors: once synchronization is poor, feature fusion suffers directly. At the same time, because of differences in scale and viewpoint, it is difficult for LiDAR-image feature fusion to achieve an effect of 1+1>2.
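A minimal PyTorch-style sketch of feature-level fusion, assuming both branches have already been projected into a common BEV grid of the same spatial size (the projection itself, where the synchronization difficulty mentioned above lives, is omitted):

```python
import torch
import torch.nn as nn

class DeepFusionBlock(nn.Module):
    """Fuse camera and LiDAR feature maps at the feature level,
    once both are expressed on a shared BEV grid."""
    def __init__(self, img_ch: int, pts_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(img_ch + pts_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, img_feat: torch.Tensor, pts_feat: torch.Tensor) -> torch.Tensor:
        # Channel-wise concatenation; a 1x1 conv learns the fusion weights.
        return self.fuse(torch.cat([img_feat, pts_feat], dim=1))

# Toy usage: 64-ch image features and 128-ch point features on a 200x200 grid.
block = DeepFusionBlock(64, 128, 256)
fused = block(torch.randn(1, 64, 200, 200), torch.randn(1, 128, 200, 200))
```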




  • Decision layer fusion (Late Fusion)


Compared with the first two, this is the least complex fusion method. Fusion happens not at the data or feature layer but at the target level. The network structures for the different sensors do not affect each other and can be trained and combined independently. Because the sensors and detectors fused at the decision layer are mutually independent, if one sensor fails the others can still provide redundancy, which makes this approach more robust in engineering.
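A minimal sketch of target-level fusion, assuming detections are matched by BEV center distance and scores are combined by taking the maximum (both are illustrative choices; real systems often use IoU matching and learned or Bayesian score fusion):

```python
from dataclasses import dataclass

@dataclass
class Det:
    x: float          # BEV center x (m)
    y: float          # BEV center y (m)
    score: float
    source: str       # "camera", "lidar", or "fused"

def late_fuse(cam_dets, lidar_dets, match_radius_m=1.5):
    """Object-level fusion: pair detections whose BEV centers are close,
    keep the rest from either sensor alone (redundancy if one fails)."""
    fused, used = [], set()
    for ld in lidar_dets:
        best, best_d = None, match_radius_m
        for i, cd in enumerate(cam_dets):
            d = ((ld.x - cd.x) ** 2 + (ld.y - cd.y) ** 2) ** 0.5
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            # Seen by both sensors: boost confidence, keep LiDAR geometry.
            fused.append(Det(ld.x, ld.y,
                             max(ld.score, cam_dets[best].score), "fused"))
        else:
            fused.append(ld)                       # LiDAR-only detection
    fused += [cd for i, cd in enumerate(cam_dets) if i not in used]
    return fused
```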



In view of the characteristics of current vehicle-mounted LiDAR and image sensors, and of application scenarios such as highway NOA, urban-road autonomous driving, and automated parking, Juefei's vehicle-side perception stack adopts a late-fusion approach for point clouds and images.


The fusion perception algorithm integrates vision, point-cloud, and millimeter-wave radar data and then performs 3D tracking to output more accurate and complete perception results. With the help of high-precision maps and globally collected road samples, it can also accurately predict the behavior trajectories of traffic participants.
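As an illustration of the 3D-tracking step, here is a minimal constant-velocity Kalman track over a fused detection stream (the state layout, noise values, and 10 Hz time step are illustrative assumptions, not Juefei's tracker):

```python
import numpy as np

class CVTrack:
    """Constant-velocity Kalman track, state = (x, y, vx, vy)."""
    def __init__(self, x, y, dt=0.1):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # we observe position only
        self.Q = np.eye(4) * 0.01             # process noise (tuning assumption)
        self.R = np.eye(2) * 0.1              # measurement noise (tuning assumption)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Per frame: predict each track, associate fused detections, then update.
```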


Juefei Technology's point cloud-based fusion perception algorithm architecture


In addition to the algorithms themselves, Juefei's in-depth research on fusion perception has also solved many engineering problems on the road to mass production:


  • Porting and optimization for vehicle-side embedded heterogeneous computing platforms


Juefei has developed a variety of deep learning algorithms for point clouds and images, such as a sparse point-cloud convolution algorithm and a monocular size-recovery algorithm, and has carried out extensive performance optimization and acceleration on vehicle-mounted embedded controller platforms. This allows larger models to be deployed on vehicle-side embedded platforms, enables more accurate LiDAR + multi-camera fusion perception, and forms a mass-producible fusion perception solution.
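As background on why sparse convolution pays off for point clouds, a minimal voxelization sketch: only occupied voxels are stored, so downstream sparse-convolution layers can skip empty space entirely (the voxel size and per-voxel point cap are illustrative):

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size=(0.2, 0.2, 0.2), max_pts=32):
    """Group points into occupied voxels only; empty space costs nothing,
    which is what makes sparse convolution attractive for LiDAR."""
    coords = np.floor(points[:, :3] / np.asarray(voxel_size)).astype(np.int32)
    voxels = {}
    for pt, c in zip(points, map(tuple, coords)):
        bucket = voxels.setdefault(c, [])
        if len(bucket) < max_pts:
            bucket.append(pt)
    # Return voxel integer coordinates and padded per-voxel point tensors.
    keys = np.array(list(voxels.keys()), dtype=np.int32)
    feats = np.zeros((len(keys), max_pts, points.shape[1]), dtype=np.float32)
    for i, c in enumerate(map(tuple, keys)):
        pts = np.asarray(voxels[c], dtype=np.float32)
        feats[i, :len(pts)] = pts
    return keys, feats
```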




Juefei Technology point cloud and vision fusion perception – V2X terminal




Juefei Technology point cloud and vision fusion perception – vehicle side


  • Multi-sensor calibration and spatiotemporal synchronization technology


The fusion quality of point clouds and images depends heavily on high-precision sensor calibration and spatiotemporal synchronization, both of which are also essential for vehicle mass production. Juefei Technology has developed its own extrinsic calibration technology for point-cloud and image sensors based on high-precision maps. At the same time, it relies on its own GNSS-timed sensor hard-synchronization board to hardware-synchronize the LiDAR and image sensors, guaranteeing the high-precision spatiotemporal synchronization the fusion algorithm requires.
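For the time-synchronization side, a minimal sketch of pairing LiDAR sweeps with the nearest camera frames by timestamp (the 5 ms tolerance is an illustrative assumption; with the hardware trigger described above, residual offsets should sit well inside it, and software pairing is only the last-resort fallback):

```python
import bisect

def pair_nearest(lidar_ts, camera_ts, tol_s=0.005):
    """Pair each LiDAR sweep with the nearest camera frame.
    Both inputs are lists of timestamps in seconds, sorted ascending."""
    pairs = []
    for t in lidar_ts:
        i = bisect.bisect_left(camera_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_ts)]
        if not candidates:
            continue
        j = min(candidates, key=lambda j: abs(camera_ts[j] - t))
        if abs(camera_ts[j] - t) <= tol_s:   # reject badly drifted frames
            pairs.append((t, camera_ts[j]))
    return pairs
```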


  • Full-stack fusion computing solution for autonomous driving


Relying on its strong fusion perception capability, Juefei has launched a "full-stack fusion computing solution" that provides customized front-end and back-end positioning algorithms, multi-sensor fusion perception algorithms, and dynamic traffic information services for different scenarios and customers, covering the needs of every stage of autonomous driving. The solution can supply L2+ systems with purely vision-based fusion perception and fusion positioning services, and can also integrate high-performance computing platforms, deploying fusion perception, fusion positioning, and other services in a fine-grained way according to the available computing power. It has already been applied in scenarios including lane changes, long tunnels, and under viaducts, and Juefei has reached cooperation with multiple OEMs, providing partners with customized, high-specification, mass-producible services along the dimensions of accuracy, reliability, and cost.




Juefei Technology LiDAR perception effect demonstration - highway scene




Juefei Technology LiDAR perception effect demonstration - parking lot




Juefei Technology LiDAR perception effect demonstration - roads within the community


With the continuous iteration of LiDAR and vision fusion perception technology, and the ongoing accumulation of driving scenarios and cases, Juefei's full-stack fusion computing solution will bring a safer and more reliable future to autonomous driving.

