2022 marks the window for intelligent driving to leap from L2 to L3/L4. More and more automakers are beginning mass production of higher-level intelligent driving systems, and the era of automotive intelligence has quietly arrived. With improvements in LiDAR hardware, automotive-grade mass production, and falling costs, high-level intelligent driving functions have driven the adoption of LiDAR in passenger cars. Many models equipped with LiDAR will be delivered this year, so 2022 is also known as the "first year of LiDAR on cars."
LiDAR is a sensor that accurately measures the three-dimensional position of objects; it is essentially laser detection and ranging. With its excellent performance in target contour measurement and general obstacle detection, it is becoming a core configuration for L4 autonomous driving. However, the ranging range of LiDAR (generally around 200 meters, with specifications varying across manufacturers' production models) makes its perception range much smaller than that of image sensors. And because its angular resolution (generally 0.1° or 0.2°) is relatively coarse, the point cloud resolution is far below that of an image sensor: when sensing at long range, the points falling on a target may be extremely sparse, or the target may not be imaged at all. For point cloud target detection, the effective distance the algorithm can actually use is only about 100 meters.
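The sparsity problem can be made concrete with a quick back-of-the-envelope calculation. The sketch below is illustrative only: the 1.8 m vehicle width and the 0.2° horizontal resolution are assumptions, not figures from any specific sensor.

```python
import math

def lateral_point_spacing(distance_m: float, angular_res_deg: float) -> float:
    """Approximate lateral spacing between adjacent beams at a given range."""
    return distance_m * math.radians(angular_res_deg)

# A ~1.8 m wide car seen head-on at increasing range
for d in (50, 100, 200):
    spacing = lateral_point_spacing(d, 0.2)
    hits = int(1.8 // spacing) + 1
    print(f"{d:>3} m: beam spacing {spacing:.2f} m, ~{hits} horizontal points on the car")
```

At 200 m with 0.2° resolution, adjacent beams are already about 0.7 m apart, so only a handful of points land on a car per scan line, which is why detectors struggle beyond roughly 100 m.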
Image sensors obtain rich information about the surroundings at high frame rates and high resolutions (2K-4K), and they are inexpensive; multiple sensors with different FOVs and resolutions can be deployed for visual perception at different distances and ranges. However, image sensors are passive sensors with weak depth perception and poor ranging accuracy, and in harsh environments the difficulty of perception tasks increases greatly. Strong light, low illumination at night, rain, snow, fog, and similar conditions place high demands on sensor algorithms. LiDAR, while insensitive to ambient light, suffers significant ranging degradation from flooded roads, glass walls, and the like. LiDAR and image sensors thus each have their own strengths and weaknesses, and most high-level intelligent driving passenger cars fuse different sensors for complementary advantages and redundancy. Such fusion perception solutions have become one of the key technologies for high-level autonomous driving.
The fusion of point clouds and images belongs to the field of Multi-Sensor Fusion (MSF), which includes both traditional probabilistic methods and deep learning methods. According to the level of abstraction at which information is processed, fusion is mainly divided into three levels:
Data layer fusion (Early Fusion)
First, the sensors' observation data are fused, and features are then extracted from the fused data for recognition. In 3D object detection, PointPainting (CVPR 2020) takes this approach: it first performs semantic segmentation on the image, then maps the segmentation scores onto the point cloud through the point-to-pixel projection matrix. The "painted" point cloud is then fed to a 3D point cloud detector to regress the target boxes.
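The painting step can be sketched as follows. This is a simplified illustration of the idea, not the paper's exact implementation; the function name, the LiDAR-to-camera extrinsic `T_cam_lidar`, and the intrinsic matrix `K` are hypothetical placeholders.

```python
import numpy as np

def paint_points(points_xyz, seg_scores, T_cam_lidar, K):
    """Append per-pixel semantic scores to each LiDAR point (PointPainting-style).

    points_xyz : (N, 3) points in the LiDAR frame
    seg_scores : (H, W, C) per-class segmentation scores from the image branch
    T_cam_lidar: (4, 4) extrinsic transform, LiDAR frame -> camera frame
    K          : (3, 3) camera intrinsic matrix
    """
    H, W, C = seg_scores.shape
    # Transform points into the camera frame (homogeneous coordinates)
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera
    valid = pts_cam[:, 2] > 0.1
    # Perspective projection onto the image plane
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Points that project outside the image keep zero scores
    painted = np.zeros((len(points_xyz), C), dtype=seg_scores.dtype)
    painted[valid] = seg_scores[v[valid], u[valid]]
    return np.hstack([points_xyz, painted])
```

The decorated points carry both geometry and semantics, so a standard 3D detector can consume them without architectural changes.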
Feature layer fusion (Deep Fusion)
First, features are extracted from the observation data of each sensor, and these features are then fused for recognition. In deep-learning-based fusion, feature extractors are applied to both the point cloud and image branches, and the two branches' networks are fused at the semantic level during the forward pass, achieving semantic fusion of multi-scale information. Deep-learning-based feature-level fusion places high demands on spatiotemporal synchronization between sensors; poor synchronization directly degrades the fusion result. At the same time, because of differences in scale and viewpoint, it is difficult for LiDAR-image feature fusion to achieve a 1+1>2 effect.
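A minimal version of the per-point fusion step looks like the sketch below: image-branch features are sampled at each point's projected pixel and concatenated with the point-branch features. The function name is hypothetical, and the projection to pixel coordinates is assumed to have been done already; real systems typically repeat this at multiple feature-map scales.

```python
import numpy as np

def fuse_point_image_features(point_feats, img_feats, uv):
    """Concatenate each point's feature with the image feature sampled
    at its projected pixel (one step of feature-level fusion).

    point_feats : (N, Dp) features from the point cloud branch
    img_feats   : (H, W, Di) feature map from the image branch
    uv          : (N, 2) projected pixel coordinates of each point
    """
    H, W, _ = img_feats.shape
    # Nearest-neighbour sampling, clamped to the image bounds
    u = np.clip(uv[:, 0].astype(int), 0, W - 1)
    v = np.clip(uv[:, 1].astype(int), 0, H - 1)
    sampled = img_feats[v, u]
    return np.concatenate([point_feats, sampled], axis=1)
```

This also shows why synchronization matters: if the extrinsics or timestamps are off, `uv` points at the wrong pixels and the concatenated features describe different objects.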
Late Fusion
Compared with the first two, this is the least complex fusion method. Fusion happens not at the data or feature level but at the target level. The network structures for different sensors do not affect each other and can be trained and combined independently. Since the two detectors fused at the decision level are independent, if one sensor fails the other can still provide output, giving sensor redundancy and making this approach more robust in engineering.
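The target-level idea can be sketched as a simple matching step between the two detectors' outputs. This is an illustrative toy (the `Det` class, center-distance matching, and the 2.0 m radius are assumptions); production systems typically use BEV IoU and probabilistic track fusion instead.

```python
from dataclasses import dataclass

@dataclass
class Det:
    x: float       # BEV center x (m)
    y: float       # BEV center y (m)
    score: float
    sensor: str

def late_fuse(lidar_dets, cam_dets, match_radius=2.0):
    """Target-level fusion: greedily match detections by BEV center distance.
    Matched pairs are merged; unmatched detections pass through, so a
    single failed sensor still yields output."""
    fused, used = [], set()
    for ld in lidar_dets:
        best, best_d = None, match_radius
        for j, cd in enumerate(cam_dets):
            d = ((ld.x - cd.x) ** 2 + (ld.y - cd.y) ** 2) ** 0.5
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            cd = cam_dets[best]
            winner = ld if ld.score >= cd.score else cd
            fused.append(Det(winner.x, winner.y, max(ld.score, cd.score), "fused"))
        else:
            fused.append(ld)  # LiDAR-only detection survives
    # Camera-only detections also survive
    fused.extend(cd for j, cd in enumerate(cam_dets) if j not in used)
    return fused
```

The pass-through branches are what give the method its engineering robustness: each detector's misses are covered by the other.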
Given the characteristics of current on-board LiDAR and image sensors, and application scenarios such as highway NOA, urban-road autonomous driving, and automatic parking, Juefei's vehicle-side perception technology stack adopts a late-fusion approach to fuse point clouds and images.
The fusion perception algorithm integrates raw vision, point cloud, and millimeter-wave radar data, then performs 3D tracking to output more accurate and complete perception results. With the help of high-precision maps and global road samples, the trajectories of traffic participants can be accurately predicted.
Juefei Technology's point cloud-based fusion perception algorithm architecture
In addition to algorithms, Juefei has also solved many engineering problems in mass production through in-depth research on fusion perception technology:
Transplantation and optimization of vehicle-side embedded heterogeneous computing platforms
Juefei has developed a variety of deep learning algorithms for point clouds and images, such as sparse point cloud convolution and monocular size restoration. Extensive performance optimization and acceleration on the vehicle-mounted embedded controller platform allow larger models to be deployed on the vehicle-side embedded platform, enabling more accurate LiDAR + multi-camera fusion perception and forming a mass-producible fusion perception solution.
Juefei Technology Point Cloud and Vision Fusion Perception-V2X Terminal
Juefei Technology Point Cloud and Vision Fusion Perception-Car Side
Multi-sensor calibration and spatiotemporal synchronization technology
The fusion of point clouds and images depends heavily on high-precision sensor calibration and spatiotemporal synchronization, both of which are also essential for vehicle mass production. Juefei Technology has developed its own extrinsic calibration technology for point cloud and image sensors based on high-precision maps. It also relies on its in-house GNSS timing and sensor hard-synchronization board to hardware-synchronize LiDAR and image sensors, ensuring the high-precision spatiotemporal synchronization the fusion algorithm requires.
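Once both sensors are disciplined by a shared clock, a downstream consumer still has to pair each camera frame with the nearest LiDAR sweep. A minimal sketch of that pairing step is below; the function name and the 5 ms tolerance are illustrative assumptions, not details of Juefei's system.

```python
def pair_by_timestamp(lidar_stamps, cam_stamps, tol_s=0.005):
    """Pair each camera frame with the nearest LiDAR sweep timestamp.
    Both stamp lists are assumed sorted and referenced to the same
    (e.g. GNSS-disciplined) clock; pairs whose offset exceeds the
    tolerance are dropped rather than fused."""
    pairs = []
    i = 0
    for ct in cam_stamps:
        # Advance the LiDAR pointer while the next stamp is at least as close
        while i + 1 < len(lidar_stamps) and \
                abs(lidar_stamps[i + 1] - ct) <= abs(lidar_stamps[i] - ct):
            i += 1
        if abs(lidar_stamps[i] - ct) <= tol_s:
            pairs.append((lidar_stamps[i], ct))
    return pairs
```

Dropping out-of-tolerance pairs instead of forcing a match is the conservative choice: a stale pairing silently misaligns points and pixels, which is exactly the failure mode feature fusion cannot tolerate.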
Full-stack fusion computing solution for autonomous driving
Relying on strong fusion perception capabilities, Juefei has launched a "full-stack fusion computing solution". It provides customized front-end and back-end positioning algorithms, multi-sensor fusion perception algorithms, and dynamic traffic information services for different scenarios and customers, covering the needs of every stage of autonomous driving. The solution can not only provide L2+ systems with pure-vision fusion perception and fusion positioning services, but also integrate high-performance computing platforms, deploying fusion perception, fusion positioning, and other services according to different computing power budgets. The solution has already been applied in scenarios including lane changes, long tunnels, and under viaducts, and Juefei has reached cooperation with multiple OEMs, providing many partners with customized, high-specification, mass-producible services along the dimensions of accuracy, reliability, and cost.
Juefei Technology LiDAR perception effect demonstration - highway scene
Juefei Technology LiDAR perception effect demonstration - parking lot
Juefei Technology LiDAR perception effect demonstration - roads within the community
With the continuous iteration of LiDAR-vision fusion perception technology and the ongoing accumulation of scenarios and cases, Juefei's full-stack fusion computing solution will bring a safer and more reliable future to autonomous driving.