1. Obtain 3D lane lines through vision+lidar
1.1 LiDAR Lane Detection Principle
Generally speaking, a lidar point carries four values: its x, y, z coordinates and an intensity that represents how strongly the hit surface reflects the laser. Lane lines are painted with highly reflective paint, so lidar points in lane line areas have higher intensity than points in other ground areas, as shown in the figure below: ordinary ground appears as low-intensity points (green), while lane lines, curbs, and road markings appear as high-intensity points (red and white). Lidar lane lines can therefore be extracted easily with either deep learning or traditional methods.
Figure 1 BEV view of the lidar point cloud; red and white are high-reflectivity points, green are low-reflectivity points.

On the other hand, Figure 1 also shows that because the lidar point cloud becomes sparse at distance, the effective range of lidar lane lines is short. In summary, lidar can provide 3D lane lines with limited range but high positioning accuracy. It is now common in the industry to extract lidar and vision features in the BEV view for subsequent fusion, and we likewise extract lane lines in BEV.
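As a rough illustration of how these high-intensity returns could be turned into a sparse BEV lane map, here is a minimal NumPy sketch; the thresholds, grid extents, and resolution are illustrative assumptions rather than values from this work (a real pipeline would typically use a learned BEV segmentation head or a tuned traditional extractor):

```python
import numpy as np

def lidar_lane_bev(points, ground_z_max=0.3, intensity_thresh=0.6,
                   x_range=(0.0, 40.0), y_range=(-10.0, 10.0), res=0.1):
    """Build a sparse BEV grid of lane-paint candidates from lidar points.

    points: (N, 4) array of [x, y, z, intensity] in the ground frame.
    All thresholds and grid parameters are illustrative, not tuned values.
    """
    x, y, z, intensity = points.T
    # Keep near-ground, high-reflectivity returns (lane paint, curbs, markings)
    mask = (z < ground_z_max) & (intensity > intensity_thresh)

    # Rasterize the surviving points into a BEV occupancy grid
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((h, w), dtype=np.float32)

    rows = ((x[mask] - x_range[0]) / res).astype(int)
    cols = ((y[mask] - y_range[0]) / res).astype(int)
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    grid[rows[valid], cols[valid]] = 1.0
    return grid
```

Because the point cloud thins out with distance, such a grid is dense only over the first few tens of meters, which is exactly the short-range limitation discussed above.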
1.2 Inspiration from Image Stitching
As mentioned above, lidar can provide high-precision BEV lane lines over a relatively short range (within 40 m), but 40 meters is clearly not enough for vehicle control. Visual 2D lane line detection reaches much farther and is accurate in 2D, but it is difficult to obtain high-precision 3D lane line information from it alone. This article discusses how to combine vision's high-precision 2D information with lidar's short-range 3D information to achieve long-range BEV lane line perception.
After lidar lane detection we obtain a sparse BEV map marking the lane line locations, as shown in Figure 2. The ground coordinate system is constructed as described in (RentyZhu: Autonomous Driving Series 3: A Simple Visual 3D Lane Line Perception Method). The BEV view can be treated as the image of a virtual camera looking straight down at the ground plane from a height of one meter, and its projection is that of a pinhole camera pointed at the ground.
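A minimal sketch of that projection, assuming a pinhole model with focal length $f$ and principal point $(c_u, c_v)$ (these intrinsics are illustrative assumptions, not values from the original setup):

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & c_u \\ 0 & f & c_v \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ h \end{bmatrix},
\qquad h = 1\ \text{m}
$$

Since every ground point lies at the same depth $h$, this reduces to $u = f x + c_u$, $v = f y + c_v$: the BEV image is a uniformly scaled, shifted copy of the ground plane, which is what lets the lidar BEV map serve as a second "view" of the lane lines.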
The real camera's optical axis is roughly parallel to the ground, so the camera's 2D lane line image and the BEV view can be regarded as images of the same lane lines from two different viewpoints. If we can use image stitching techniques to "stitch" the camera view onto the BEV view, we can in principle achieve better 3D lane line detection.
We investigated several image stitching methods; the article Computer Vision Life: Introduction to Computer Vision Direction | Image Stitching introduces the classic image stitching framework. Typical image stitching can be decomposed into four steps: feature extraction, feature matching, homography estimation, and image warping with fusion.
In our setting, feature extraction is already done: it consists of the vision and lidar lane line detection results. Feature matching becomes matching the vision 2D lane lines to the lidar 3D lane lines. The homography estimation methods based on point correspondences, widely covered in visual SLAM blog posts, do not apply here; this method instead estimates the homography from line-to-line matches, and the details will be expanded in subsequent blog posts. After obtaining the homography matrix H, we warp the visual 2D detection results into the BEV view and perform a simple weighted fit with the lidar BEV lane lines to complete the 3D lane line perception.
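A rough sketch of the warp-and-fuse step in Python (hypothetical illustration: the homography H is assumed to be already estimated from the line matches above, and replicating lidar points is only a stand-in for whatever weighting the actual fit uses):

```python
import numpy as np
import cv2

def fuse_lane_line(img_lane_pts, lidar_bev_pts, H, lidar_weight=3, order=3):
    """Warp one 2D lane line into BEV with H, then fit a single polynomial
    to the union of the warped vision points and the lidar BEV points.

    img_lane_pts:  (N, 2) pixel coordinates of the lane line in the image
    lidar_bev_pts: (M, 2) BEV (x, y) points of the same lane from lidar
    H:             3x3 image-to-BEV homography (assumed given here)
    lidar_weight:  how many times lidar points are replicated in the fit,
                   a crude placeholder for a proper weighted fit
    """
    # Perspective-warp the 2D detections into the BEV frame
    warped = cv2.perspectiveTransform(
        img_lane_pts.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)

    # Combine both sources: vision supplies range, lidar supplies near-field accuracy
    pts = np.vstack([warped] + [lidar_bev_pts] * lidar_weight)

    # Fit y = f(x) along the driving direction over the combined range
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], order)
    return np.poly1d(coeffs)
```

Vision contributes points far beyond the lidar's effective range, while the replicated lidar points anchor the near field, so the fitted polynomial keeps lidar-level accuracy up close and extends the lane line well past 40 m.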
1.3 Results
The experimental results are shown in the figure below. Pink is the lidar lane line, which is short; yellow is the ground truth; red is the lane line obtained by our method. The results show that this method greatly extends the perception range of the lane lines while preserving 3D accuracy.
Figure 2 Lane line BEV view. Pink: lidar detection result; Yellow: ground truth; Red: result of this method.
2. BEVFusion
In the previous blog (RentyZhu: Autonomous Driving Series 3: A Visual 3D Lane Perception && Ground Reconstruction Method), it was mentioned that vision can reconstruct the ground without supervision: