Analysis of the principle of lane line detection by lidar in automotive electronics


1. Obtain 3D lane lines through vision+lidar

1.1 LiDAR Lane Detection Principle

Generally speaking, each lidar point carries four dimensions of information: its xyz coordinates and an intensity value, where intensity reflects how strongly the struck surface returns the laser. Lane lines are typically painted with highly reflective paint, so lidar points falling on lane markings have higher intensity than points in other ground areas, as shown in the figure below. The road surface appears as low-intensity points (green), while lane lines, curbs, and road markings appear as high-intensity points (red and white). Lidar lane lines can therefore be extracted fairly easily with either deep learning or traditional methods.
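As a rough illustration of the traditional (non-learning) approach, the sketch below simply thresholds point intensity over an already-segmented ground region. The function name, the normalized intensity range, and the threshold value are illustrative assumptions, not the implementation used in this article:

```python
import numpy as np

def extract_lane_candidates(ground_points, intensity_thresh=0.7):
    """Select high-intensity points on the already-segmented ground.

    ground_points: (N, 4) array of x, y, z, intensity, with intensity
    assumed normalized to [0, 1]. Lane paint reflects the laser more
    strongly than asphalt, so a simple threshold separates lane-line
    candidates from the rest of the road surface.
    """
    mask = ground_points[:, 3] > intensity_thresh
    return ground_points[mask]

# Hypothetical usage with a binary sweep stored as float32 x, y, z, intensity:
# cloud = np.fromfile("sweep.bin", dtype=np.float32).reshape(-1, 4)
# lane_points = extract_lane_candidates(cloud, intensity_thresh=0.7)
```

In practice the threshold may need to vary with distance, since the returned intensity falls off for far-away points.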


Figure 1: BEV view of the lidar point cloud; red and white are high-reflectivity points, green are low-reflectivity points.

On the other hand, Figure 1 also shows that because the lidar point cloud becomes sparse at long range, the effective range of lidar lane line detection is short. In summary, lidar can provide 3D lane lines over a shorter range but with higher positioning accuracy. It is now common in the industry to extract lidar and vision features in the BEV view for subsequent fusion, and we likewise extract lane lines in BEV.


1.2 Inspiration from Image Stitching

As mentioned above, lidar can provide high-precision lane lines in the BEV view over a relatively short range (within about 40 m), but 40 m is clearly not enough for vehicle control. Visual 2D lane line detection reaches much farther and is accurate in 2D, but it is difficult to recover high-precision 3D lane line information from it alone. This article discusses how to combine vision's accurate 2D information with lidar's range-limited 3D information to achieve long-range BEV lane line perception.


After the BEV lane lines are obtained from lidar, they can be represented as a sparse BEV map that marks the lane line locations, as shown in Figure 2. The ground coordinate system is built following the method described in (RentyZhu: Autonomous Driving Series 3: A Simple Visual 3D Lane Line Perception Method). The BEV view can then be treated as the image of a virtual camera looking straight down at the ground plane from a height of one meter above it, and its projection equation is:

[Equation image: projection equation of the virtual BEV camera]
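A plausible form of this projection, as a sketch only: assuming a pinhole model for the virtual downward-looking camera at height h (so every ground point lies at depth h), a ground point (x, y) maps to a BEV pixel (u, v) as

$$u = f_x \frac{x}{h} + c_x, \qquad v = f_y \frac{y}{h} + c_y$$

where (f_x, f_y) and (c_x, c_y) are the virtual camera's focal lengths and principal point, so the mapping is a uniform scale plus offset. The exact equation in the original figure may differ.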

The real camera's optical axis is roughly parallel to the ground, so the camera's 2D lane line image and the BEV view can be regarded as images of the same lane lines taken from two different viewpoints. If we can use image stitching techniques to "stitch" the camera view onto the BEV view, we should in principle obtain better 3D lane line detection.
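As a concrete illustration of how the sparse BEV lane map above can be built, the sketch below projects the lidar lane points onto a ground-plane grid. The grid resolution, BEV window, and axis conventions are illustrative assumptions rather than the article's actual parameters:

```python
import numpy as np

def rasterize_bev_lane_map(lane_points_xy, res=0.1,
                           x_range=(0.0, 40.0), y_range=(-10.0, 10.0)):
    """Rasterize lidar lane points (ground frame: x forward, y lateral, in meters)
    into a sparse binary BEV image with `res` meters per pixel.

    This is equivalent to imaging the flat ground with the virtual
    downward-looking camera described above, at a scale of 1/res pixels per meter.
    """
    rows = int((x_range[1] - x_range[0]) / res)
    cols = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((rows, cols), dtype=np.uint8)

    u = ((x_range[1] - lane_points_xy[:, 0]) / res).astype(int)  # forward distance -> row (far = top)
    v = ((lane_points_xy[:, 1] - y_range[0]) / res).astype(int)  # lateral offset -> column
    valid = (u >= 0) & (u < rows) & (v >= 0) & (v < cols)
    bev[u[valid], v[valid]] = 1
    return bev
```

The camera's 2D lane detections are later warped into this same grid by the stitching step described next.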


We investigated several image stitching methods; among them, the article Computer Vision Life: Introduction to Computer Vision Direction | Image Stitching introduces the classic image stitching framework. Typical image stitching can be decomposed into the following four steps:

[Figure: the classic image stitching pipeline, feature extraction → feature matching → homography estimation → warping and blending]

In our setting, feature extraction is already done: the features are the vision and lidar lane line detection results. Feature matching then becomes the association of vision 2D lane lines with lidar 3D lane lines. Visual SLAM blog posts describe many homography estimation methods based on matched feature points, but they do not apply here; this method instead computes the homography from line-to-line matching pairs, and the details will be expanded in a later post. After obtaining the homography matrix H, we warp the visual 2D detection results into the BEV view and then perform a simple weighted fit with the lidar BEV lane lines to complete the 3D lane line perception.
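A minimal sketch of these last two steps, warping and the weighted fit, assuming the homography H has already been estimated from the line matches; the weighting scheme, polynomial order, and function names below are illustrative assumptions rather than the exact choices of this method:

```python
import numpy as np

def warp_points(H, pts_2d):
    """Apply a 3x3 homography H to Nx2 image points, returning Nx2 BEV points."""
    pts_h = np.hstack([pts_2d, np.ones((len(pts_2d), 1))])  # to homogeneous coordinates
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]                    # back to Cartesian

def fuse_lane(lidar_bev_pts, cam_bev_pts, w_lidar=2.0, w_cam=1.0, order=3):
    """Weighted polynomial fit over lidar points (near, accurate) and warped
    camera points (far, less accurate), both given as Nx2 arrays of
    (x forward, y lateral) in the same BEV/ground frame."""
    pts = np.vstack([lidar_bev_pts, cam_bev_pts])
    weights = np.concatenate([np.full(len(lidar_bev_pts), w_lidar),
                              np.full(len(cam_bev_pts), w_cam)])
    # Fit lateral offset y as a polynomial in forward distance x.
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=order, w=weights)
    return np.poly1d(coeffs)  # callable lane model: y = lane(x)

# Hypothetical usage: H from the line-pair matching, image_lane_pts from the
# 2D detector, lidar_lane_pts from the lidar BEV branch.
# lane = fuse_lane(lidar_lane_pts, warp_points(H, image_lane_pts))
```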


1.3 Results

The experimental results are shown in the figure below. Pink is the lidar lane line, which is short; yellow is the ground truth; red is the lane line produced by our method. The results show that this method greatly extends the lane line perception range while maintaining 3D detection accuracy.


Figure 2: Lane line BEV view. Pink: lidar detection result; yellow: ground truth; red: result of this method.


2. BEVFusion

In a previous blog post (RentyZhu: Autonomous Driving Series 3: A Visual 3D Lane Perception && Ground Reconstruction Method), it was mentioned that vision can reconstruct the ground without supervision:

[Figure: unsupervised ground reconstruction result]

