Vision-based lane detection has many defects
First, the visual system is very sensitive to background light. For example, on a tree-lined road in strong sunlight, dappled shadows break the lane lines into fragments, making them impossible to extract.
Secondly, the visual system requires complete lane markings. On roads that have long fallen into disrepair, the markings are unclear and incomplete, and the same is true even for some roads opened only a few years ago.
Third, the visual system requires lane lines in a unified format, which is especially important for systems that recognize lane lines against a model library. Some lane lines have unusual formats, such as blue lane lines or very narrow lane lines; to guarantee detection, the model library must be built by traveling across the country and collecting these unusual lane lines one by one.
Fourth, the visual system cannot cope with low-light environments, especially at night on roads without street lights. Generally, LKW requires a speed of more than 72 kilometers per hour to activate. One reason is that drivers rarely change lanes at higher speeds; another is that at lower speeds the visual system has too few sampling points, so the accuracy of the fitted lane lines is low. The effective range of LiDAR is generally 4-5 times that of the visual system, giving more effective sampling points, so at low speed its detection accuracy is much higher than that of the visual system.
Finally, if the lane line surface is covered by water, the vision system fails completely. The biggest advantage of the vision system is its low cost. Even so, since 2008 the academic community has rarely studied vision-based lane line detection, turning instead to LiDAR. LiDAR solves all of the above problems, including lane lines covered by water: LiDAR can penetrate water to a maximum depth of about 70 meters.
The only drawback of LiDAR is its high cost.
Lane detection based on radar scanning point density
Early LiDAR lane line detection was based on the density of radar scanning points. This method obtains the coordinates of the scanning points, converts them into a grid map (either a Cartesian grid map or a polar grid map), and maps the original data onto the grid.
Depending on the needs of post-processing, the polar grid map can be used directly for lane line recognition: a grid cell onto which many points map is taken to contain lane line points. This recognition method places high demands on feature extraction and is strongly affected by distance. The closer a polar grid cell is, the higher its resolution and thus the higher the lane line recognition accuracy; the farther away, the lower the resolution and the lower the accuracy. The density of points in the grid map is then used to extract the lane lines.
The point density can be obtained by histogram statistics, which is quick, intuitive, and easy to understand. Since the density-based detection method has no complicated intermediate processing, it has high real-time performance and is favored for rapid detection.
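The grid-and-histogram idea above can be sketched as follows. This is a minimal illustration, not the article's implementation: the `(x, y)` point layout, cell size, and count threshold are all assumptions.

```python
import numpy as np

def lane_candidates_by_density(points, cell=0.2, min_count=5):
    """Sketch of density-based lane candidate extraction.

    points: (N, 2) array of (x, y) scan-point coordinates in metres
            (x forward, y lateral -- a hypothetical layout).
    cell:   grid resolution in metres (assumed value).
    min_count: cells holding at least this many points become candidates.
    Returns the (x, y) centres of the dense cells.
    """
    x, y = points[:, 0], points[:, 1]
    # Cartesian occupancy grid: count how many points fall in each cell.
    xe = np.arange(x.min(), x.max() + cell, cell)
    ye = np.arange(y.min(), y.max() + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=(xe, ye))
    # Dense cells are treated as potential lane-line returns.
    ix, iy = np.nonzero(counts >= min_count)
    cx = xe[ix] + cell / 2
    cy = ye[iy] + cell / 2
    return np.column_stack((cx, cy))
```

A polar grid map would follow the same pattern, binning on range and bearing instead of x and y; as the text notes, its cells grow with distance, which is what degrades far-range accuracy.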
However, this method uses only the position of each scanning point and ignores the other information the radar returns. Road regions whose point density is similar to that of lane lines are easily mixed into the detection result; and when a lane line is close to or overlaps another obstacle, the two cannot be separated and must be kept or discarded as a whole. The method therefore has poor anti-interference ability, is prone to false detection, and is rarely used today.
Four methods of lane detection using LiDAR
There are currently four main methods for LiDAR to detect lane lines:
· Based on the width of the LiDAR echo;
· A grayscale image built from LiDAR reflection intensity, or intensity combined with elevation information to filter out invalid data;
· LiDAR SLAM combined with a high-precision map, which not only detects lane lines but also locates the vehicle;
· Using LiDAR's ability to capture the curb's height difference or its distinct physical reflection: the curb is detected first, and since the road width is known, the lane positions are computed from the distance. This method fails on roads where the height difference between the curb and the road surface is less than 3 cm.
The latter three methods require a multi-line LiDAR of at least 16 lines; the first can use a 4-line or even single-line LiDAR. Considering that the Audi A8 already uses a 4-line LiDAR, 4-line LiDAR has entered practical use.
Of course, these four methods can also be used in combination.
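The geometry behind the curb-based method is simple enough to sketch. The helper below is hypothetical; the 3.5 m lane width is an assumed standard value, not a figure from the article.

```python
def lane_positions_from_curb(curb_y, road_width, lane_width=3.5):
    """Given the lateral position of the detected curb and the known
    road width, place lane boundaries at multiples of the lane width.

    curb_y:     lateral offset of the curb from the sensor, in metres.
    road_width: known width of the road, in metres.
    lane_width: assumed standard lane width (3.5 m is a common value).
    Returns lateral positions of all lane boundaries, curb side first.
    """
    n_lanes = int(round(road_width / lane_width))
    return [curb_y + i * lane_width for i in range(n_lanes + 1)]
```

For example, a curb detected 7 m to the left on a 7 m road implies two lanes, with boundaries at -7.0, -3.5, and 0.0 m.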
Lane detection in two steps
Lane detection basically involves two steps: extracting geometric or physical features, and fitting a lane line to the discrete data. Whether the sensor is a camera or a LiDAR, the least squares method is usually used for the fitting.
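The least-squares fitting step can be sketched in a few lines. A polynomial model is one common choice (the article does not specify the model; degree 2 or 3 is typical for capturing road curvature):

```python
import numpy as np

def fit_lane_line(xs, ys, degree=2):
    """Least-squares fit of a polynomial lane-line model to the
    extracted feature points (xs: longitudinal, ys: lateral).
    Returns polynomial coefficients, highest order first."""
    return np.polyfit(xs, ys, degree)

# Usage: evaluate the fitted line at new longitudinal positions.
# coeffs = fit_lane_line(xs, ys)
# y_hat  = np.polyval(coeffs, x_query)
```

`np.polyfit` minimises the sum of squared residuals, which is exactly the least-squares criterion the text refers to.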
Detection based on LiDAR echo width
Ibeo's LiDAR is the most suitable for the first method. It features a unique triple-echo technology: each laser pulse returns three echoes, from which the measured object can be reconstructed more reliably, while returns from irrelevant sources such as rain, fog, and snow can be identified and filtered out.
As shown in the figure, W represents the echo pulse width and d the distance of the scanned target. Reflectivity is an inherent property of an object, affected by its material and color, and characterizes the object well: objects of different colors and densities have different reflectivities. The reflectivity determines the Ibeo echo pulse width, and since the road surface and lane lines differ markedly in reflectivity, the difference in echo pulse width can be used to distinguish them.
The figure above shows the echo widths over a typical lane marking: the echo width of the road surface is about 2 meters, while that of the lane line is about 4 meters.
Ibeo's vertical scanning angle is 3.2 degrees, divided into four scanning layers of 0.8 degrees each. When Ibeo is mounted horizontally, and given that its mounting height is limited by the vehicle body, the lower two layers (the first and second) mainly return information from the road surface, while the upper two (the third and fourth) mainly return information from objects at a certain height above the road.
Because a laser beam returns an echo as soon as it hits an object, the scanning distance of the first and second layers is much smaller than that of the third and fourth layers.
Theoretical analysis and experimental verification show that the first and second layers mainly return road surface, lane lines, and a small amount of obstacle and boundary data, while the third and fourth layers mainly return road boundaries, obstacles, and a small amount of road surface data. The feature seed point extraction stage therefore focuses on the radar data of the first and second layers. In this data, the biggest interference for lane line detection is the road surface, so the key to extracting lane line seed points is separating lane line features from road surface features.
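The layer selection described above amounts to a simple mask over the point attributes. The array layout below (per-point layer index and echo width) is an assumption for illustration:

```python
import numpy as np

def seed_candidate_points(points, layers, widths):
    """Keep only returns from the lower two scan layers (1 and 2),
    which carry most of the road-surface and lane-line information.

    points: (N, 2) point coordinates; layers: (N,) layer index 1..4;
    widths: (N,) echo pulse widths. This layout is hypothetical.
    Returns the filtered points and their echo widths.
    """
    mask = (layers == 1) | (layers == 2)
    return points[mask], widths[mask]
```

The retained echo widths are then what the thresholding step below operates on.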
Least squares method for fitting lane lines
The minimum intra-class variance algorithm is used to find the threshold separating the road surface from the lane lines, and error-analysis principles are used to eliminate gross errors within the lane line set, i.e., to remove interference and extract the lane line feature seed points. The lane line is then fitted.
Minimum intra-class variance is an adaptive thresholding method, a form of fuzzy clustering. The basic idea is to split the data into two classes with a threshold; since variance measures how uniformly values are distributed, the smaller the sum of the variances within the two classes, the smaller the difference within each class and the greater the difference between the classes.
If a threshold minimizes the intra-class variance, it is the optimal threshold for dividing the two categories, and using it minimizes the probability of misclassifying a point into the wrong category.
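The threshold search described above is essentially Otsu's method applied to echo widths. A minimal sketch, assuming the input is a flat array of per-point echo widths (the exhaustive candidate scan is for clarity, not efficiency):

```python
import numpy as np

def min_intra_class_variance_threshold(values, n_bins=64):
    """Find the threshold minimising the weighted sum of the two
    within-class variances (equivalent to Otsu's method).
    `values` would be per-point echo widths; here, generic scalars."""
    vals = np.asarray(values, dtype=float)
    # Candidate thresholds spanning the data range (endpoints excluded).
    candidates = np.linspace(vals.min(), vals.max(), n_bins)[1:-1]
    best_t, best_score = candidates[0], np.inf
    for t in candidates:
        lo, hi = vals[vals <= t], vals[vals > t]
        if lo.size == 0 or hi.size == 0:
            continue
        # Weighted intra-class variance: n0*var0 + n1*var1.
        score = lo.size * lo.var() + hi.size * hi.var()
        if score < best_score:
            best_t, best_score = t, score
    return best_t
```

With road-surface echo widths clustered around one value and lane-line widths around another, the returned threshold lands in the gap between the two clusters, which is exactly the "optimal division" property the text describes.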
Latest update time: 2024-11-16 09:44