Real-time detection of vehicle roads and sidewalks based on LiDAR in autonomous driving
This post was last edited by Hot Ximixiu on 2024-7-15 08:43
Summary
In autonomous driving, road and sidewalk detection in urban scenes is a challenging task. Traditional drivable-space and ground-filtering algorithms are not sensitive enough to small height differences. Camera-based and sensor-fusion-based solutions are widely used to distinguish drivable roads from sidewalks and to detect the road surface, but a LiDAR sensor alone already contains all the information needed for feature extraction. This paper therefore focuses on LiDAR-based feature extraction and proposes a real-time (20 Hz+) road and sidewalk detection solution that can also be used for local path planning. Sidewalk edge detection is a combination of three parallel algorithms. To verify the results, we used the de facto standard KITTI benchmark dataset as well as our own dataset, and open-sourced the code on GitHub.
Main content
The solution in this paper uses three different methods to find sidewalks. Notably, the output includes not only the point clouds of the road and the sidewalk, but also simplified vectors that are easy to process further. This output is very useful for downstream algorithms such as path planning, because it is a more concise representation of the road. As a model of the urban road and sidewalk environment, imagine a slightly warped flat road surface bordered by a somewhat higher, slightly uneven sidewalk. From a bird's-eye view, roads and sidewalks can take many shapes. Assuming the LiDAR sensor is mounted above the road surface, these characteristics and the simplified data are illustrated in Figure 1.
Figure 1 illustrates the problem: the road is shown in green, the sidewalk in red, and two channel (ring) measurements as dashed lines. Some artifacts, such as drains and other irregularities, are also shown.
The proposed solution has been released as open-source code under the name urban_road_filter. (I look forward to exchanging ideas with others working in autonomous driving.) The input is a plain LiDAR data stream, without a camera or any additional sensor data; the output is a 3D voxel point cloud of the road and sidewalk and a 2D polygon description of the road. The solution comprises three sidewalk detection methods (the star search, X-zero, and Z-zero methods), a road detection method, and road extraction based on 2D polygons.
Sidewalk detection
The sidewalk edge is detected by a combination of the star search, X-zero, and Z-zero methods. All three serve the same purpose but work differently, and they run in parallel; the final result is the logical OR of their outputs. False-positive curb points may appear behind the actual curb. Curb points are the boundary voxels between the curb and the road; false positives arise behind the curb when voxels of various imperfections happen to have similar 3D features. For example, a public bench protrudes from the sidewalk just as the curb protrudes from the road, which can lead to false detections. Since the final polygon is created between the road and the first curb point, curb points further out do not affect the final result. This phenomenon therefore has no negative impact on the method, because false-positive voxels never appear on the road surface itself.
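The logical combination of the three detectors can be sketched as a per-point OR over boolean masks. This is an illustrative toy example, not the released urban_road_filter code; the function name and the mask representation are assumptions.

```python
import numpy as np

def fuse_curb_masks(star_mask, xzero_mask, zzero_mask):
    """Combine the three detectors' per-point curb flags with a logical OR.

    Each mask is a boolean array over the same point cloud: True where
    that method marked the point as a curb (sidewalk edge) point.
    """
    return star_mask | xzero_mask | zzero_mask

# Toy example: six points, each method flags a different subset.
star  = np.array([False, True,  False, False, False, False])
xzero = np.array([False, True,  True,  False, False, False])
zzero = np.array([False, False, True,  False, True,  False])

curb = fuse_curb_masks(star, xzero, zzero)
```

Because the polygon is built from the first curb point only, any extra True flags behind the curb are harmless, which is why the simple OR fusion works.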
Star search method
This method divides the point cloud into rectangular segments; the combination of these shapes resembles a star, which is where the name comes from. From each segment, candidate sidewalk start points are extracted. The algorithm is insensitive to the absolute Z coordinate, which means that in practice it performs well even when the LiDAR is tilted with respect to the road surface plane. The point cloud is processed in a cylindrical coordinate system (see Figure 2).
Figure 2. Star search method. The long rectangles (boxes) in a circular layout represent the parts cut out of the original LiDAR point cloud. In the enlarged image on the right, the red dot is the starting point of the sidewalk.
Figure 3 represents a single cut box (cuboid) of the scanned point cloud. The cuboid is defined by its eight vertices P1, ..., P8, and its orientation and position change iteratively through incremental rotations and translations: for each rotation step k = 1, ..., nk, the cuboid is translated in ni consecutive increments along the direction D. To make the algorithm easier to follow, Figure 3 also shows the symmetry plane π used in Figure 4, which gives a side view of the cut box.
Figure 3. Schematic diagram of a single rectangle cut from a scanned point cloud
Figure 4. Side view of the point cloud separation process and box selection point parameters
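The star layout and the per-segment search can be sketched as follows. This is a simplified stand-in for the paper's iterated-cuboid procedure: the segment assignment uses plain angular sectors, and the curb test compares consecutive height differences along increasing radius (rather than absolute Z, matching the method's insensitivity to sensor height). All names and the 0.05 m jump threshold are illustrative assumptions.

```python
import math

def star_sectors(points, n_sectors=8):
    """Assign each (x, y, z) point to one of n_sectors angular segments,
    approximating the radial 'star' layout of cut-out boxes."""
    sectors = [[] for _ in range(n_sectors)]
    for x, y, z in points:
        ang = math.atan2(y, x) % (2 * math.pi)
        idx = int(ang / (2 * math.pi) * n_sectors) % n_sectors
        sectors[idx].append((math.hypot(x, y), z))  # (radius, height)
    return sectors

def first_curb_point(sector, height_jump=0.05):
    """Walk outward along the radius; report the radius of the first point
    whose height rises by more than height_jump relative to its inward
    neighbour. Comparing consecutive z differences (not absolute z) keeps
    the test insensitive to the sensor's mounting height."""
    pts = sorted(sector)
    for (r0, z0), (r1, z1) in zip(pts, pts[1:]):
        if z1 - z0 > height_jump:
            return r1
    return None
```

Each sector yields at most one sidewalk start point (the red dot in Figure 2); points beyond it are ignored, consistent with the first-curb-point rule described above.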
X-Zero method
The X-zero and Z-zero methods find sidewalks by discarding the measured X and Z components, respectively. Both methods rely on the channel (ring) structure of the voxels, so the LiDAR must be roughly parallel to the road surface plane; this is a known limitation of these two algorithms and of the overall urban road filtering method. The X-zero method drops the X coordinate and works in cylindrical coordinates instead (see Figure 6). It iterates through the rings (channels) and defines triangles on the voxels.
Figure 6. X-zero method: cylindrical coordinate system, a single channel (ring)
Figure 7. X-zero method: visualization of the voxel triangles
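The triangle idea can be sketched in a toy variant: within one ring, each point forms a triangle with its neighbours a few positions away, the X component is dropped, and the apex angle is measured. On flat road the triangle is nearly degenerate (angle near 180°); a curb bends it. The (y, z) plane, the `span` parameter, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import math

def xzero_curb_angles(ring, span=2):
    """For each point in a LiDAR ring, form a triangle with the neighbours
    `span` positions away and measure the apex angle in the (y, z) plane
    (the X component is dropped, hence 'X-zero').

    ring: list of (y, z) pairs ordered along the ring.
    Returns one angle in degrees per interior point, None at the borders.
    """
    angles = [None] * len(ring)
    for i in range(span, len(ring) - span):
        ay, az = ring[i - span]
        by, bz = ring[i]
        cy, cz = ring[i + span]
        v1 = (ay - by, az - bz)          # vector to inward neighbour
        v2 = (cy - by, cz - bz)          # vector to outward neighbour
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue
        cosang = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
        angles[i] = math.degrees(math.acos(cosang))
    return angles
```

Thresholding these angles (e.g. flagging points well below 180°) would mark the candidate curb voxels for this ring.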
Z-Zero Method
The main difference of the Z-zero method is that the angle is computed from direction vectors obtained with a sliding window (5+5 voxels by default). The detailed steps are not expanded here.
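A minimal sketch of the sliding-window idea, under the assumption that the Z component is the one dropped (the ring is projected to the (x, y) plane) and that each window direction is taken as the simple chord over the window; the actual paper's computation may differ in detail.

```python
import math

def zzero_angle(ring, i, window=5):
    """Turn angle at ring index i, computed from two sliding windows.

    ring: list of (x, y) points ordered along a LiDAR ring (Z dropped,
    hence 'Z-zero'). The direction of the `window` points entering i is
    compared with the direction of the `window` points leaving i; a
    straight continuation (road) gives an angle near 0 degrees, while a
    sharp turn in the projected ring suggests a curb.
    """
    bx, by = ring[i]
    ax, ay = ring[i - window]
    cx, cy = ring[i + window]
    v_in = (bx - ax, by - ay)            # chord of the incoming window
    v_out = (cx - bx, cy - by)           # chord of the outgoing window
    n = math.hypot(*v_in) * math.hypot(*v_out)
    cosang = max(-1.0, min(1.0, (v_in[0] * v_out[0] + v_in[1] * v_out[1]) / n))
    return math.degrees(math.acos(cosang))
```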
Road representation based on 2D polygons
Along with sidewalk detection, our algorithm also provides a polygon, i.e., a vector output of the detected road, created between the road voxels and the roadside voxels. This output can be used directly for path planning. The algorithm distinguishes two types of road boundaries: sidewalks and boundaries formed by obstacles (marked with red stripes in Figure 8).
Figure 8. Example scene with the 2D road polygon. The image on the left is not used by the algorithm; it only helps to understand the first half of the scene.
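One simple way to turn the detected curb points into such a vector output is to order them by angle around the sensor and connect them into a closed polygon. This is a sketch of the idea, not the package's actual polygon construction; both function names are illustrative.

```python
import math

def road_polygon(curb_points):
    """Order detected curb points by their angle around the sensor and
    connect them into a closed 2D polygon -- a simplified vector
    representation of the drivable road.

    curb_points: list of (x, y); returns the points in angular order.
    """
    return sorted(curb_points, key=lambda p: math.atan2(p[1], p[0]))

def polygon_area(poly):
    """Shoelace area of the closed polygon, e.g. as a sanity check on the
    extracted drivable region."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0
```

A planner can consume this handful of vertices directly, which is far cheaper than reasoning over the full road point cloud.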
Parameter settings
Several parameters can be used to fine-tune the solution, although even the default values produce adequate results; they are listed in Table 1. Important parameters are the LiDAR topic and its frame name. Note that the algorithm runs multiple methods at the same time. The size of the examined area can be set with several parameters: the region of interest (ROI) is configured via the x_direction parameter and the minimum and maximum x, y, and z parameters. The x_direction parameter can take three values, negative, positive, and both, indicating whether the region of interest lies behind, in front of, or on both sides of the LiDAR along the x-axis.
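The ROI logic described above could look roughly like this. The bound values below are illustrative defaults for the sketch, not the package's actual defaults, and the function name is an assumption.

```python
def in_roi(point, x_direction="both",
           min_x=-30.0, max_x=30.0,
           min_y=-10.0, max_y=10.0,
           min_z=-3.0, max_z=1.0):
    """Region-of-interest filter mirroring the parameters described above.

    x_direction selects whether points behind ('negative'), in front of
    ('positive'), or on both sides ('both') of the sensor along x are
    kept, combined with min/max bounds on x, y and z.
    """
    x, y, z = point
    if x_direction == "positive" and x < 0:
        return False
    if x_direction == "negative" and x > 0:
        return False
    return (min_x <= x <= max_x and
            min_y <= y <= max_y and
            min_z <= z <= max_z)
```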
Experiment
To evaluate the proposed method, extensive analysis and experiments on real-time data were performed. Figure 9 shows three images that illustrate our results. The first shows the road as green voxels and the sidewalk as red voxels; although some false-positive sidewalk points are visible, they do not affect the overall performance. The results were collected at 20 Hz at a speed of 30 km/h. In addition, accurate RTK-GPS positions were associated with the LiDAR data to make the results easier to interpret. The second image in Figure 9 overlays our results on a UAV image, while the third shows only the test site seen from above.
Figure 9. Experimental results. Left: road (green) and sidewalk (red). Middle: measurements overlaid on drone imagery. Right: the results visualized on the drone imagery alone.
Figure 10. In-car test of the approach, with the camera view on the left and the LiDAR 3D data on the right, where the road (green) and sidewalk (red) are highlighted. Voxel scale is based on LiDAR intensity.
Conclusion
This paper presents a new approach to road and sidewalk detection. Sidewalk curbs are detected by a combination of the star search, X-zero, and Z-zero methods introduced above, operating on 3D voxels. In addition, the method provides a polygon output that can be used directly for local trajectory planning. The method is evaluated through extensive real-time field tests and offline analysis of previous measurements and public datasets, and we compare its results with previous work. The solution has limitations. Both the X-zero and Z-zero algorithms require the LiDAR to be mounted parallel to the road surface; although this is a common sensor setup and our vehicle is equipped this way, there are special cases in which a different mounting is advisable, and our solution does not work with such configurations. A further limitation concerns solid-state LiDAR. This newer technology is attracting growing interest in the scientific community: although such sensors are not yet widely commercialized, they offer longer lifetimes and lower power consumption, and they generate structured 3D information, just organized differently. The proposed method does not support 3D data from solid-state sensors.