To address the gradual drift of extrinsic parameters, a line-based approach is proposed for automatic online extrinsic calibration of LiDAR and camera in real-world scenarios. First, line features are extracted and filtered from point clouds and images. Then, an adaptive optimization method is used to provide accurate extrinsic parameters. The method is evaluated against ground truth on the KITTI benchmark, experimentally verifying its accuracy. In online experiments spanning hundreds of frames, the proposed method automatically corrects miscalibration and achieves an accuracy of 0.2 degrees, verifying its applicability in various scenarios.
1 Introduction
This paper addresses two problems:
Traditional manual calibration methods require specially designed objects such as calibration plates or manually selected points, which leads to a cumbersome calibration process.
Long-term operation and varying loads cause slight drift and cumulative errors in the extrinsic parameters, so automatic online calibration is needed to correct them.
The main contributions of this paper are:
An extrinsic calibration method for online automatic estimation of six-degree-of-freedom (6-DOF) parameters is proposed. This method reduces the drift error between sensors by using general straight line features and does not require manual selection of points and special targets, so it can be applied to any given scenario.
A point cloud line extraction method is proposed, which uses point cloud processing methods to filter noise data and accurately extract line features.
An adaptive optimization method and a confidence estimation of the results are introduced, which steer the optimization in the right direction and allow efficient computation of the calibration results.
2 Algorithm Overview
The proposed method consists of three steps:
A series of preprocessing steps is performed on the images and point clouds before feature extraction.
Line features are extracted from the images and point clouds and refined through feature filtering.
Finally, the point cloud line features are projected onto the pixel frame by adding small perturbations to the initial extrinsic parameters, and the score of each perturbation is calculated and optimized.
The algorithm framework of this article is as follows:
3 Algorithm Framework
3.1 Problem Statement
The problem of extrinsic calibration of LiDAR and camera is to determine the correct transformation matrix between them. It is defined as finding the rotation angle vector θ = (θx, θy, θz) and the translation vector t = (tx, ty, tz) that form the transformation T. The point cloud of the t-th frame is denoted by P^t and the image by I^t; p_i^t is the i-th LiDAR point and I^t(u, v) is the value of pixel (u, v). The cost score is calculated by projecting the LiDAR points onto the image, and the objective function is defined as:

T* = argmax_T Σ_{k = t−w+1}^{t} Σ_i [ α · I_h^k(π(T · p_{h,i}^k)) + (1 − α) · I_v^k(π(T · p_{v,i}^k)) ]

where π(·) denotes the projection of a camera-frame point onto the pixel plane, p_{h,i}^k and p_{v,i}^k are the horizontal and vertical LiDAR line-feature points of frame k, and I_h^k and I_v^k are the corresponding distance-transformed feature maps.
The horizontal and vertical line features are iterated over separately for each LiDAR point. The coefficient α assigns different weights to horizontal and vertical line features; in this paper α is set to 0.65 to strengthen the constraint on horizontal errors. In addition, w is the size of the sliding window: the score of the t-th frame is calculated over the previous w frames.
In simple terms, the horizontal and vertical line features detected by the lidar points are transformed into the camera coordinate system through the transformation matrix T, then projected onto the image, and then the scores are calculated through a sliding window on the image.
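The projection-and-score step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole intrinsics `K`, the distance-transformed feature maps `D_h`/`D_v`, the pre-split horizontal/vertical feature clouds, and all function names are assumptions for the sketch.

```python
import numpy as np

ALPHA = 0.65  # weight favouring the horizontal-error constraint (Sec. 3.1)

def project(points_cam, K):
    """Project 3-D camera-frame points to pixel coordinates."""
    uv = (K @ points_cam.T).T          # (N, 3) homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]      # perspective division

def frame_score(T, pts_h, pts_v, D_h, D_v, K):
    """Score one frame: sample the distance-transform maps at the
    projections of the horizontal and vertical LiDAR line features."""
    score = 0.0
    for pts, D, w in ((pts_h, D_h, ALPHA), (pts_v, D_v, 1.0 - ALPHA)):
        cam = (T[:3, :3] @ pts.T).T + T[:3, 3]       # LiDAR -> camera frame
        uv = np.round(project(cam, K)).astype(int)
        h, wid = D.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < wid) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        score += w * D[uv[ok, 1], uv[ok, 0]].sum()   # accumulate map values
    return score

def window_score(T, frames, K, w=3):
    """Sum the score over the last w frames (the sliding window)."""
    return sum(frame_score(T, *f, K) for f in frames[-w:])
```

Each frame tuple holds the horizontal and vertical feature clouds together with their feature maps; a candidate transformation is then ranked by its summed score over the window.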
3.2 Image Processing
In image processing, the RGB image is first converted into a grayscale image; then line features are extracted using the line feature extraction algorithm in [1]; and then the grayscale image is subjected to distance transformation.
The following figure shows this process.
The white edges in (b) and the white lines in (c) represent the clustered edge features and line features, respectively. As shown in (b), the clustered edge features remain disordered after the distance transform is applied. In contrast, the line features in (c) are better organized and produce smoother grayscale changes, which permits a larger search step size and prevents the optimization from falling into a local solution.
White pixels represent extracted features, and grayscale changes represent the distance to edge or line features. The whiter the pixel, the closer it is to the center of these line features.
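The grayscale map described above can be sketched with a standard Euclidean distance transform. The sketch assumes the binary line mask from the extraction algorithm of [1] is already given; the exponential falloff parameter `gamma` is an assumption for illustration, not a value from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def inverse_distance_map(line_mask, gamma=0.8):
    """Build the grayscale search map from a binary line-feature mask
    (Sec. 3.2): pixels on a line are 1.0 and brightness decays with
    distance, so whiter pixels are closer to a line centre."""
    # distance (in pixels) from every pixel to the nearest line pixel
    dist = distance_transform_edt(~line_mask)
    return gamma ** dist  # exponential decay away from the lines
```

A smaller `gamma` produces sharper maps for the fine search stage, while a value near 1 gives the wide, slowly varying maps used in the coarse stage.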
3.3 LiDAR Processing
In LiDAR processing, the principle is to use distance discontinuity to obtain more edge features. To achieve this goal, a local mapping method is used to merge three frames of point clouds into one frame so that one frame can present more points. Specifically, the NDT method is used to calculate the transformation matrix between the current frame and the previous two frames.
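The local-mapping merge can be sketched as below. NDT registration itself is not reimplemented here; the sketch assumes the 4x4 transforms mapping each earlier frame into the current frame are already available from it.

```python
import numpy as np

def merge_frames(clouds, transforms):
    """Merge consecutive LiDAR frames into the current frame (Sec. 3.3).
    transforms[k] is the 4x4 matrix mapping cloud k into the frame of
    the current scan; in the paper these come from NDT registration,
    here they are assumed given."""
    merged = []
    for pts, T in zip(clouds, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx4 homogeneous
        merged.append((T @ homo.T).T[:, :3])             # back to Nx3
    return np.vstack(merged)
```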
The following figure compares the boundary-line points extracted from a single frame and from a merged three-in-one frame. Figure (a) shows the denser point cloud obtained by merging three frames into one, which presents more points than the single-frame point cloud in Figure (b). This improves extraction performance, especially when a low-beam LiDAR is used.
The denser point cloud is then converted into an image in which each pixel stores the range of the corresponding LiDAR point. By comparing the range of the current point with that of its neighbours, outliers lying too far from their neighbours are eliminated and more accurate line features are extracted. Note that this paper also considers the range differences between multiple laser beams, which makes it possible to extract horizontal features and thus to minimize both horizontal and vertical errors using line features. Horizontal and vertical line features are stored in two separate point clouds. In this setting, the rarely occurring plane-intersection line features are ignored, which benefits computational efficiency.
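The range-discontinuity idea can be sketched on such a range image as follows. The split (within-beam differences marking vertical structures, across-beam differences marking horizontal ones) and the threshold value are assumptions for illustration.

```python
import numpy as np

def extract_edges(range_img, thresh=0.5):
    """Mark edge pixels in a range image by depth discontinuity (Sec. 3.3).
    Differences along a beam (rows) flag vertical-line candidates such as
    poles; differences across beams (columns) flag horizontal-line
    candidates. Both pixels of a discontinuous pair are marked."""
    dh = np.abs(np.diff(range_img, axis=1))   # along each beam
    dv = np.abs(np.diff(range_img, axis=0))   # across beams
    vert = np.zeros_like(range_img, dtype=bool)
    horiz = np.zeros_like(range_img, dtype=bool)
    vert[:, :-1] |= dh > thresh
    vert[:, 1:] |= dh > thresh
    horiz[:-1, :] |= dv > thresh
    horiz[1:, :] |= dv > thresh
    return horiz, vert
```

The two boolean masks correspond to the two separate feature point clouds kept by the method.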
3.4 Feature Filtering
After the above processing, unordered line features are obtained. The following two-step filtering method is used to eliminate outliers.
Step 1: Since the point cloud has been converted into an image, a convolution kernel is designed to filter out points whose distance from all eight neighboring points exceeds a certain threshold. This filtering method can remove all abnormal points and points corresponding to the ground. The remaining features can be identified as line features.
The line features before and after filtering are shown in the following figure.
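Step 1 can be sketched with shifted comparisons over the range image. The sketch covers only the depth-isolation test (not the ground removal the text also attributes to this step); the threshold and the edge-padding choice, which conservatively keeps border pixels, are assumptions.

```python
import numpy as np

def neighbour_filter(range_img, feature_mask, thresh=1.0):
    """Step-1 filter (Sec. 3.4): drop a feature point whose range differs
    from ALL eight neighbours by more than `thresh`, i.e. a point that is
    isolated in depth."""
    padded = np.pad(range_img, 1, mode='edge')
    keep = np.zeros_like(feature_mask)
    h, w = range_img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            keep |= np.abs(range_img - shifted) <= thresh  # any close neighbour
    return feature_mask & keep
```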
Step 2: A clustering algorithm is used to remove line features with few adjacent points and to eliminate line features shorter than 8 pixels.
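Step 2 can be sketched with connected-component labelling as a stand-in for the unspecified clustering algorithm; the 8-pixel minimum follows the text, the 8-connectivity choice is an assumption.

```python
import numpy as np
from scipy.ndimage import label

def drop_short_segments(feature_mask, min_pixels=8):
    """Step-2 filter (Sec. 3.4): group feature pixels into 8-connected
    components and discard components smaller than `min_pixels`."""
    structure = np.ones((3, 3), dtype=int)   # 8-connectivity
    labels, _ = label(feature_mask, structure=structure)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                             # label 0 is background
    keep_ids = np.flatnonzero(sizes >= min_pixels)
    return np.isin(labels, keep_ids)
```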
The above two filtering steps can provide more organized point cloud line features, ensuring better optimization effects in subsequent steps.
3.5 Adaptive Optimization
Before optimization, the LiDAR line features have already been projected onto the image, and the proportion of LiDAR points falling into the gray area has been calculated.
To find the solution accurately, two search steps are used.
First, in order to prevent the search from falling into a local solution, a coarse search with wide image line features, small grayscale changes, and relatively large step sizes is used to quickly discover areas that may contain the best solution.
Then, finer image line features and larger grayscale variations are applied, along with smaller step sizes, to obtain more accurate calibration results.
The switch between these two stages of step size and grayscale variation occurs when the proportion of LiDAR points projected into the gray area exceeds a certain threshold.
In order to improve the computational efficiency, an adaptive optimization method is proposed to make the optimization proceed in the right direction.
The score of the current parameter is compared with the scores of its 728 neighbours (the 3^6 − 1 combinations obtained by perturbing each of the six degrees of freedom by −step, 0, or +step). During this process, as soon as the search finds a parameter with a higher score, the current search is stopped and a new search is started at that location. The search also stops when the set number of iterations is reached or the best score is found, which improves computational efficiency. In addition, a sliding window sets the frames considered during optimization; in this paper three frames are used to prevent the search from heading in the wrong direction or falling into a local optimum. The final optimized extrinsic parameters must therefore outscore all other candidates in every frame of the sliding window.
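The adaptive search over the 728 neighbours can be sketched as a greedy first-improvement search; running it twice, first with a coarse step and then with a fine step, mirrors the two-stage scheme of Sec. 3.5. The restart-on-improvement behaviour follows the text; everything else (names, defaults) is illustrative.

```python
import numpy as np
from itertools import product

def adaptive_search(score_fn, x0, step, max_iter=50):
    """Greedy neighbourhood search (Sec. 3.5, sketch). Each iteration
    scores the 3^6 - 1 = 728 neighbours of the current 6-DOF vector
    (each component perturbed by -step, 0, or +step); the search jumps
    to a neighbour as soon as it beats the current score, and stops at
    a local maximum or after max_iter iterations."""
    x = np.asarray(x0, dtype=float)
    best = score_fn(x)
    for _ in range(max_iter):
        improved = False
        for deltas in product((-1, 0, 1), repeat=6):
            if deltas == (0, 0, 0, 0, 0, 0):
                continue
            cand = x + step * np.array(deltas)
            s = score_fn(cand)
            if s > best:               # restart the search at the better point
                x, best, improved = cand, s, True
                break
        if not improved:               # local maximum reached
            break
    return x, best
```

A coarse-to-fine run would be e.g. `x, _ = adaptive_search(f, x0, 0.5)` followed by `adaptive_search(f, x, 0.1)`.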
4 Experiments and Results
Two experiments were conducted in different scenarios of the KITTI dataset.
4.1 Accuracy Analysis
Figures (a) and (b) show the results of Experiment 1, and Figures (c) and (d) show the results of Experiment 2. In both experiments, a 1-degree rotation bias was added to the X, Y, and Z axes, and a 0.05-meter translation bias was added to the ground-truth parameters; a further 0.5-degree rotation bias was then added every 10 frames. Note that the sign of each 1-degree rotation bias is random. The calibration error is compared with the ground truth, and the ability to detect miscalibration and the speed of correcting the bias are also tested.
Without accounting for human error, the maximum errors for roll, pitch, and yaw are always within 0.5 degrees. Due to the high horizontal resolution of the LiDAR, the calibration result for yaw is the most accurate. Although the LiDAR's vertical resolution is much lower and 3D features in this direction are less frequent, the proposed method still achieves high accuracy thanks to the adaptive optimization algorithm and the higher weight assigned to this direction. Overall, the average rotation error across all dimensions is 0.12 degrees, which is lower than that of most offline calibration techniques.