This paper introduces a robust automatic multi-camera calibration and correction method for road scenes. The method uses a coarse-to-fine random search strategy, which tolerates large perturbations of the initial extrinsic parameters and compensates for the tendency of nonlinear optimization methods to become trapped in local optima. Quantitative and qualitative experiments in real and simulated environments show that the proposed method is both accurate and robust.
1 Introduction
In this paper, we propose a fully automatic, targetless method for extrinsic calibration of panoramic cameras in road scenes based on photometric errors. Our method uses a coarse-to-fine random search strategy that adapts to large initial extrinsic parameter errors while avoiding the local optima that trap nonlinear optimization methods. Our method shows promising performance on both simulated and real datasets.
Our contributions are threefold:
A fully automatic, target-free method is proposed to calibrate the extrinsic parameters of a panoramic camera system based on the photometric errors of overlapping areas in the bird's-eye view (BEV) of a road scene.
A coarse-to-fine random search strategy is used to tolerate larger initial extrinsic parameter errors and avoid the local optima that trap nonlinear optimization methods.
The proposed method shows promising performance on our simulated and real datasets; in addition, we developed a practical calibration tool and open-sourced it on GitHub to benefit the community.
2 Related Work
Lane-based methods rely on capturing two parallel lane lines, while odometry-based methods fold the calibration problem into the optimization of a visual odometry or full SLAM system. Photometry-based methods use direct image alignment and local brightness to determine the optimization steps. Each of these approaches has limitations. Methods that rely strongly on parallel lane lines are not applicable to surround view system (SVS) scenarios, and odometry-based methods demand considerable time and computing resources. Although photometry-based methods are more robust in sparsely textured scenes, they still rest on nonlinear optimization and cannot handle large perturbations. Further research on SVS-based online calibration is therefore needed to overcome these limitations and improve the robustness and efficiency of calibration.
3 Methods
This section presents the details of our approach, including texture keypoint extraction, optimization loss, and coarse-to-fine solution.
3.1. Texture point extraction and optimization loss
This section discusses how to generate BEV images, extract texture points, project texture pixels back onto the original camera images, and define the optimization loss.
1) Projection model
The projection model uses the camera's pose and intrinsic matrix to project points from the ground coordinate system onto the camera image plane. The coordinates of the BEV image are then back-projected into the camera coordinate system to obtain the corresponding points on the original camera image. Finally, by applying this mapping, the BEV image for each camera view can be obtained.
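The projection described above can be sketched in a few lines; this is a minimal illustration assuming a pinhole model with intrinsic matrix `K` and camera pose `(R, t)`, and the helper names (`ground_to_image`, `bev_pixel_to_ground`) are hypothetical, not taken from the paper.

```python
import numpy as np

def ground_to_image(K, R, t, points_ground):
    """Project ground-plane points (z = 0 in the ground/vehicle frame)
    onto the camera image plane via p ~ K (R P + t)."""
    P_cam = points_ground @ R.T + t          # (N, 3) points in the camera frame
    p = P_cam @ K.T                          # (N, 3) homogeneous pixel coordinates
    return p[:, :2] / p[:, 2:3]              # (N, 2) pixel coordinates

def bev_pixel_to_ground(uv_bev, metres_per_pixel, bev_size):
    """Map BEV image pixels to ground-plane coordinates, assuming the
    vehicle sits at the centre of the BEV image."""
    cx, cy = bev_size[0] / 2.0, bev_size[1] / 2.0
    x = (uv_bev[:, 0] - cx) * metres_per_pixel
    y = (uv_bev[:, 1] - cy) * metres_per_pixel
    return np.stack([x, y, np.zeros(len(uv_bev))], axis=1)
```

Chaining the two (BEV pixel → ground point → original image pixel) gives the per-camera mapping used to render each BEV view.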
2) Texture point extraction
This section introduces the method of texture point extraction. According to the common field of view of adjacent cameras, the BEV texture pixels of one camera are projected back to the image of another camera using multi-view geometry prior knowledge and initial calibration parameters, and optimized by calculating the photometric loss so that the BEV texture positions of the two cameras overlap. In the common field of view area, the texture pixel points can be extracted by calculating the photometric gradient of the pixel and comparing it with the threshold. In addition, if the adjacent cameras have different exposure conditions, processing is required.
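A minimal sketch of the gradient-threshold texture selection and a simple exposure-gain compensation, assuming grayscale BEV images; the function names and the threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extract_texture_points(gray_bev, grad_thresh=20.0):
    """Select texture pixels in a grayscale BEV image by photometric
    gradient magnitude (a proxy for the paper's texture extraction)."""
    gy, gx = np.gradient(gray_bev.astype(np.float32))
    mag = np.hypot(gx, gy)                    # per-pixel gradient magnitude
    v, u = np.nonzero(mag > grad_thresh)      # rows, cols of textured pixels
    return np.stack([u, v], axis=1)           # (N, 2) pixel coordinates (u, v)

def exposure_gain(patch_a, patch_b):
    """Simple gain compensating differently exposed adjacent cameras:
    scale view B so its mean intensity matches view A in the overlap."""
    return float(np.mean(patch_a)) / max(float(np.mean(patch_b)), 1e-6)
```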
3) Calculating the photometric loss
This section describes how to calculate the photometric loss. First, one camera's BEV texture pixels are projected onto the adjacent camera's image to obtain the corresponding pixels, and the photometric loss between the two camera views is computed. The photometric loss measures the intensity difference between a camera's BEV image and the adjacent camera image. Expanding this expression yields the formula for the photometric loss, which is further expanded by substituting the pose and camera projection equations.
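The loss above can be sketched as a mean absolute intensity difference between corresponding samples in two adjacent views. This is an illustrative reconstruction (the paper's exact formula may differ), using bilinear sampling since projected pixel coordinates are generally subpixel.

```python
import numpy as np

def bilinear_sample(img, uv):
    """Bilinearly sample a grayscale image at subpixel locations uv (N, 2)."""
    u, v = uv[:, 0], uv[:, 1]
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    h, w = img.shape
    u0 = np.clip(u0, 0, w - 2); v0 = np.clip(v0, 0, h - 2)
    top = img[v0, u0] * (1 - du) + img[v0, u0 + 1] * du
    bot = img[v0 + 1, u0] * (1 - du) + img[v0 + 1, u0 + 1] * du
    return top * (1 - dv) + bot * dv

def photometric_loss(img_a, uv_a, img_b, uv_b, gain=1.0):
    """Mean absolute intensity difference between corresponding samples
    in two adjacent views, after exposure-gain compensation."""
    ia = bilinear_sample(img_a.astype(np.float32), uv_a)
    ib = gain * bilinear_sample(img_b.astype(np.float32), uv_b)
    return float(np.mean(np.abs(ia - ib)))
```

When the extrinsics are correct, corresponding samples land on the same ground texture and the loss is small; a miscalibration shifts `uv_b` and raises it.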
3.2. The coarse-to-fine solution
This section introduces a calibration pipeline aimed at generating seamless vehicle surround view images. First, the camera-to-vehicle extrinsics of the front-view camera are calibrated using vanishing points and horizon lines. Then, the extrinsics of all other cameras are estimated by recursively optimizing the photometric loss between adjacent cameras.

Since image alignment driven by photometric loss is a non-convex problem, it cannot be solved reliably with convex optimization techniques, and a reasonable, robust initial extrinsic estimate is required. To this end, this paper adopts a coarse-to-fine random search strategy that minimizes the photometric loss by randomly sampling the parameter space around the current best parameters. The strategy plays a role similar to gradient descent in nonlinear optimization, but it can escape local optima. At each stage of the random search, the current pose is evaluated by its photometric loss, and the best pose is updated according to the result. Over multiple rounds with shrinking search radii, a near-optimal pose is obtained, yielding a surround view image with a small photometric loss.
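The coarse-to-fine random search can be sketched generically as below; the stage radii, iteration counts, and uniform sampling distribution are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def coarse_to_fine_search(loss_fn, x0, radii=(1.0, 0.3, 0.1), iters=200, seed=0):
    """Coarse-to-fine random search: at each stage, sample perturbations
    of the current best pose parameters within a shrinking radius,
    keeping a sample whenever it lowers the (photometric) loss."""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_loss = loss_fn(best_x)
    for r in radii:                               # stages with shrinking radius
        for _ in range(iters):
            cand = best_x + rng.uniform(-r, r, size=best_x.shape)
            l = loss_fn(cand)
            if l < best_loss:                     # greedy update of the best pose
                best_x, best_loss = cand, l
    return best_x, best_loss
```

Unlike gradient descent, each stage's large sampling radius lets the search jump out of shallow local minima before the finer stages refine the estimate.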
4 Experiments
The experimental part of this paper is divided into two parts: real experiments and simulation experiments. The real experiments are carried out on our unmanned vehicle test platform, while the simulation experiments are based on the Carla engine. This design enables a comprehensive evaluation of the proposed method in both real and virtual environments.
Experimental setup
The experimental section introduces the machine configuration and dataset settings. The algorithm runs on the Ubuntu operating system with a dedicated processor configuration. The experiments use Carla simulation data and real-world fisheye camera data. To remove the area occupied by the ego vehicle, a region of interest (ROI) is used to mask the BEV image. This setup supports evaluating and verifying the performance of the algorithm.
Qualitative results
Our approach projects the four cameras into the BEV by calibrating the intrinsic and extrinsic parameters to better visualize the performance. Figure 5 shows the results for the pinhole camera, while Figures 6 and 7 show the results for the fisheye camera before and after calibration.
Quantitative results
This study compares against existing work and shows that our calibration method achieves high accuracy on the online extrinsic correction problem for surround view systems. Some existing methods fail to calibrate once the initial error exceeds 0.3°, while our method still converges; our algorithm tolerates initial errors of up to 3°.
5 Conclusion
In this study, we solve the online calibration problem of surround view cameras with a hierarchical coarse-to-fine random search, overcoming limitations of traditional feature-based and direct methods. Our method handles distortion and the pitfalls of nonlinear optimization well, and copes effectively with large initial errors. Future work includes optimizing the real-time performance of the algorithm and further improving its performance in weakly textured environments.