Automatic Surround Camera Calibration for Road-Based Self-Driving Cars

Publisher: Xiangtan · Last updated: 2023-08-24 · Source: elecfans

This paper introduces a robust automatic multi-camera calibration and correction method for road scenes. The method uses a coarse-to-fine random search strategy that tolerates large perturbations of the initial extrinsic parameters and avoids the tendency of nonlinear optimization methods to become trapped in local optima. Quantitative and qualitative experiments in both real and simulated environments show that the proposed method achieves both accuracy and robustness.


1 Introduction

In this paper, we propose a fully automatic, targetless method for extrinsic calibration of surround-view cameras in road scenes based on photometric error. Our method uses a coarse-to-fine random search strategy that can accommodate large initial extrinsic parameter errors while avoiding the local-optima problem of nonlinear optimization methods. It shows promising performance on both simulated and real datasets.


The authors' contributions are threefold:

A fully automatic and target-free method is proposed to calibrate the extrinsic parameters of a panoramic camera based on the photometric errors of overlapping areas in a bird's-eye view of a road scene.

A coarse-to-fine random search strategy is used to accommodate larger initial extrinsic parameter errors and to avoid the local-optima problem of nonlinear optimization methods.

The proposed method shows promising performance on the authors' simulated and real datasets; in addition, the authors developed practical calibration software and open-sourced it on GitHub to benefit the community.


2 Related Work

Lane-based methods rely on capturing two parallel lane lines, while odometry-based methods fold the calibration problem into the optimization of a visual odometry or full SLAM system. Photometry-based methods use direct image alignment and local brightness to determine the optimization steps. All of these approaches have limitations. Methods that strongly depend on parallel lane lines do not apply to surround-view system (SVS) settings, and odometry-based methods demand substantial time and computing resources. Photometry-based methods are more robust in sparsely textured scenes, but because they still rely on nonlinear optimization, they cannot handle large initial perturbations. Further research on SVS-based online calibration is therefore needed to overcome these limitations and improve the robustness and efficiency of calibration.


3 Methods

This section presents the details of our approach, including texture keypoint extraction, optimization loss, and coarse-to-fine solution.


3.1. Texture point extraction and optimization loss

This section discusses how to generate BEV images, extract texture points, project texture pixels back onto the original camera images, and compute the optimization loss.

1) Projection model

The projection model uses the camera's pose and intrinsic matrix to project points in the ground coordinate system onto the camera image plane. Each coordinate of the BEV image is back-projected into the camera coordinate system to obtain the corresponding point on the original camera image. Applying this mapping pixel by pixel yields the BEV image for each camera view.
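As an illustration of this projection chain, here is a minimal sketch in Python with NumPy. It assumes a pinhole model, a ground plane at z = 0 centred on the vehicle, and hypothetical names for the intrinsic matrix K and the ground-to-camera pose (R, t); none of these names come from the paper.

```python
import numpy as np

def bev_to_image(u_bev, v_bev, K, R, t, scale=0.01, bev_size=(1000, 1000)):
    """Map a BEV pixel to its pixel in the original camera image.

    The BEV image is assumed to be a top-down orthographic view of the
    ground plane z = 0, centred on the vehicle, with `scale` metres per
    pixel. K is the 3x3 intrinsic matrix; (R, t) transform ground-frame
    points into the camera frame. Returns None if the ground point lies
    behind the camera.
    """
    # BEV pixel -> metric ground-plane point (z = 0 in the ground frame)
    cx, cy = bev_size[0] / 2.0, bev_size[1] / 2.0
    X_ground = np.array([(u_bev - cx) * scale, (v_bev - cy) * scale, 0.0])

    # Ground frame -> camera frame
    X_cam = R @ X_ground + t
    if X_cam[2] <= 0:            # behind the image plane: no projection
        return None

    # Camera frame -> image plane (perspective division)
    p = K @ X_cam
    return p[0] / p[2], p[1] / p[2]
```

Iterating this mapping over every BEV pixel (and caching it as a lookup table) produces the per-camera BEV image described above.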

2) Texture point extraction

This section introduces texture point extraction. Using multi-view geometry priors and the initial calibration parameters, the BEV texture pixels of one camera are projected back into the image of the adjacent camera over their common field of view, and the photometric loss is optimized so that the BEV textures of the two cameras align. Within the common field of view, texture pixels are extracted by computing the photometric gradient at each pixel and comparing it against a threshold. If the adjacent cameras have different exposure conditions, additional exposure compensation is required.
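The gradient-threshold selection described above can be sketched as follows; the overlap mask, function name, and threshold value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def extract_texture_points(bev_gray, overlap_mask, grad_thresh=20.0):
    """Select texture pixels in the shared field of view of two cameras.

    A pixel qualifies when its photometric gradient magnitude exceeds
    `grad_thresh` and it lies inside `overlap_mask`, the boolean mask of
    the common-view region of the adjacent cameras.
    """
    # Central-difference photometric gradients (rows, then columns)
    gy, gx = np.gradient(bev_gray.astype(np.float64))
    grad_mag = np.hypot(gx, gy)

    # Keep only strong-gradient pixels inside the overlap region
    keep = (grad_mag > grad_thresh) & overlap_mask
    vs, us = np.nonzero(keep)
    return np.stack([us, vs], axis=1)   # (N, 2) array of (u, v) pixels
```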

3) Photometric loss calculation

This section describes how the photometric loss is calculated. One camera's BEV texture pixels are projected onto the adjacent camera's image to obtain the corresponding pixels, and the photometric loss between the two views is computed. The photometric loss measures the difference between one camera's BEV texture and the adjacent camera's image. Expanding this relation yields the formula for the photometric loss.


The formula is further expanded by combining the pose and camera projection equations.

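The loss described above can be sketched as a sum of absolute intensity differences over the selected texture points; `warp_ab` is a hypothetical callable standing in for the paper's projection chain from camera a's BEV into camera b's image under the current extrinsics.

```python
def photometric_loss(img_a, img_b, pts_a, warp_ab):
    """Average photometric error between adjacent grayscale views a and b.

    `pts_a` holds (u, v) texture pixels selected in camera a's view;
    `warp_ab(u, v)` returns the corresponding pixel in camera b's image,
    or None when no valid projection exists. Points landing outside
    image b are skipped.
    """
    h, w = img_b.shape
    total, count = 0.0, 0
    for u, v in pts_a:
        res = warp_ab(u, v)
        if res is None:
            continue
        ub, vb = int(round(res[0])), int(round(res[1]))
        if 0 <= ub < w and 0 <= vb < h:
            total += abs(float(img_a[v, u]) - float(img_b[vb, ub]))
            count += 1
    return total / count if count else float("inf")
```

A perfectly calibrated pair of views with identical exposure would drive this loss toward zero; the optimization in Section 3.2 searches the extrinsic parameters to minimize it.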

3.2. The coarse-to-fine solution

This section introduces the calibration pipeline, whose goal is a seamless vehicle surround-view image. First, the camera-to-vehicle extrinsics of the front camera are calibrated using vanishing points and horizon lines. The extrinsics of the remaining cameras are then obtained by recursively optimizing the photometric loss between adjacent cameras.

Because image alignment under a photometric loss is a non-convex problem, it is difficult to solve with convex optimization techniques, and a reasonable, robust initial extrinsic estimate is required. The paper therefore adopts a coarse-to-fine random search strategy that samples the parameter space around the current best parameters. This strategy plays a role similar to gradient descent in nonlinear optimization, but it can avoid falling into local optima. At each stage of the random search, a candidate pose is evaluated by its photometric loss, and the best pose is updated according to the result. Over multiple rounds with shrinking search radii, a near-optimal pose is obtained, yielding a surround-view image with small photometric loss.
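The search strategy can be sketched as follows. The 6-DoF pose is represented as a flat list of perturbation parameters, and the stage radii and iteration counts are illustrative assumptions, not values from the paper.

```python
import random

def coarse_to_fine_search(loss_fn, pose0, radii=(0.1, 0.03, 0.01),
                          iters=300, seed=0):
    """Coarse-to-fine random search over a pose parameter vector.

    At each stage, candidate poses are drawn uniformly around the
    current best within a shrinking radius; a candidate replaces the
    best pose only when it lowers the photometric loss `loss_fn`.
    """
    rng = random.Random(seed)
    best_pose = list(pose0)
    best_loss = loss_fn(best_pose)
    for radius in radii:                  # coarse -> fine stages
        for _ in range(iters):
            cand = [p + rng.uniform(-radius, radius) for p in best_pose]
            c_loss = loss_fn(cand)
            if c_loss < best_loss:        # greedy accept
                best_pose, best_loss = cand, c_loss
    return best_pose, best_loss
```

Unlike gradient descent, each stage's accepted step can jump anywhere within the current radius, which is what lets the search escape shallow local minima before the radius shrinks for refinement.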


4 Experiments

The experiments comprise two parts: real-world experiments on the authors' autonomous-vehicle test platform, and simulation experiments based on the Carla engine. This design allows the proposed method to be tested and evaluated in both real and virtual environments.


Experimental setup

This part introduces the machine configuration and dataset settings. The authors' algorithm runs on the Ubuntu operating system. The experiments use Carla simulation data and real-world fisheye camera data. To remove the area occupied by the ego vehicle, the authors filter the BEV image with a region of interest (ROI). This setup supports evaluation and verification of the algorithm's performance.


Qualitative results

To visualize performance, our approach projects the four camera views into the BEV using the calibrated intrinsic and extrinsic parameters. Figure 5 shows the results for the pinhole cameras, while Figures 6 and 7 show the fisheye camera results before and after calibration.

[Figure 5: BEV results with pinhole cameras]

[Figures 6 and 7: fisheye camera BEV results before and after calibration]

Quantitative results

Comparisons with existing work demonstrate that our calibration method achieves high accuracy on the online extrinsic correction problem for surround-view systems. Some prior methods fail to calibrate once the initial error exceeds 0.3°, whereas our method still converges with initial errors of up to 3°.


5 Conclusion

In this study, the authors address the online calibration problem for surround-view cameras with a hierarchical coarse-to-fine random search, overcoming some limitations of traditional feature-based and direct methods. The method handles distortion well, sidesteps the pitfalls of nonlinear optimization, and tolerates large initial errors. Future research directions include improving the algorithm's real-time performance and its behaviour in weakly textured environments.
