What is an automatic parking system? Analysis of automatic parking path planning and tracking technology

Publisher: Tianran2021 · Last updated: 2023-03-06 · Source: elecfans


More advanced fully automatic parking systems are combined with millimeter-wave radar, which offers stronger range detection and anti-interference capability. For example, to support automatic parking, some vehicles provide up to 12 parking radars, while a 360° camera lets the driver monitor the vehicle's surroundings from inside the car and intervene in the parking maneuver when necessary. Combined with sensor technology from unmanned driving, including millimeter-wave radar and lidar, detection range and accuracy can be improved further still.

Autonomous driving - AVM surround view system algorithm framework for automatic parking

AVM is a relatively mature technology that is already deployed in mid-to-high-end models, but few technical blogs describe the AVM system algorithms in detail. I have built an AVM algorithm framework and produced some good demos; my main goal here is to explain each operator in the framework clearly. The style of this article combines theory and practice and includes some code, so it is suited to readers with basic knowledge of computer vision.

AVM System Overview

As shown in the figure, the AVM surround-view system consists of four external fisheye cameras mounted on the front bumper, the trunk, and the left and right rearview mirrors. The operators in the system, in order, are: de-distortion, joint calibration of the four fisheye cameras, projective transformation, bird's-eye-view fine-tuning, stitching and fusion, 3D model texture mapping, and so on. The images captured by the four fisheye cameras pass through these operators to generate the 2D and 3D panoramic views. The AVM algorithm is divided into an offline stage and an online stage; the online stage is a simplification of the offline stage and is better suited to engineering deployment.

[Figure: Schematic diagram of the AVM system]

Offline phase algorithm pipeline

Let's take a quick look at the operators included in the AVM algorithm Pipeline:

2D AVM

[Figure: 2D AVM pipeline]

3D AVM

[Figure: 3D AVM pipeline]

Fisheye camera dedistortion based on distortion table

1.1 Fisheye camera distortion model

Ordinary and wide-angle cameras generally use perspective projection: by the principle of similar triangles, objects in the three-dimensional world under the camera coordinate system are projected onto a plane. This is the ideal perspective projection model (no distortion). In practice, the image actually obtained deviates somewhat from the ideal model, namely through radial distortion (barrel, pincushion) and tangential distortion, which is why distortion is corrected during camera calibration. For fisheye cameras, several projection models have been proposed, such as equidistant projection, equisolid-angle projection, orthographic projection, stereographic projection, and linear projection. A real fisheye lens, however, does not follow any of these models exactly, so Kannala and Brandt proposed a general polynomial form applicable to different types of fisheye cameras:

r(θ) = k1·θ + k2·θ³ + k3·θ⁵ + k4·θ⁷ + k5·θ⁹

which is also the basis of the fisheye distortion model in OpenCV, θ_d = θ·(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸). Here θ is the incident angle of the light ray, and r is the distance from the projected ray to the principal point O on the camera's normalized plane or on its imaging plane (in OpenCV, r denotes the imaging position on the normalized plane). Compare the following figure.

[Figure: Fisheye camera model]
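To make the model above concrete, here is a minimal sketch of the OpenCV-style fisheye projection; the coefficients and camera parameters are made up purely for illustration:

```python
import math

def kb_distort(theta, k):
    """OpenCV fisheye model: theta_d = theta*(1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8)."""
    t2 = theta * theta
    return theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

def project(point_cam, k, f, cx, cy):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    x, y, z = point_cam
    r = math.hypot(x, y)
    theta = math.atan2(r, z)          # incident angle of the ray
    theta_d = kb_distort(theta, k)    # distorted angle
    scale = theta_d / r if r > 1e-12 else 1.0
    return cx + f * x * scale, cy + f * y * scale
```

With all coefficients zero the model reduces to the ideal equidistant projection, which is a quick sanity check when experimenting with fitted coefficients.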

Camera dedistortion usually uses the checkerboard calibration method: a relatively good initial solution is first obtained through matrix derivation, and the optimal solution, comprising the camera's intrinsics, extrinsics, and distortion coefficients, is then obtained through nonlinear optimization; the fisheye image is dedistorted with these. The intrinsics form the camera intrinsic matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. However, this calibration method is not suitable for our scenario.

1.2 Fisheye image dedistortion based on the manufacturer's distortion table

Because the checkerboard method fits the whole image to obtain a global optimum, the checkerboard patterns captured across multiple shots must cover the entire image area. Our assumed scenario is calibration on the assembly line, with the cameras already installed on the vehicle; owing to occlusion by the vehicle body, full coverage is clearly hard to guarantee, and the checkerboard method is in any case unsuited to mass production. We therefore use the distortion table provided by the manufacturer to dedistort the fisheye images. Camera manufacturers employ professional optical engineers, and the distortion tables from large manufacturers are usually quite accurate. There are also optimization methods built on top of the distortion table, such as finding the optimal principal point by minimizing reprojection error and then dedistorting with the table; in other scenarios, the camera's intrinsics are calibrated first and then used together with the table. The following is a dedistortion method based on the distortion table:

[Figure: Distortion table provided by the manufacturer]

The table above shows part of a camera distortion table. For a lens of focal length f, the manufacturer gives the real distance (in mm) from each imaging point to the center of the camera's real imaging plane. To use the OpenCV API for dedistortion, you must use the manufacturer-provided focal length f to convert these distances to the camera's normalized plane (i.e., divide by f), then obtain the distortion coefficients by polynomial fitting, for example with scipy's curve_fit in Python. Finally, call the OpenCV API, where m_distortion_coeffs holds the fitted polynomial coefficients:

fisheye::initUndistortRectifyMap(m_intrinsic_matrix, m_distortion_coeffs, R, NewCoeff,
                                 image_size*2, CV_32FC1, mapx, mapy);
cv::remap(disImg, undisImg, mapx, mapy, INTER_LINEAR);
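The polynomial fit described above can be sketched as follows. The table here is synthetic, generated from known coefficients purely to illustrate the procedure; a real table would come from the manufacturer, and curve_fit would work equally well in place of the plain least-squares solve:

```python
import numpy as np

# Hypothetical distortion table: incident angle theta -> real image height r (mm),
# for a lens of focal length f (mm). Generated from known coefficients for illustration.
f_mm = 1.05
theta = np.deg2rad(np.arange(1, 90, 2))
k_true = (-0.03, 0.006, -0.0008, 0.0001)
r_mm = f_mm * (theta + k_true[0]*theta**3 + k_true[1]*theta**5
               + k_true[2]*theta**7 + k_true[3]*theta**9)

# Normalize the table heights by f, then least-squares fit the OpenCV fisheye
# model theta_d = theta + k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9.
theta_d = r_mm / f_mm
A = np.stack([theta**3, theta**5, theta**7, theta**9], axis=1)
k_fit, *_ = np.linalg.lstsq(A, theta_d - theta, rcond=None)
```

The four fitted values are what goes into m_distortion_coeffs in the OpenCV call above.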

In layman's terms, the fisheye dedistortion process traverses the coordinate points of the desired distortion-free image and, through the two lookup tables mapx and mapy, finds each point's pixel position in the distorted image. That position is usually a floating-point value, so bilinear interpolation is required; otherwise texture edges show jagged artifacts. This conclusion has been verified by reimplementing OpenCV's remap function; if you are interested, you can implement the mapping process (lookup + interpolation) yourself. Let's look at the result:
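A minimal sketch of that lookup-plus-interpolation step (the inner loop of remap), using a plain nested-list grayscale image for illustration:

```python
def bilinear_sample(img, x, y):
    """Sample a grayscale image (list of rows) at a floating-point (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def remap(src, mapx, mapy):
    """For each destination pixel, look up its source position and interpolate."""
    h, w = len(mapx), len(mapx[0])
    return [[bilinear_sample(src, mapx[r][c], mapy[r][c]) for c in range(w)]
            for r in range(h)]
```

Rounding to the nearest pixel instead of interpolating is what produces the jagged edges mentioned above.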

[Figure: Fisheye image dedistortion]

The right-hand image shows the result of dedistortion based on the distortion table. The effect generally meets the requirements: the edges of the pillar, the edges of the calibration cloth, and the lane lines come out straight. In some regions, however, the dedistortion is still poor and lines that should be straight are not straight enough; this problem is more prominent in the bird's-eye view and is an important cause of uneven stitching in the overlap areas. There may be several reasons: (1) the intersection of the camera's optical axis with the imaging plane, i.e. the principal point (cx, cy) in the intrinsic matrix, does not coincide with the image center; (2) the focal length f given by the manufacturer is inaccurate; (3) the distortion table given by the manufacturer has errors. In theory, camera calibration computes a global optimum. It can be understood like this: the intrinsics need not be perfectly accurate, and neither need the distortion table, but as long as the optimization target, the reprojection error, is small, or the distortion is removed relatively cleanly, the global solution is acceptable. We therefore obtain the intrinsics by minimizing the reprojection error and then apply the distortion table; in some scenes, others calibrate the camera's intrinsics with a checkerboard first and then use them together with the table. These points will be optimized later.


Four-way fisheye joint calibration

The purpose of joint calibration of the fisheye cameras is to obtain the pose relationships among the four cameras, so that the captured images can be placed into one common coordinate system to form the panoramic view.


The schematic diagram of camera joint calibration is shown in the figure. The field of view of the panoramic bird's-eye view is a human-specified parameter and can be adjusted to user preference. The size of the checkerboard on the calibration cloth, the size of the black squares, and the distance between the car and the calibration cloth are all known prior information. The scale between this prior information in the real world and the panoramic view is 1:1, i.e., 1 pixel represents 1 cm (this scale can also be adjusted so that one pixel represents n centimeters). The significance of joint calibration is that we know the coordinates of the checkerboard corners in each dedistorted fisheye image (front, rear, left, right) and the corresponding corner coordinates in the front, rear, left, and right bird's-eye views, so each whole image can be projected onto its bird's-eye view by a projective transformation. Moreover, since the four bird's-eye views are designed to tile together exactly in the joint calibration, projecting all four images onto the bird's-eye plane this way should, in the ideal error-free case, make them fit together seamlessly. That is the idea behind joint calibration.
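The projective transformation from dedistorted-image corner coordinates to bird's-eye-view coordinates is a homography, which can be estimated from the corner correspondences (OpenCV's findHomography does this). A minimal sketch of the direct linear transform, with made-up correspondences for checking:

```python
import numpy as np

def find_homography(src, dst):
    """Direct linear transform: solve H (up to scale) from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u*x, u*y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v*x, v*y, v])
    # The null vector of A (smallest singular value) gives the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

# Made-up correspondences following u = 2x + 10, v = 2y + 20, for checking:
src = [(0, 0), (4, 0), (4, 3), (0, 3)]
dst = [(10 + 2*x, 20 + 2*y) for x, y in src]
H = find_homography(src, dst)
```

In the AVM setting, src would be the checkerboard corners detected in a dedistorted fisheye image and dst the corresponding corner positions laid out on the bird's-eye canvas at the chosen 1 px = 1 cm scale.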
