What is an automatic parking system? Analysis of automatic parking path planning and tracking technology

Publisher: Tianran | Latest update: 2023-03-06 | Source: elecfans

[Figure: 2D AVM image in which the 3D objects (a wall on the left, a car on the right) appear stretched]

The three-dimensional objects in this 2D image are a wall on the left and a car on the right (yes, the car's front bumper really is stretched that long). The fisheye cameras mounted around the car body are monocular cameras, so they cannot recover the depth of three-dimensional objects. Graphics offers a way to enhance realism: build a three-dimensional model and map a two-dimensional texture onto it in some way. 3D AVM uses this texture mapping technique to present a pseudo-3D effect to the driver.

6.1 AVM 3D model construction

A 3D model is composed of small patches, which may be triangular patches, polygonal patches, and so on. Each patch is defined by several vertices; a triangular patch, for example, corresponds to 3 vertices. Let's first look at a three-dimensional model intuitively:

[Figure: a 3D model in 3dsMax, composed of many small polygonal patches]

If you zoom in on a 3D model in 3dsMax, you can see that it is made up of many small polygonal patches. There are many file formats for 3D models, but in general they all contain three-dimensional information such as vertex coordinates, patches, texture coordinates, and normal vectors. I won't describe how to build a 3D model in 3dsMax; if you are not a professional artist your method may not be very elegant, but you only need to understand the essence.

The AVM 3D model is a bowl-shaped model. It simulates the driver's perspective: the road surface near the car is mapped directly onto the flat bottom of the bowl, while content farther from the car may belong to 3D objects such as buildings, trees, or walls, and is mapped onto 3D points in some way. The following shows the essential information in the 3D model, including vertex coordinates, texture coordinates, normal vectors, and triangle patch indices.



// Vertex coordinates
v 166.2457 190.1529 575.8246
v 169.0261 192.6147 575.0482
v 163.5212 194.2559 576.8094
v 160.4214 177.1097 576.3941
v 160.5880 183.6252 577.0156
......
// Texture coordinates
vt 0.227618 0.463987
vt 0.254011 0.468448
vt 0.251903 0.470549
vt 0.248436 0.466586
vt 0.267204 0.509296
......
// Normal vectors
vn 0.3556 -0.4772 -0.8036
vn 0.3606 -0.4537 -0.8149
vn 0.3145 -0.3999 -0.8609
vn 0.3101 -0.3998 -0.8626
vn 0.3170 -0.3811 -0.8685
......
// Triangle patch (face) information
f 5825/5825/4368 5826/5826/4369 5827/5827/4370
f 5828/5828/4371 5829/5829/4372 5830/5830/4373
f 5831/5831/4374 5832/5832/4375 5833/5833/4376
f 5834/5834/4377 5835/5835/4378 5836/5836/4379
f 5837/5837/4380 5838/5838/4381 5839/5839/4382

6.2 3D model texture mapping

This section describes (1) where the texture is mapped from and to, and (2) what mapping strategy is used. The ultimate goal is to find, for each vertex of the 3D model, the corresponding texture coordinates on the 2D fisheye image. A 3D texture mapping method based on the idea of a virtual camera is used, as shown in the figure:

[Figure: schematic of 3D texture mapping based on a virtual camera]

The virtual-camera texture mapping model assumes that the panoramic bird's-eye view of 2D AVM was taken by a virtual camera directly above the car; this bird's-eye view is treated as a 2D texture and mapped onto the 3D model by perspective projection. In the figure, Lw-Rw is the panoramic bird's-eye view. The line from the virtual camera through vertex A intersects the bird's-eye view at A', which gives the 2D texture coordinate A' corresponding to vertex A. Then the texture coordinate of vertex A on the fisheye image is found through the inverse projection transformation H_inverse and the distortion lookup tables mapx and mapy. Traversing every vertex of the 3D model yields the mapping between the 3D model and the fisheye-camera texture coordinates:

1. According to the perspective projection principle, compute the bird's-eye-view texture coordinate A' corresponding to vertex A.

2. Use the matrix transformation and the inverse homography to deduce the coordinate A1 of A' on the de-distorted image.

3. Use the de-distortion lookup tables to find the coordinate of A1 on the original distorted fisheye image.

Repeating this process for every vertex gives the texture coordinates on the fisheye image for all vertices of the 3D model (a small code sketch is given after the process figure below).

The specific process is shown in the figure:

[Figure: 3D model texture mapping process]
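To make the per-vertex loop concrete, here is a minimal C++/OpenCV sketch of the three steps above. It assumes a pinhole virtual camera directly above the origin, a scaling of one bird's-eye pixel per ground unit, and hypothetical names (Vertex, projectToBirdEye, fillTextureCoords, H_inv, map1_x, map1_y); it illustrates the idea rather than reproducing the article's actual implementation.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// One vertex of the bowl model together with the texture coordinate we want to fill in.
struct Vertex {
    cv::Point3f pos;      // vertex position on the bowl model (ground plane is z = 0)
    cv::Point2f texFish;  // resulting texture coordinate on the fisheye image
};

// Step 1: project a vertex through the virtual camera (at height camHeight above the
// origin, looking straight down) onto the ground plane, i.e. the bird's-eye-view plane.
static cv::Point2f projectToBirdEye(const cv::Point3f& P, float camHeight)
{
    float s = camHeight / (camHeight - P.z);   // ray / ground-plane intersection scale
    return cv::Point2f(P.x * s, P.y * s);      // assumes 1 bird's-eye pixel per ground unit
}

// Steps 2-3: bird's-eye point -> de-distorted image (inverse homography H_inv),
// then de-distorted image -> original fisheye image (de-distortion lookup tables).
void fillTextureCoords(std::vector<Vertex>& verts,
                       const cv::Mat& H_inv,   // 3x3 CV_64F, bird's-eye -> de-distorted
                       const cv::Mat& map1_x,  // CV_32F de-distortion lookup tables
                       const cv::Mat& map1_y,
                       float camHeight)
{
    for (auto& v : verts) {
        cv::Point2f A1 = projectToBirdEye(v.pos, camHeight);              // step 1

        cv::Mat p = (cv::Mat_<double>(3, 1) << A1.x, A1.y, 1.0);
        cv::Mat q = H_inv * p;                                            // step 2
        double u = q.at<double>(0, 0) / q.at<double>(2, 0);
        double w = q.at<double>(1, 0) / q.at<double>(2, 0);

        // Step 3: clamp to the table and read the fisheye coordinate.
        int ui = std::min(std::max((int)u, 0), map1_x.cols - 1);
        int vi = std::min(std::max((int)w, 0), map1_x.rows - 1);
        v.texFish = cv::Point2f(map1_x.at<float>(vi, ui), map1_y.at<float>(vi, ui));
    }
}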

Let's look at the effect. The normal vectors of this model are inverted, so the lighting in the rendered result is wrong and the image looks very dark; still, you can see the pseudo-3D realism enhancement, so just get the idea.

[Figure: right model mapping]

6.3 3D fusion

In practice, four surface models are used: front, rear, left, and right. These four surface models correspond one to one with the four fisheye images. This is done to increase the parallelism of OpenGL rendering and to avoid if/else branching when stitching and blending. The 3D model in the figure above is the surface model corresponding to the left fisheye camera.

Let's first review the 2D AVM approach: generate a bird's-eye view and then blend. The bird's-eye view generated there was 1080*1080 pixels, equivalent to 1080 cm * 1080 cm in the real world, which is enough to show the area around the car body. However, this range at most covers the region near the bottom of the 3D bowl model, as shown in the figure below (ignore the jagged edges; they are an interpolation bug during rendering). From this figure you can see that if the chosen bird's-eye view is small, it only gets mapped onto the bottom of the bowl.

[Figure: the small bird's-eye view mapped onto the model]

Now enlarge the bird's-eye view and look again:

[Figure: the large bird's-eye view mapped onto the model]

Figure 1 is the de-distorted image from the left fisheye camera, Figure 2 is the bird's-eye view obtained by projecting Figure 1, and Figure 3 is the result of mapping it onto the left model. Note that in the actual algorithm implementation it is not feasible to really generate a bird's-eye view the way 2D AVM does. It is easy to see from the bird's-eye view that the part far from the chessboard is severely elongated: pixels close to the vanishing point and vanishing line in Figure 1 are pulled toward infinity in the bird's-eye view. The vanishing point and vanishing line in Figure 1 indicate that, for the current camera pose, all points of a plane (such as the ground) lie below this vanishing line. The bird's-eye view is equivalent to shooting with a camera parallel to the ground, so the image of points at infinity on the ground plane (i.e., the vanishing point and vanishing line in Figure 1) is inevitably stretched to infinity in the bird's-eye view, exactly as in Figure 2. If you are interested, take a look at an explanation of vanishing points.

[Figure: (1) de-distorted left fisheye image, (2) bird's-eye view, (3) result mapped onto the left model]
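A quick way to see why this stretching is unavoidable, using my own notation rather than anything from the original article: let H be the homography that maps a pixel x = (u, v, 1)^T of the de-distorted image to the bird's-eye view, so that H x = (x', y', w')^T and the bird's-eye coordinates are (x'/w', y'/w'). The vanishing line of the ground plane is, by definition, the image of the ground's line at infinity, so a pixel lying exactly on it satisfies w' = 0. As a pixel approaches the vanishing line, w' tends to 0 and the coordinates x'/w' and y'/w' grow without bound, which is exactly the unbounded elongation visible in the bird's-eye view.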

If you want to fill the entire bowl-shaped model with texture, you would need to generate a very large bird's-eye view, i.e., compute a very large map, which is a huge amount of computation. Therefore, in the algorithm implementation, we instead traverse each vertex of the model and perform inverse texture mapping to compute its texture coordinates (no longer relying on generating a bird's-eye view). The gaps between vertices are filled in by the rendering engine through interpolation, which is mature technology.

After all this, we finally get to three-dimensional fusion. 2D fusion performs morphological operations on the coverage areas of the bird's-eye views to obtain a picture like the one below, and then computes the blend weights from it. The 3D algorithm, however, works in terms of discrete points and no longer generates a huge bird's-eye view; in other words, it no longer computes a coverage image like the one below. So we need another way to solve the 3D fusion problem.

[Figure: bird's-eye-view coverage areas used for 2D fusion weights]

The overlapping area in the upper left corner is shown in the figure:

[Figure: overlapping area in the upper left corner]

The schematic of the overlapping area in the upper right corner conveys the idea: compute the bird's-eye-view texture coordinate B corresponding to each 3D vertex, then derive the blend weight from the angles between the vector AB and the boundary lines m and l. Of course, the real overlapping region is not this ideal: in the schematic, l and m intersect at a single point A, whereas the actual region looks like the figure above, so a strategy designed specifically for 3D AVM fusion is needed. Mapping the weights computed for the 3D model vertices onto a two-dimensional schematic gives the figure below (a small code sketch of the angle-based weighting follows it):

[Figure: 2D schematic of the weight map for the 3D model vertices]
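Here is that sketch. It is a minimal illustration only: the intersection point A, the unit boundary directions dl and dm, and the function name angleBlendWeight are hypothetical, and the special strategy needed for the non-ideal overlap regions mentioned above is not reproduced.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Blend weight for one vertex whose bird's-eye texture coordinate B lies in an
// overlap region. A is the (idealized) intersection of the seam boundaries l and m;
// dl and dm are unit direction vectors of those boundaries.
// Returns about 1 on boundary l, about 0 on boundary m, and a smooth value in between.
static float angleBlendWeight(const cv::Point2f& A,
                              const cv::Point2f& dl,
                              const cv::Point2f& dm,
                              const cv::Point2f& B)
{
    cv::Point2f ab = B - A;
    float lenAB    = std::sqrt(ab.x * ab.x + ab.y * ab.y) + 1e-6f;
    float cosToL   = (dl.x * ab.x + dl.y * ab.y) / lenAB;   // angle between AB and l
    float cosTotal = dl.x * dm.x + dl.y * dm.y;             // total angle between l and m
    float angToL   = std::acos(std::min(1.0f, std::max(-1.0f, cosToL)));
    float angTotal = std::acos(std::min(1.0f, std::max(-1.0f, cosTotal)));
    float w = 1.0f - angToL / angTotal;                     // closer to l gives a larger weight
    return std::min(1.0f, std::max(0.0f, w));
}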

Zooming in on the weight-map figure above, you can see the details. In general, the 3D AVM algorithm first builds a 3D model and then, through texture mapping, binds each vertex of the 3D model to coordinates on the 2D texture image; OpenGL then uses this data for rendering. The final 3D effect will be posted later.

Online stage: engineering implementation pipeline

Everything described so far is the algorithm flow of the offline stage. The offline stage runs only on the production line or in 4S stores; it is an initialization process. Its outputs include the distortion tables, projection transformation matrices, texture mapping relationships, stitching and fusion weight maps, and so on. Most importantly, the de-distortion, projection transformation, texture mapping and other steps are baked into lookup tables and stored in memory, which are then used directly during online processing. Part of the code is attached; the remap of the maps may be a little difficult to follow.



// The 4 labels are the position range of this bird's-eye view inside the AVM panorama
for (int i = label1; i < label2; i++)
{
    float *map2_x = map2_xR.ptr<float>(i);
    float *map2_y = map2_yR.ptr<float>(i);
    for (int j = label3; j < label4; j++)
    {
        Mat vec = (Mat_<double>(3, 1) << j, i, 1);   // grid coordinate on the AVM panorama
        vec = matrix * vec;                          // to bird's-eye-view coordinates
        Mat coor = Homo_inverse * vec;               // back-project from the bird's-eye view to the de-distorted image
        map2_x[j] = coor.at<double>(0, 0) / coor.at<double>(2, 0);   // normalize the homogeneous coordinate
        map2_y[j] = coor.at<double>(1, 0) / coor.at<double>(2, 0);
    }
}
// Compose map1 (de-distortion) with map2 (projection + rotation):
// the resulting mx/my map the AVM panorama directly back to the original fisheye image
remap(map1_y, my, map2_xR, map2_yR, INTER_LINEAR);
remap(map1_x, mx, map2_xR, map2_yR, INTER_LINEAR);
// de-distortion + projection + rotation + finetune
if (finetune)
{
    remap(mx, mx, m_finetune_l_blendX, m_finetune_l_blendY, INTER_LINEAR, BORDER_REPLICATE);
    remap(my, my, m_finetune_l_blendX, m_finetune_l_blendY, INTER_LINEAR, BORDER_REPLICATE);
}
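For completeness, here is a sketch of what the online stage then looks like for one camera once the combined tables mx and my have been loaded from memory; the function and variable names are mine, only cv::remap comes from the code above.

#include <opencv2/opencv.hpp>

// Online stage, per frame and per fisheye camera: a single remap through the
// precomputed tables applies de-distortion + projection + rotation (+ finetune) at once.
cv::Mat warpOnline(const cv::Mat& fisheyeFrame, const cv::Mat& mx, const cv::Mat& my)
{
    cv::Mat avmView;  // this camera's contribution to the AVM panorama
    cv::remap(fisheyeFrame, avmView, mx, my, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    return avmView;
}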
