The three-dimensional objects in the 2D image are a wall on the left and a car on the right (yes, the front bumper of the car really is stretched that long). The fisheye cameras mounted around the car body are monocular cameras, so they cannot recover the depth of three-dimensional objects. Computer graphics offers a way to enhance realism anyway: build a three-dimensional model and map a two-dimensional texture onto it in some fashion. 3D AVM uses exactly this texture-mapping technique to present a pseudo-3D effect to the driver.

6.1 AVM 3D model construction

A 3D model is composed of small patches, which may be triangles, polygons, and so on. Each patch is defined by several vertices; a triangular patch, for example, has 3 vertices. Let's look at a three-dimensional model intuitively:
If you zoom in on a model in 3dsMax, you can see that it is made up of many small polygonal patches. There are many file formats for 3D models, but in general they all contain the same three-dimensional information: vertices, patches (faces), texture coordinates, and normal vectors. I won't describe how to build a 3D model in 3dsMax; if you are not a professional artist your modeling workflow may be clumsy, but the essence is what matters. The AVM 3D model is bowl-shaped. It simulates the driver's perception of the scene: the road surface near the car is mapped directly onto the flat bottom of the bowl, while things farther from the car are likely to be 3D objects such as buildings, trees, or walls, and that content is mapped onto the 3D wall of the bowl in some fashion. The following shows the essential information in the 3D model, including vertex coordinates, texture coordinates, normal vectors, and triangle patch (face) indices.
//Vertex coordinates
v 166.2457 190.1529 575.8246
v 169.0261 192.6147 575.0482
v 163.5212 194.2559 576.8094
v 160.4214 177.1097 576.3941
v 160.5880 183.6252 577.0156
...
//Texture coordinates
vt 0.227618 0.463987
vt 0.254011 0.468448
vt 0.251903 0.470549
vt 0.248436 0.466586
vt 0.267204 0.509296
...
//Normal vector information
vn 0.3556 -0.4772 -0.8036
vn 0.3606 -0.4537 -0.8149
vn 0.3145 -0.3999 -0.8609
vn 0.3101 -0.3998 -0.8626
vn 0.3170 -0.3811 -0.8685
...
//Triangle patch information
f 5825/5825/4368 5826/5826/4369 5827/5827/4370
f 5828/5828/4371 5829/5829/4372 5830/5830/4373
f 5831/5831/4374 5832/5832/4375 5833/5833/4376
f 5834/5834/4377 5835/5835/4378 5836/5836/4379
f 5837/5837/4380 5838/5838/4381 5839/5839/4382

6.2 3D model texture mapping

This section describes: (1) where the texture is mapped from and to, and (2) what mapping strategy is used. The ultimate goal is to find, for each vertex of the 3D model, its corresponding texture coordinate on the fisheye 2D image. A 3D texture mapping method based on the idea of a virtual camera is used, as shown in the figure:
The virtual-camera texture mapping model assumes that the panoramic bird's-eye view of the 2D AVM was taken by a virtual camera placed directly above the car; this bird's-eye view is treated as a 2D texture and mapped onto the 3D model by perspective projection. In the figure, Lw-Rw is the panoramic bird's-eye view. The line from the virtual camera through vertex A intersects the bird's-eye view at A', which gives the 2D texture coordinate A' corresponding to vertex A. The texture coordinate of vertex A on the fisheye image is then found through the inverse projection transformation H_inverse and the distortion lookup tables mapx and mapy. By traversing every vertex on the 3D model, the mapping between the three-dimensional model and the fisheye texture coordinates is obtained:
(1) Using the perspective projection principle, compute the bird's-eye-view texture coordinate A' corresponding to vertex A.
(2) Use the matrix transformation and the inverse homography to deduce the coordinate A1 of A' on the undistorted image.
(3) Look up the coordinate of A1 on the distorted fisheye image through the undistortion lookup tables (map).
Repeating this process for every vertex yields the fisheye texture coordinates of all vertices on the 3D model; see the sketch below.
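As a rough sketch of this per-vertex traversal (my own illustration, not the article's code: the virtual-camera intrinsics, homography, and lookup-table names are placeholders), the computation might look like this in OpenCV-style C++:

#include <opencv2/opencv.hpp>

// For one model vertex, compute its texture coordinate on the distorted fisheye
// image. Everything here is a placeholder sketch, not the article's implementation:
//   K_virt        - 3x3 intrinsics (CV_64F) of the virtual camera looking straight down;
//                   the vertex is assumed to be expressed in that camera's frame
//   H_inverse     - 3x3 inverse homography (CV_64F): bird's-eye view -> undistorted image
//   map1_x/map1_y - undistortion lookup tables (CV_32F): undistorted -> fisheye pixel
cv::Point2f vertexToFisheyeTexture(const cv::Point3f &vertex,
                                   const cv::Mat &K_virt,
                                   const cv::Mat &H_inverse,
                                   const cv::Mat &map1_x,
                                   const cv::Mat &map1_y) {
    // (1) Perspective projection by the virtual camera gives the bird's-eye point A'.
    cv::Mat p = (cv::Mat_<double>(3, 1) << vertex.x, vertex.y, vertex.z);
    cv::Mat a = K_virt * p;
    double bu = a.at<double>(0) / a.at<double>(2);
    double bv = a.at<double>(1) / a.at<double>(2);

    // (2) The inverse homography takes A' back to the undistorted-image point A1.
    cv::Mat b = (cv::Mat_<double>(3, 1) << bu, bv, 1.0);
    cv::Mat u = H_inverse * b;
    float ux = static_cast<float>(u.at<double>(0) / u.at<double>(2));
    float uy = static_cast<float>(u.at<double>(1) / u.at<double>(2));

    // (3) The undistortion lookup tables map A1 onto the distorted fisheye image.
    int xi = cv::borderInterpolate(cvRound(ux), map1_x.cols, cv::BORDER_REPLICATE);
    int yi = cv::borderInterpolate(cvRound(uy), map1_x.rows, cv::BORDER_REPLICATE);
    return cv::Point2f(map1_x.at<float>(yi, xi), map1_y.at<float>(yi, xi));
}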
The specific process is shown in the figure:
3D model texture mapping process. Let's look at the effect: the normal vectors of this model are inverted, so the lighting in the rendered result is wrong and the image is very dark. Still, the pseudo-3D realism-enhancement effect is visible; the point is to grasp the idea.
Right model mapping

6.3 3D fusion

In practice, four surface models are used: front, rear, left, and right. These four surface models correspond one-to-one to the four fisheye images. This is done to increase the parallelism of OpenGL rendering and to avoid if/else branches during stitching and fusion. The 3D model in the figure above is the surface model corresponding to the left fisheye camera. Let's first review the 2D AVM approach: generate a bird's-eye view and then fuse. The bird's-eye view generated there was 1080*1080 pixels, equivalent to 1080cm*1080cm in the real world, which is enough to show the surroundings of the car body. However, that range can at best be mapped onto the area near the bottom of the 3D bowl model, as shown in the following figure (ignore the jagged edges, which are an interpolation bug during rendering). The figure shows that if the bird's-eye view is chosen too small, it is only mapped onto the bottom of the bowl.
The small bird's-eye view mapped onto the model. Now enlarge the bird's-eye view and see:
The large bird's-eye view mapped onto the model. Figure 1 is the undistorted image from the left fisheye camera. Figure 2 is the bird's-eye view obtained by projecting Figure 1. Figure 3 is the result of mapping it onto the left model. Note that in the actual implementation it is not feasible to really generate a bird's-eye view as in 2D AVM. It is easy to see from the bird's-eye view that the region far from the chessboard is severely elongated: pixels close to the vanishing point and vanishing line in Figure 1 get pulled toward infinity in the bird's-eye view. The vanishing point and vanishing line in Figure 1 indicate that, under the current camera pose, all points on a given plane (such as the ground) project below that vanishing line. The bird's-eye view is equivalent to shooting with the image plane parallel to the ground, so the image of the ground plane at infinity (i.e., the vanishing point and vanishing line of Figure 1) is inevitably stretched out to infinity in the bird's-eye view, as in Figure 2. If you are interested, look up the explanation of vanishing points.
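A tiny numerical illustration of why this happens (the homography below is made up purely for demonstration, not taken from the article): as an input pixel approaches the row where the projective denominator vanishes, its bird's-eye coordinate blows up.

#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    // Hypothetical ground-plane homography, chosen so that the denominator
    // w = 1 - 0.002*v vanishes at image row v = 500 (the "vanishing line").
    cv::Matx33d H(1.0,  0.0,   0.0,
                  0.0,  1.0,   0.0,
                  0.0, -0.002, 1.0);

    const double rows[] = {100.0, 300.0, 450.0, 490.0, 499.0};
    for (double v : rows) {
        cv::Vec3d p = H * cv::Vec3d(320.0, v, 1.0);     // project pixel (320, v)
        // As v -> 500 the denominator -> 0, so the bird's-eye coordinate -> infinity.
        std::printf("v = %5.1f  ->  bird's-eye y = %10.1f\n", v, p[1] / p[2]);
    }
    return 0;
}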
To fill the entire bowl-shaped model with texture, you would need to generate a very large bird's-eye view, i.e. compute a very large map, which is a huge amount of computation. Therefore, in the actual implementation you should instead traverse every vertex of the model and perform inverse texture mapping to compute its texture coordinate (no longer relying on generating a bird's-eye view at all). The gaps between vertices are filled in by the rendering engine through interpolation, which is mature technology. After all this, we finally come to three-dimensional fusion. 2D fusion performs morphological operations on the coverage areas of the bird's-eye views, obtains the picture below, and then computes the weights. The 3D algorithm thinks in terms of discrete points and no longer generates a huge bird's-eye view; in other words, it no longer computes an image covering the whole area like the picture below. We therefore need another way to solve 3D fusion.
The overlapping area in the upper left corner is shown in the figure:
The idea in the schematic of the overlapping region in the upper-right corner is: compute the bird's-eye-view texture coordinate B corresponding to each 3D vertex, then compute the blending weight from the angles that AB makes with the boundary lines m and l (a sketch of this angle-based weight is given below). Of course, the overlapping region is never this ideal: in the schematic, l and m intersect at point A, whereas the real situation looks like the figure above, so a strategy tailored to 3D AVM fusion is needed. Mapping the weights of the 3D model's vertices onto a two-dimensional schematic gives:
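The following is a minimal sketch of one way such an angle-based weight could be computed (my own illustration under the idealized schematic, not the article's implementation; A, B, and the seam directions are placeholders):

#include <algorithm>
#include <cmath>
#include <opencv2/opencv.hpp>

// Blend weight for one vertex in the overlap of two cameras.
// A           : point where the two seam lines l and m meet (idealized schematic)
// B           : bird's-eye-view texture coordinate of the 3D vertex
// dirL / dirM : unit direction vectors of seam lines l and m, starting at A
// Returns a weight in [0,1]: 1 when AB lies on l, 0 when AB lies on m.
static float blendWeight(const cv::Point2f &A, const cv::Point2f &B,
                         const cv::Point2f &dirL, const cv::Point2f &dirM) {
    cv::Point2f AB = B - A;
    float lenAB = std::hypot(AB.x, AB.y);
    if (lenAB < 1e-6f) return 0.5f;                        // vertex at the seam corner

    auto angleTo = [&](const cv::Point2f &d) {
        float c = (AB.x * d.x + AB.y * d.y) / lenAB;       // d assumed unit length
        return std::acos(std::max(-1.0f, std::min(1.0f, c)));
    };

    float aL = angleTo(dirL);                              // angle between AB and l
    float aM = angleTo(dirM);                              // angle between AB and m
    return aM / (aL + aM);                                 // closer to l -> weight near 1
}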
Zooming in, you can see that in general the 3D AVM algorithm first builds a 3D model and then binds every vertex of that model to the 2D texture map through texture mapping; OpenGL renders with this data. The final 3D effect will be posted later.

Online stage engineering implementation pipeline

Everything introduced so far is the algorithm flow of the offline stage. The offline stage is only run on the assembly line or in 4S stores; it is an initialization process. The initialization outputs include the distortion tables, projection transformation matrices, texture mapping relationships, stitching/fusion weight maps, and so on. The key point is to bake the undistortion, projection transformation, texture mapping, and related steps into lookup tables, store them in memory, and call them directly during online processing. Part of the code is attached; the "remap of a map" may be a little hard to follow.
//4 labels are the position coordinates of the bird's-eye view on the AVM panorama
for (int i = label1; i < label2; i++)
{
    float *map2_x = map2_xR.ptr<float>(i);
    float *map2_y = map2_yR.ptr<float>(i);
    for (int j = label3; j < label4; j++)
    {
        Mat vec = (Mat_<float>(3, 1) << j, i, 1);   // grid coordinates of the AVM panorama
        vec = matrix * (vec);                        // get the coordinates on the bird's-eye view
        Mat coor = Homo_inverse * vec;               // reverse projection from the bird's-eye view to the undistorted image
        map2_x[j] = coor.at<float>(0, 0);
        map2_y[j] = coor.at<float>(1, 0);
    }
}
// map1 (distortion) remapped through map2 (projection + rotation)
remap(map1_y, my, map2_xR, map2_yR, INTER_LINEAR);
remap(map1_x, mx, map2_xR, map2_yR, INTER_LINEAR);
// distortion + projection + rotation + finetune
if (finetune)
{
    remap(mx, mx, m_finetune_l_blendX, m_finetune_l_blendY, INTER_LINEAR, BORDER_REPLICATE);
    remap(my, my, m_finetune_l_blendX, m_finetune_l_blendY, INTER_LINEAR, BORDER_REPLICATE);
}
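The "remap of a map" composes two lookup tables into one: remapping the undistortion tables (map1) through the projection tables (map2) yields a single table that goes straight from panorama coordinates to fisheye pixels. A minimal self-contained illustration of the mechanism (dummy table contents and sizes are my own, only the composition pattern matches the code above):

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    // Stage 1: undistortion tables (fisheye pixel indexed by undistorted coordinate).
    // Stage 2: projection tables (undistorted coordinate indexed by panorama coordinate).
    // Small random tables just to show the mechanism.
    Mat map1_x(100, 100, CV_32F), map1_y(100, 100, CV_32F);   // undistorted -> fisheye
    Mat map2_x(100, 100, CV_32F), map2_y(100, 100, CV_32F);   // panorama   -> undistorted
    randu(map1_x, Scalar(0), Scalar(99)); randu(map1_y, Scalar(0), Scalar(99));
    randu(map2_x, Scalar(0), Scalar(99)); randu(map2_y, Scalar(0), Scalar(99));

    // Composition: for each panorama pixel p, the combined table must return
    // map1(map2(p)). remap() performs exactly this lookup-of-a-lookup:
    Mat mx, my;
    remap(map1_x, mx, map2_x, map2_y, INTER_LINEAR);
    remap(map1_y, my, map2_x, map2_y, INTER_LINEAR);

    // Online stage: a single remap per frame takes the fisheye image straight to
    // its place in the panorama, with no intermediate undistorted or bird's-eye images.
    Mat fisheye(100, 100, CV_8UC3, Scalar(0, 255, 0)), out;
    remap(fisheye, out, mx, my, INTER_LINEAR);
    return 0;
}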