Research on techniques for correcting the trapezoidal and barrel distortion of wide-angle lenses

Publisher: colchery    Last updated: 2013-11-25
This paper takes the Freescale Smart Car Competition as its background: using Freescale's 16-bit microcontroller MC9S12XS128 as the control core, we built a camera-guided car that follows a line at high speed. Because the camera's optical axis sits at an angle to the ground, the image suffers from trapezoidal (keystone) distortion; and because wide-angle lenses are increasingly adopted to widen the field of view, barrel distortion appears as well. Every camera team that uses a wide-angle lens runs into these two distortions. Many teams sidestep the problem and control the car directly from preprocessed pixel positions, but converting pixels into actual physical coordinates is far more intuitive and greatly simplifies both programming and modeling. The method proposed in this paper removes both distortions effectively, and the procedure is not complicated in practice.

  Overview of each team’s solutions

  The method proposed in reference [1] eliminates trapezoidal distortion by applying a linear correction to the road position extracted from each row; the linear compensation coefficients are determined experimentally. The experimental procedure, however, is fairly involved, and the method cannot remove barrel distortion.
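  As a minimal sketch of this idea (the coefficient values below are hypothetical placeholders, not those of reference [1]), a per-row linear correction can be applied to the detected road-centre column like this:

/* Per-row linear correction for trapezoidal distortion.
 * Assumption: each image row r has a scale factor k[r] that maps the
 * pixel offset from the image centre to a laterally corrected offset.
 * The values are hypothetical; in practice they are determined
 * experimentally as described in reference [1]. */
#define ROWS        60
#define CENTER_COL  64    /* image centre column of a 128-pixel-wide image */

/* coefficients scaled by 256 for fixed-point arithmetic; 256 = no scaling */
static const unsigned short k_q8[ROWS] = { 256 /* remaining entries from calibration */ };

/* map a detected road-centre column in row 'row' to a corrected column */
int correct_column(int row, int col)
{
    int offset = col - CENTER_COL;                 /* signed offset from centre */
    return CENTER_COL + (offset * k_q8[row]) / 256; /* scale and re-centre */
}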

  Reference [2] produced an image calibration plate, as shown in Figure 1.

  The principle is as follows: the shaded part in Figure 1(a) marks the position of the car body, and black lines are pasted on the calibration board at equal intervals. After photographing the board, the relationship between each line's actual position and its position in the image is known. Because the black lines have a finite width, however, this method introduces a considerable error.
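  In code, the measurements from such a board reduce to a small table mapping image rows to ground distances, with linear interpolation between the recorded lines. A sketch under assumed values (the table entries below are hypothetical):

/* Row-to-distance mapping from an equal-interval calibration board.
 * image_row[i] is the image row on which the i-th black line (at ground
 * distance i*LINE_SPACING_MM) was observed; near lines sit lower in the
 * image, i.e. at larger row numbers. Values are hypothetical examples. */
#define N_LINES          8
#define LINE_SPACING_MM  100

static const unsigned char image_row[N_LINES] = { 58, 49, 41, 34, 28, 23, 19, 16 };

/* interpolate the ground distance (mm) for an arbitrary image row */
int row_to_distance_mm(int row)
{
    int i;
    for (i = 0; i < N_LINES - 1; i++) {
        if (row <= image_row[i] && row >= image_row[i + 1]) {
            /* linear interpolation between the two bracketing lines */
            int span = image_row[i] - image_row[i + 1];
            int frac = image_row[i] - row;
            return i * LINE_SPACING_MM + frac * LINE_SPACING_MM / span;
        }
    }
    return -1; /* row outside the calibrated range */
}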

  Reference [3] adopts a non-uniform row-acquisition scheme, the opposite of uniform row acquisition. In uniform acquisition, the rows sampled by the AD module are evenly distributed over the image output by the camera. In non-uniform acquisition, the sampled rows are distributed non-uniformly over the original image according to a rule chosen so that the acquired image is undistorted in the longitudinal direction (along the car's centre axis) relative to the actual scene; a lateral distortion coefficient is then determined for each row.
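  Such a row table can be computed offline. Assuming a simple pinhole ground-projection model (camera height, pitch angle and focal length below are illustrative values, not parameters from reference [3]), a PC-side program can print the video lines whose ground distances are equally spaced:

/* Offline computation (on a PC) of a non-uniform row table: choose video
 * lines so that successive sampled lines see equally spaced ground
 * distances. Pinhole model: camera height H_MM, pitch THETA below
 * horizontal, focal length F_PX in pixel units (all assumed values). */
#include <stdio.h>
#include <math.h>

#define H_MM      250.0   /* camera height above ground, mm  */
#define THETA     0.50    /* pitch below horizontal, rad     */
#define F_PX      120.0   /* focal length in pixel units     */
#define V_LINES   240     /* video lines in the full image   */
#define N_ROWS    40      /* rows to sample                  */
#define Y_NEAR_MM 150.0   /* nearest ground distance sampled */
#define Y_FAR_MM  2000.0  /* farthest ground distance sampled */

int main(void)
{
    int i;
    for (i = 0; i < N_ROWS; i++) {
        double y = Y_NEAR_MM + (Y_FAR_MM - Y_NEAR_MM) * i / (N_ROWS - 1);
        /* video line offset (0 = optical centre, positive = down)
         * that sees ground distance y under the pinhole model */
        double v = F_PX * (H_MM * cos(THETA) - y * sin(THETA))
                        / (y * cos(THETA) + H_MM * sin(THETA));
        printf("%2d: line %d\n", i, (int)(v + V_LINES / 2.0 + 0.5));
    }
    return 0;
}

  The printed line numbers are then pasted into the MCU source as a constant array that drives the AD row trigger.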

  As shown in Figure 2, with non-uniform acquisition the sampled rows are dense in the distant part of the scene and sparse nearby. Since the camera mounting is changed frequently during experiments in order to find the optimal depression angle and height, recalibration is required after every change, which makes this scheme inconvenient.

  Reference [4] established a geometric model of the optical path, as shown in Figure 3.

  The experimental procedure is as follows. Measure the height H of the camera-mount fixing screw and the deflection (pitch) angle θ of the camera centre relative to the vertical rod. The position of the optical centre is completely determined by these two values together with the distance S from the near end of the field of view to the fixing rod (S is the sum of the distance S0 from the bumper to the fixing rod and the distance S' from the near end to the bumper; S can also be measured directly on the experimental board as the distance from the near-end black line to the camera fixing rod), so the more accurately they are measured the better. Draw a vertical line of length H from point O down to point A, draw a horizontal line AB, mark off AD of length S, and through O draw a ray at angle θ to the vertical, intersecting AB at C. Through D draw DE perpendicular to OC such that OC is the perpendicular bisector of DE; connect BE and extend it to intersect OC at O'. Then O' is the optical centre, the distance from O' to the bottom edge is H', and the pitch angle is unchanged. Next place the experimental board vertically and mark on it a square calibration area of side length A1 (the DE plane in Figure 3); the camera faces the centre C of the board horizontally, with the fixing screw at distance H1 from the board. Reading out the pixels corresponding to the feature points on the calibration board yields the relationship between the physical coordinates (X, Y) and the pixel coordinates (U, V) in Figure 4 (U is the row index, V the column index).

  Since the relationship between the experimental plane and the real field-of-view plane is purely geometric, the conversion functions can be derived geometrically. The formulas are fairly complicated and are not listed here.

  The biggest drawback of these formulas is that they contain many trigonometric functions such as sin() and cos(), and such calculations cost the microcontroller a great deal of time, so trigonometric functions and square roots should be avoided as far as possible. Moreover, if a wide-angle lens is used or the camera is mounted low, point B moves far away from point A and can no longer be found, so the method is not universal. The experiment itself is also rather involved.
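  A standard way around the trigonometric cost on the MC9S12XS128 is to evaluate the conversion formulas once on a PC and store the results as constant tables in flash, so that at run time the MCU only performs table lookups and integer arithmetic. A minimal sketch (the table contents are placeholders to be generated offline):

/* Precomputed pixel-to-physical-coordinate tables stored in flash.
 * dist_mm[u] is the longitudinal ground distance for image row u;
 * lat_scale_q8[u] is the lateral mm-per-pixel factor for row u,
 * scaled by 256 for fixed-point use. Both tables are generated
 * offline from the geometric formulas; values here are placeholders. */
#define ROWS       40
#define CENTER_COL 64

static const unsigned short dist_mm[ROWS]      = { 150 /* , ... */ };
static const unsigned short lat_scale_q8[ROWS] = { 256 /* , ... */ };

/* run-time conversion: no sin(), cos() or sqrt() needed */
void pixel_to_world(int u, int v, int *x_mm, int *y_mm)
{
    *y_mm = dist_mm[u];                                  /* longitudinal */
    *x_mm = ((v - CENTER_COL) * lat_scale_q8[u]) / 256;  /* lateral      */
}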

  The experimental method used in reference [5] is to draw a series of small squares on a whiteboard in advance; the smaller the squares, the higher the accuracy. A thick black centre line is then marked to fix the placement of the car and the centre of the image. As shown in Figure 5, the pixel coordinates corresponding to each feature point can be read off directly to establish the correspondence.
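  Once the feature-point correspondences are recorded, pixels that fall between grid nodes can be mapped by bilinear interpolation over the calibration grid. A sketch, assuming the grid data have been stored as two small tables (all names, sizes and the node spacing here are illustrative):

/* Bilinear interpolation over a calibration grid. grid_x/grid_y hold
 * the physical coordinates (mm) of grid nodes spaced GRID_STEP pixels
 * apart; their contents come from reading Figure-5-style feature
 * points. Callers must keep (u, v) inside the calibrated area. */
#define GRID_ROWS 7
#define GRID_COLS 9
#define GRID_STEP 16   /* pixel spacing of grid nodes (assumed) */

static const short grid_x[GRID_ROWS][GRID_COLS]; /* filled from calibration */
static const short grid_y[GRID_ROWS][GRID_COLS];

static short bilerp(const short t[GRID_ROWS][GRID_COLS], int u, int v)
{
    int i = u / GRID_STEP, j = v / GRID_STEP;    /* cell containing (u,v) */
    int fu = u % GRID_STEP, fv = v % GRID_STEP;  /* position inside cell  */
    int top = t[i][j]     + (t[i][j + 1]     - t[i][j])     * fv / GRID_STEP;
    int bot = t[i + 1][j] + (t[i + 1][j + 1] - t[i + 1][j]) * fv / GRID_STEP;
    return (short)(top + (bot - top) * fu / GRID_STEP);
}

void grid_pixel_to_world(int u, int v, short *x_mm, short *y_mm)
{
    *x_mm = bilerp(grid_x, u, v);
    *y_mm = bilerp(grid_y, u, v);
}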

  The experimental scheme is intuitive, but carrying it out is not necessarily easy: because the camera's field of view is wide, the calibration grid must be large, and it is difficult to guarantee that the grid lines drawn on it are exactly horizontal or vertical.

  Reference [6] uses geometric mathematical modelling to derive the relationship between the image coordinates of the picture captured by the camera and the actual world coordinates of the scene (the resulting coordinate-transformation formulas are given in [6] and are not reproduced here).

  After the camera is installed and fixed, the quantities c/tanθ, a, b, c, h and h/cosθ are all constants. This method works fairly well, but it requires knowing f, L and H. The manufacturer provides these three parameters, but they are not necessarily accurate; θ is also difficult to measure precisely, and the method cannot remove barrel distortion.
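  To illustrate the precomputed-constant idea, the same generic pinhole ground-projection model used in the row-table sketch above (not the exact formulas of reference [6]) can be written so that all trigonometric terms are evaluated once at initialization; the per-pixel conversion then needs only multiplications and one division:

/* Generic pinhole ground projection with constants precomputed at
 * start-up. h = camera height, theta = pitch below horizontal,
 * f = focal length in pixel units; u, v are pixel offsets from the
 * optical centre (v positive downward, v below the horizon line).
 * Illustrative model only, not the derivation of reference [6]. */
#include <math.h>

static float s_th, c_th, cam_h, cam_f;

void projection_init(float h, float theta, float f)
{
    s_th  = sinf(theta);   /* the only trigonometric calls, run once */
    c_th  = cosf(theta);
    cam_h = h;
    cam_f = f;
}

/* ground coordinates (y forward, x lateral) of the pixel (u, v) */
void pixel_to_ground(float u, float v, float *x, float *y)
{
    *y = cam_h * (cam_f * c_th - v * s_th) / (cam_f * s_th + v * c_th);
    *x = u * (*y * c_th + cam_h * s_th) / cam_f; /* similar triangles */
}

  In practice the remaining division would itself usually be replaced by a per-row table, as in the schemes above, since floating-point division is slow on the HCS12.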
