Research on Automobile Four-Wheel Alignment Technology Based on Computer Vision

Publisher:huanguuLatest update time:2011-07-27 Reading articles on mobile phones Scan QR code
Read articles on your mobile phone anytime, anywhere

Introduction

With the rapid growth in the number of cars, vehicle-testing equipment is improving in both quantity and quality. As an important part of vehicle testing, the measurement of wheel alignment parameters has a significant impact on the safety of the whole vehicle. Abnormal wheel alignment parameters lead to abnormal tire wear, pulling to one side, wheel vibration, heavy steering, increased fuel consumption and other problems, which directly affect driving safety.

Traditional four-wheel aligners are mainly of the laser, infrared, level, optical and pull-wire types. The latest product in the domestic and foreign vehicle-inspection industry is the image-based four-wheel aligner built on computer vision. This aligner relies entirely on computer image processing: it requires only two high-performance CCD cameras and four target disks mounted on the wheels (as shown in Figure 1), with no traditional electronic sensors, eliminating the faults that sensor circuitry can introduce. Compared with traditional aligners, the number of sensors is greatly reduced and repeated calibration is unnecessary; after a single calibration the instrument can be used repeatedly. It is simple to operate, fast to run, and highly accurate.



The product is technologically advanced, but at present it is mainly imported and expensive, its working principle is kept strictly confidential by the foreign manufacturers, and no detailed report on it exists in China. This paper therefore analyzes and discusses the principle in detail from two angles: the perspective-based method used abroad and the space-vector method proposed here. A mathematical model based on the space-vector method is given, and its effectiveness is verified through real-vehicle experiments.

1. Perspective-based method

Foreign V3D aligners (such as the John Bean aligner from the United States) use this principle. During detection, the camera captures the motion of the target disk mounted on the wheel; after image processing, the captured images are compared with known data. Using the principles of perspective and analytic geometry, the distance from the target to the camera, the rotation angle and other geometric parameters are calculated accurately from the position and size changes of the reflective spots on the target disk. After data processing, the alignment data of each wheel are obtained by comparison with the reference plane.

1.1 Basic principles of perspective

This method relies on the perspective principle and the perspective-foreshortening principle [3]. Perspective studies how the appearance of objects is generated and changes within a given visual space, by observing objects through an imaginary transparent plane.



Take a circle as an example. As shown in Figure 2(a), according to the perspective principle, as a circle approaches from a distance its apparent size grows larger and larger; that is, the same object looks larger when near and smaller when far. This principle can be used to measure the distance to the object.

As shown in Figure 2(b), according to the foreshortening principle, when a circle rotates about its horizontal axis its vertical dimension becomes shorter and shorter, until the circle degenerates into a line segment whose length is the circle's diameter. As the rotation continues, the segment widens back into an ellipse and finally returns to a circle. Throughout the rotation, the apparent length of the rotation axis (the diameter) remains unchanged. The angle through which the circle has rotated about the horizontal axis can therefore be computed from the change in its apparent height, and likewise for rotation about the vertical axis. By combining the horizontal- and vertical-axis effects, the angle through which the circle has rotated in any direction in space, and the spatial position of its rotation axis, can be calculated.
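The foreshortening rule above can be sketched in a few lines: the rotation angle follows from the ratio of the apparent (foreshortened) height to the true diameter. The function name and numbers are illustrative, not taken from the original system:

```python
import math

def rotation_angle_from_height(apparent_height, diameter):
    """Angle (degrees) a circle has rotated about its horizontal axis,
    inferred from the foreshortened vertical extent of its image."""
    ratio = apparent_height / diameter  # cos(theta) under foreshortening
    return math.degrees(math.acos(ratio))

# A circle of diameter 100 mm whose image height has shrunk to 50 mm
# has rotated 60 degrees about the horizontal axis (cos 60 deg = 0.5).
angle = rotation_angle_from_height(50.0, 100.0)
```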

1.2 Target distance and turning angle calculation

To obtain the alignment parameters, geometric quantities such as the camera-to-target distance and the rotation angle must first be determined; these follow from trigonometric functions and elementary geometry.

As shown in Figure 3, L is the initial position of the target and L' is its image; L1 is the position of L after rotation through θ, and L1' is its image. With the focal length f and the actual target size L (L1 = L) known, the image size L' of the target can be related to these quantities by the lens imaging formula, giving equation (1).


With f, L and L' known, the target distance p can be solved for, giving equation (2).

Similarly, the image of the target after it has rotated through θ, together with the angle α, can be calculated, and θ can then be found from α, L1 and d.

The distance, imaging size and spatial rotation angle are calculated through the above process.
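The distance calculation above amounts to similar triangles in the lens model. A minimal sketch, assuming the simple pinhole relation p = f·L/L' (all numeric values are made-up examples, not measurements from the paper):

```python
def target_distance(f_mm, target_size_mm, image_size_mm):
    """Distance to the target from similar triangles of the pinhole model:
    L'/f = L/p  =>  p = f * L / L'."""
    return f_mm * target_size_mm / image_size_mm

# Focal length 35 mm, a 200 mm target imaged at 2 mm -> distance 3500 mm.
d = target_distance(35.0, 200.0, 2.0)
```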

1.3 Obtaining positioning parameters

In this method, a circle is generally chosen as the regular figure on the target disk because its unique geometric properties (it is both axially and centrally symmetric) make it the most convenient figure for computing the related parameters. During alignment, three mutually perpendicular planes determined by the wheel axle serve as the reference: the body plane, the axle plane, and the wheel plane. The axle plane is the reference plane for the caster angle; the wheel plane is the reference plane for the toe angle, camber angle and kingpin inclination angle.

Once the actual size of the target is known and the camera-to-target distance, the imaging size and the rotation angle have been calculated, the alignment parameters can be obtained by calculation.

The target disc is mounted on the wheel at a fixed angle using a clamp. When the vehicle rolls forward and backward, the wheel and the target disc rotate together. The toe angle is detected from the rotation of the circle on the target disc about its longitudinal axis. At the same time, the symmetry lines of the target disc sweep out a set of vector surfaces during this motion; the angle between the two symmetry lines before and after the rotation is called the vector angle, and the camber angle of the wheel can be calculated from it [4] (as shown in Figure 4(a)).

When the vehicle is stationary (as shown in Figure 4(b)), the wheel and the target disc are steered to the left or right. The rotation of the circle on the target disc about its longitudinal axis gives the kingpin inclination angle; its rotation about its transverse axis gives the kingpin caster angle.


This method is relatively simple in principle and is protected by foreign patents. Although it places certain requirements on the shape of the pattern on the target disk, its derivation and calculation are simple and ingenious, it readily achieves rapid alignment, and it imposes no strict requirements on the alignment platform.

2. Space-vector-based method

In this method, the target disk (bearing a regular pattern) mounted on the wheel is photographed before and after the vehicle moves, and image processing and analysis then extract the feature points on the target disk. The spatial rotation vector of the wheel is calculated from the change in the spatial coordinates of the feature points, and the alignment parameters are obtained from the angles between this vector and the axes of the spatial coordinate system.

2.1 Reference coordinate system

In computer vision, three coordinate systems are needed: world coordinate system, camera coordinate system and image coordinate system.

The world coordinate system (Xw, Yw, Zw) is a reference coordinate system selected in the environment to describe the camera position. It can be freely selected based on the principles of description and calculation convenience. For some camera models, choosing an appropriate world coordinate system can greatly simplify the mathematical expression of the visual model.

The camera coordinate system (Xc, Yc, Zc) takes the optical center Oc of the camera lens as its origin. The Xc and Yc axes are parallel to the imaging plane, and the Zc axis is perpendicular to it; the intersection of the Zc axis with the imaging plane has image coordinates (u0, v0) and is called the camera principal point.

The image coordinate system is a rectangular coordinate system defined on the two-dimensional image. It comes in two forms, pixel-based and physical-length-based (e.g. millimeters), represented here by (u, v) and (x, y) respectively, as shown in Figure 5. The pixel-based coordinate system is the most commonly used, and its origin is usually placed at the upper-left corner of the image.


Assume the physical size of each CCD pixel in the X and Y directions is dx and dy (these parameters are provided by the camera manufacturer and are therefore known; the ratio dy/dx is called the aspect ratio). As shown in Figure 5, the relationship between the pixel coordinates (u, v) and the physical coordinates (x, y) on the image is u = x/dx + u0, v = y/dy + v0, which can be expressed with homogeneous coordinates and a matrix as formula (3).
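The homogeneous form of formula (3) can be sketched as follows. The pixel sizes dx, dy and the principal point (u0, v0) used here are assumed illustration values, not real calibration data:

```python
import numpy as np

# Hypothetical pixel geometry: dx, dy are physical pixel sizes (mm/pixel)
# and (u0, v0) is the principal point in pixels.
dx, dy = 0.005, 0.005
u0, v0 = 320.0, 240.0

# Homogeneous form of formula (3): [u, v, 1]^T = K_px @ [x, y, 1]^T
K_px = np.array([[1.0 / dx, 0.0,      u0],
                 [0.0,      1.0 / dy, v0],
                 [0.0,      0.0,      1.0]])

x, y = 0.5, -0.25                      # image-plane point in mm
u, v, _ = K_px @ np.array([x, y, 1.0])  # pixel coordinates
```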


2.2 Camera Model

The pinhole model is derived from the principle of pinhole imaging. It is a linear camera model obtained by adding rigid body transformation (rotation and translation of rigid body) to simple central projection (also called perspective projection). It does not consider the distortion of various lenses, but it can simulate the actual camera very well and is the basis of other models and calibration methods.

Let P be a point in space with coordinates (Xc, Yc, Zc) in the camera coordinate system, let q be the image of P on the imaging plane with coordinates (x, y), and let f be the focal length of the camera. The proportional relationship given by perspective projection is then equation (4): x = f·Xc/Zc, y = f·Yc/Zc.
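The perspective-projection relationship of equation (4) in code; the focal length and point coordinates are arbitrary example values:

```python
def project(point_c, f):
    """Perspective projection of the pinhole model, equation (4):
    x = f * Xc / Zc,  y = f * Yc / Zc."""
    Xc, Yc, Zc = point_c
    return (f * Xc / Zc, f * Yc / Zc)

# A point 1 m in front of the lens, focal length 35 mm.
x, y = project((100.0, 50.0, 1000.0), 35.0)
```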

2.3 Obtaining coordinate transformation relationship

As part of the parameter-solving process, the target disk is photographed in advance, and the pixel coordinates of its feature points are extracted from the image so that they can be related to the known world coordinates of those points. The conversion between world coordinates and ideal image pixel coordinates is established here on the basis of the pinhole model.

The geometry of the pattern on the target disk surface is known. The world coordinate system is established on the disk surface, so the world coordinates of the feature points are known and Zw = 0. Let a point P(Xw, Yw, Zw) have coordinates (Xc, Yc, Zc) in the camera coordinate system; after shooting, it is imaged on the CCD image plane at (x, y), with corresponding image pixel coordinates (u, v).

First comes the conversion between the spatial coordinate systems. According to computer-vision theory, rigid-body motion can be decomposed into a rotation plus a translation; the world coordinate system can thus be converted to the camera coordinate system and written in homogeneous coordinates, as in equation (5).

Here s is a scale factor and H is the homography matrix, which establishes the correspondence between image coordinates and world coordinates. The known world coordinates of the target-disk feature points and the extracted image pixel coordinates are then substituted into equation (9) to obtain the conversion relationship used in the subsequent calculations.
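A sketch of the planar-target projection described above: for a target plane with Zw = 0, the world-to-pixel mapping collapses to a homography H = K[r1 r2 t]. The intrinsic matrix K, rotation R and translation t below are invented illustration values, not calibration results from the paper:

```python
import numpy as np

# Invented intrinsics and pose for illustration only.
K = np.array([[800.0, 0.0,   320.0],
              [0.0,   800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                     # camera looking straight at the target
t = np.array([0.0, 0.0, 1000.0])  # target plane 1000 mm in front

# For Zw = 0:  s * [u, v, 1]^T = H @ [Xw, Yw, 1]^T,  H = K [r1 r2 t]
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

Xw, Yw = 50.0, 25.0
uvw = H @ np.array([Xw, Yw, 1.0])
u, v = uvw[:2] / uvw[2]           # divide out the scale factor s
```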

2.4 Obtaining positioning parameters

The motion of the wheel can likewise be treated as a rigid-body motion. The motion of a point on the wheel (since the target disk is fixed to the wheel and moves with it, we study a point on the target disk) decomposes into a rotation about the rotation axis plus a translation.

Assume that a pair of corresponding points before and after the wheel (target disk) moves are P and P', with world coordinates (Xw, Yw, Zw)T and (Xw', Yw', Zw')T respectively. The transformation between them is then given by equation (10).


Here R and T have the same form as in equation (5), but their meaning differs: they now describe the transformation between different spatial positions. In R, θ is the angle through which the point on the wheel rotates about the rotation axis, and (n1, n2, n3) is the spatial vector of the wheel's rotation axis; the R and T in equation (5), by contrast, describe the transformation between the camera coordinate system and the world coordinate system.

Using the coordinate transformation relationship obtained in 2.3, the world coordinates of the target disk before and after the motion are recovered from its image coordinates and substituted into equation (10) to obtain the rotation-axis vector (n1, n2, n3) of the wheel motion and the angles α, β, γ between this vector and the axes Xw, Yw, Zw of the world coordinate system. The alignment parameters can then be obtained from these angles.
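One common way to recover the rotation-axis vector (n1, n2, n3) from the rotation matrix R of equation (10) is via the antisymmetric part of R; this is a standard identity, not necessarily the exact procedure of the original system. The test rotation below is built with Rodrigues' formula:

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis (n1, n2, n3) of a rotation matrix, taken from
    the antisymmetric part of R (valid for angles away from 0 and 180 deg)."""
    a = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    return a / np.linalg.norm(a)

# Build a 30-degree rotation about a known axis with Rodrigues' formula.
n = np.array([0.0, 1.0, 0.0])
theta = np.radians(30.0)
Nx = np.array([[0.0, -n[2], n[1]],
               [n[2], 0.0, -n[0]],
               [-n[1], n[0], 0.0]])
R = np.eye(3) + np.sin(theta) * Nx + (1.0 - np.cos(theta)) * (Nx @ Nx)

axis = rotation_axis(R)  # recovers the axis (0, 1, 0)
```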

For convenience of explanation, as shown in Figure 6, let Zw be the direction of the car's forward motion, Xw point to the left side of the car, and Yw be perpendicular to the plane of the car body. Let N be the wheel rotation-axis vector and N' its translation. Then β − 90° is the wheel camber angle, and arctan|cosγ/cosα| × 180°/π is the toe angle.
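The angle relations above can be sketched directly from the direction cosines of the axis vector; the axis used below is an arbitrary example, and the sign convention for camber is an assumption:

```python
import numpy as np

def camber_and_toe(axis):
    """Camber and toe (degrees) from the wheel rotation-axis vector, using
    the relations in the text: camber = beta - 90 deg,
    toe = arctan|cos(gamma)/cos(alpha)|."""
    n = axis / np.linalg.norm(axis)
    cos_a, cos_b, cos_g = n  # direction cosines w.r.t. Xw, Yw, Zw
    camber = np.degrees(np.arccos(cos_b)) - 90.0
    toe = np.degrees(np.arctan(abs(cos_g / cos_a)))
    return camber, toe

# An axis almost along Xw, slightly tilted: small camber and toe result.
camber, toe = camber_and_toe(np.array([0.999, 0.02, 0.01]))
```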


Using the same method, images are taken as the wheels are steered left or right through an angle; the spatial vector of the kingpin is found, and from it the kingpin inclination angle and caster angle.

In principle, this method works directly with the vectors of the wheel rotation axis and the kingpin. Although the derivation is relatively involved, the calculation result is direct. It places low requirements on the pattern shape of the target disk and on the alignment platform, and readily achieves stable and precise alignment.

3. Experimental results and analysis

Field experiments were conducted to measure the wheel alignment parameters using both an advanced John Bean image-based wheel aligner and the space-vector-based method proposed in this paper. The experimental vehicle was a 2004 Volkswagen Golf sedan. A self-made target board with a chessboard pattern as the regular figure was used to verify the proposed method, with target boards fixed on the front and rear wheels. Some experimental images are shown in Figure 7. Taking the detection of the car's left wheel as an example, the average of multiple measurements was taken as the final result. The experimental results are shown in Table 1. They show that, within the allowable error range, the measurements of the proposed method are essentially consistent with those of the John Bean aligner, which demonstrates the correctness and effectiveness of the proposed model.


The main reason for the error is the precision of the self-made target board and the influence of image noise. The error is also affected by factors such as camera resolution, camera calibration, corner point coordinate extraction accuracy, etc. Therefore, to improve the measurement accuracy, the target board must be made as accurate as possible, the camera resolution must be as high as possible, and the camera calibration accuracy must be improved.

4. Conclusion

The four-wheel alignment method based on computer vision makes full use of visual theory and cleverly uses spatial geometry knowledge to achieve accurate, fast and convenient detection of wheel alignment parameters. This paper discusses and analyzes the principle based on perspective and the principle based on space vector, and gives the mathematical model based on space vector proposed in this paper. The actual test proves its correctness and effectiveness. It provides new ideas and new technologies for the domestic automotive electronics testing industry.
