How to Realize Real-Time Online Calibration of Autonomous Driving Sensors


Autonomous vehicles are usually equipped with multiple sensors as perception input sources, and the first task is to determine the coordinate relationships among them. Sensor calibration is therefore a basic requirement for autonomous driving. In general, the calibration work can be divided into two parts: intrinsic calibration and extrinsic calibration. The intrinsic parameters determine the mapping inside the sensor, such as a camera's focal length, principal point offset, and pixel aspect ratio (plus distortion coefficients), while the extrinsic parameters determine the transformation between the sensor and an external coordinate system, i.e., its pose (rotation and translation, six degrees of freedom).
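To make the intrinsic/extrinsic split concrete, the minimal sketch below projects a 3D point through a pinhole camera model. All numeric values (focal lengths, principal point, rotation, translation, example point) are illustrative placeholders, not parameters of any particular vehicle or sensor.

```python
import numpy as np

# Intrinsics: mapping from camera coordinates to pixel coordinates
fx, fy = 1000.0, 1000.0        # focal lengths in pixels (placeholder values)
cx, cy = 960.0, 540.0          # principal point (placeholder values)
K = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]])

# Extrinsics: rigid transform from an external (e.g. vehicle) frame to the camera frame,
# six degrees of freedom in total (rotation R and translation t)
R = np.eye(3)                  # placeholder rotation
t = np.array([0.0, 1.2, 0.0])  # placeholder translation in metres

def project(X_world: np.ndarray) -> np.ndarray:
    """Project a 3D point expressed in the external frame to pixel coordinates."""
    X_cam = R @ X_world + t    # extrinsic transform
    uvw = K @ X_cam            # intrinsic projection
    return uvw[:2] / uvw[2]    # perspective division

print(project(np.array([2.0, 0.0, 10.0])))  # pixel location of an example 3D point
```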


Sensor calibration is the basis of robust performance for intelligent vehicles. The traditional and most common practice is to calibrate the vehicle once during end-of-line inspection at the factory; after the vehicle is delivered to the customer, its intrinsic and extrinsic parameters are not automatically optimized or updated.


As is well known, the intelligent driving perception field today relies almost entirely on computer-vision AI techniques such as deep learning and other data-driven optimization methods.

Take lane line recognition, the most common perception task, as an example. With the arrival of the autonomous driving era, the application scenarios that require lane line recognition are becoming broader, and the corresponding recognition technology is becoming increasingly important, for the following reasons:

First, broad scenario coverage: lane recognition appears in most autonomous driving application scenarios and is an indispensable basic algorithm for autonomous driving.

Second, safety assurance: accurate lane line recognition lets the machine correctly interpret the road and ultimately make sound decisions, which is an important guarantee of driving safety.

Third, in high-definition (HD) map or lightweight-map construction, lanes are the primary key for associating all HD map elements with the road, so lane line elements are of paramount importance in HD maps, and lane line recognition algorithms are correspondingly crucial.

Accordingly, autonomous driving places extremely high requirements on the accuracy of lane recognition results. In practice, many Tier 1 and Tier 2 suppliers have, to varying degrees, observed unfavorable perception results during real road tests.

For example, after a vehicle has been on the road for some time after leaving the production line, lane line jumps, inaccurate target perception and similar issues appear easily. These algorithm suppliers do have some ability to analyze and respond to the sources of such problems, because most of them trace back to the projection matrix P or the homography matrix H determined by the intrinsic and extrinsic calibration. The optimization methods they typically use include the following:

1) If both lane lines jump intermittently, the cause may be the perception algorithm itself. In this case, consider preprocessing on the control side: take the two lane lines as the raw input and use a Kalman filter to predict and output the driving centerline;

2) If only one of the lane lines jumps intermittently, consider whether some intrinsic or extrinsic parameters of the sensor have drifted from their calibrated values over time. The remedy is to use the lane lines and detected targets to estimate a better camera pitch value, and then add parallelism assumptions at the back end, which handles this kind of problem well (a minimal sketch of the pitch estimation follows this list);

3) The long-term, maintenance-free advantage of a camera system can only be guaranteed by an automatic calibration method that updates the camera parameters as needed. In natural environments, however, disturbances easily pose challenges to calibration. One possibility is to recalibrate the sensor using natural objects of known shape. If the sensor is calibrated online in real time using recognized static targets in the environment, a high-quality pitch value can be obtained with optical flow or other advanced video tracking and detection techniques.
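As a concrete illustration of the pitch estimation mentioned in items 2) and 3), the sketch below intersects two detected lane lines to obtain a vanishing point and converts its vertical offset from the principal point into a pitch angle. It assumes a flat road with parallel lane markings; the coordinates, focal length and sign convention are assumptions of this sketch, not values from the article.

```python
import numpy as np

def line_from_points(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(left_line_pts, right_line_pts):
    """Intersect the left and right lane lines (each given by two image points)."""
    l = line_from_points(*left_line_pts)
    r = line_from_points(*right_line_pts)
    vp = np.cross(l, r)              # intersection of the two homogeneous lines
    return vp[:2] / vp[2]

def pitch_from_vp(vp, fy, cy):
    """Camera pitch (rad) w.r.t. the road plane under a flat-road assumption."""
    return np.arctan2(cy - vp[1], fy)

# Made-up detections of the left/right lane lines in pixel coordinates
vp = vanishing_point([(800, 1000), (900, 700)], [(1200, 1000), (1030, 700)])
print(np.degrees(pitch_from_vp(vp, fy=1000.0, cy=540.0)))  # pitch in degrees
```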

01

Overview of Self-Calibration Methods

Common camera calibration methods require special reference patterns, such as a checkerboard, whose physical dimensions are known in advance. Such patterns are not available while the vehicle is on the road, however, so an automatic calibration system must extract reference information from its environment.

In summary, the methods of automatic calibration can be classified into the following categories:

1) Extract vanishing points or corner points from dashed lane markings on the road for online calibration.

2) Use predefined feature points on passing vehicles to build a correspondence model for estimating traffic-camera parameters.

3) Calibrate surveillance cameras from three vanishing points defined by the traffic flow.

4) Achieve the same goal by treating walking pedestrians as vertical line segments.


In fact, the self-calibration strategy currently considered is to extract feature points from lane markings for calibration, but the lane marking width cannot be assumed to be the same in different areas. In addition, the marking contours may be defective due to wear. Such artifacts and image variations lead to certain calibration errors.

There are two solutions to the above problems. One approach is to design a structured road environment (a grid of known size painted on the road), which the camera detects as the vehicle approaches. Another approach places several markers on the hood of the car to quickly estimate the relative position and orientation of the on-board camera. However, this may distract other drivers; it also assumes the hood is within the camera's field of view, and if the hood is not flat, performance will vary between vehicles. Neither method is therefore suitable for real driving scenarios.

02

The optimized self-calibration scheme proposed in this paper

The focus of this paper is a new optimization scheme for real-time online calibration of sensors: a method that identifies traffic signs in the environment (stop signs, speed limit signs and other common signs) and uses them to recalibrate the camera. The method consists of detection, geometry estimation, calibration and recursive update steps. The real-time calibration results clearly show its convergence and improved performance.

Unlike purpose-made calibration patterns, stop signs are ubiquitous, and their physical dimensions are standardized by the Department of Transportation. Using traffic signs, especially stop signs, has three advantages over other reference information present in the urban environment:

Sufficient feature points (at least the eight corners of the stop sign's inner octagon) can be detected with sub-pixel accuracy.

Compared with other reference objects (e.g., lane markings), the relevant geometric properties of a stop sign remain unchanged, because it is made of metal and robust to external forces.

The vehicle slows down when approaching a stop sign, which reduces image blur and rolling-shutter effects.

Sensor calibration is fundamental for the robust performance of intelligent vehicles. In natural environments, disturbances can easily pose challenges to calibration. One possibility is to use natural objects of known shapes to recalibrate the sensors. Therefore, we propose an approach for automatic calibration using stop signs.

The overall calibration logic flow follows the detection, geometry estimation, calibration and recursive update steps described above.

Within this flow, the detection stage involves the following key modules.

1) Traffic sign detection:

For real-time online calibration that relies on road signs, traffic sign information must be detected in real time while driving. Object detection is by now a well-studied field thanks to convolutional neural networks (CNNs). Single-shot detection architectures (such as YOLO and YOLOv3) can easily run in real time (around 45 Hz), while proposal-based architectures (such as Faster R-CNN and FPN) require more resources and usually take longer to run, but they are typically more accurate, especially for small objects and complex scenes. In addition, instance segmentation methods that label object pixels can also be used to find stop sign bounding boxes.

2) Shape matching:

Shape matching checks the similarity of the two shapes to be compared using a similarity metric. In particular, for online calibration with a stop sign, the shape enclosed by the detected stop sign edges must be an octagon. Here a polygon can be represented by a turning (rotation) function and compared with an Lp distance; affine invariance can also be considered (in addition to rotation, translation and scale invariance).
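A simplified sketch of this idea: represent each contour as a turning (rotation) function over normalised arc length and compare two shapes with an Lp distance. This version skips the search over starting point and rotation offset that a full matcher would perform, so it only illustrates the metric, not the exact procedure used in the system described here.

```python
import numpy as np

def turning_function(polygon, samples=256):
    """Cumulative turning angle sampled over normalised arc length of a closed polygon."""
    pts = np.asarray(polygon, dtype=float)
    edges = np.roll(pts, -1, axis=0) - pts              # edge vectors, including closing edge
    lengths = np.linalg.norm(edges, axis=1)
    headings = np.arctan2(edges[:, 1], edges[:, 0])
    turns = np.diff(headings, prepend=headings[0])
    turns = (turns + np.pi) % (2 * np.pi) - np.pi       # wrap turn angles to [-pi, pi)
    cum_angle = np.cumsum(turns)                        # heading relative to the first edge
    cum_len = np.cumsum(lengths) / lengths.sum()        # normalised cumulative arc length
    s = np.linspace(0.0, 1.0, samples, endpoint=False)
    idx = np.searchsorted(cum_len, s, side="right").clip(max=len(pts) - 1)
    return cum_angle[idx]

def lp_distance(poly_a, poly_b, p=2):
    """Lp distance between the turning functions of two polygons."""
    fa, fb = turning_function(poly_a), turning_function(poly_b)
    return np.linalg.norm(fa - fb, ord=p) / len(fa) ** (1 / p)

# A regular octagon (ideal stop-sign shape) vs. a slightly perturbed detection
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
octagon = np.stack([np.cos(angles), np.sin(angles)], axis=1)
noisy = octagon + np.random.default_rng(0).normal(scale=0.02, size=octagon.shape)
print(lp_distance(octagon, noisy))   # small value -> the detected shape matches an octagon
```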

The real-time online calibration system for smart cars can be divided into seven modules (as shown below).

[Figure: the seven modules of the real-time online calibration system]

The first five modules generate 2D-3D corner correspondence pairs from the urban image sequence, and the last two modules compute the intrinsic parameters from those correspondences. Each module is explained in turn below.
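Before walking through the modules, here is a hedged sketch of the interface between the two halves: how accumulated 2D-3D correspondences of the stop sign's inner-octagon corners could be turned into intrinsic parameters. OpenCV's calibrateCamera is used as a stand-in for the recursive estimator described in this article; the sign circumradius and the API choice are assumptions of the sketch.

```python
import numpy as np
import cv2

def octagon_3d(circumradius_m: float = 0.38) -> np.ndarray:
    """3D model points of the sign's inner octagon in its own plane (z = 0).
    The circumradius here is a placeholder, not an official sign dimension."""
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False) + np.pi / 8
    return np.stack([circumradius_m * np.cos(angles),
                     circumradius_m * np.sin(angles),
                     np.zeros(8)], axis=1).astype(np.float32)

def calibrate(image_corner_sets, image_size):
    """image_corner_sets: list of (8, 2) arrays of detected octagon corners, one per frame.
    image_size: (width, height) of the camera image in pixels."""
    obj_pts = [octagon_3d() for _ in image_corner_sets]
    img_pts = [np.asarray(c, dtype=np.float32) for c in image_corner_sets]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None)
    return rms, K, dist   # reprojection error, intrinsic matrix, distortion coefficients
```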

1. Stop Sign Detection

The sign detection stage processes each image frame with a Mask R-CNN model (ResNet-50 backbone with a Feature Pyramid Network) to generate a 2D bounding box (green box in Figure a above). The relatively low speed of Mask R-CNN does not affect the current non-real-time system, because the overhead is dominated by the edge line fitting module. Moreover, Mask R-CNN achieves a higher stop sign detection rate than Faster R-CNN, which may improve the calibration accuracy of the system by providing more reference candidates.
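A minimal sketch of such a detection stage using torchvision's off-the-shelf Mask R-CNN (ResNet-50 + FPN) pretrained on COCO, which contains a "stop sign" class. The COCO label id and the score threshold below are assumptions of this sketch, not settings from the system described above.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

STOP_SIGN_COCO_ID = 13   # "stop sign" in the COCO label map used by torchvision (assumed)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def detect_stop_signs(image, score_thresh: float = 0.7):
    """Return (boxes, masks) for stop-sign detections in a PIL image or HWC array."""
    out = model([to_tensor(image)])[0]
    keep = (out["labels"] == STOP_SIGN_COCO_ID) & (out["scores"] > score_thresh)
    return out["boxes"][keep], out["masks"][keep]
```

The instance masks returned alongside the boxes can then be passed to the edge line fitting and shape matching modules to recover the octagon corners.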
