Application of multi-sensor information fusion technology in intelligent driving system

Publisher: SereneWanderer · Last updated: 2010-03-17 · Source: 电子技术应用

With the development of sensor technology, information processing, measurement technology and computer technology, intelligent driving systems, ranging from driver assistance to fully unmanned driving, have also developed rapidly. Consumers pay increasing attention to driving safety and comfort, which requires sensors that can identify the vehicle ahead in the same lane and, when an obstacle appears, remind the driver or automatically change the vehicle's state to avoid an accident. Major international automobile companies are committed to research in this area and have developed a series of safe-driving systems, such as collision warning (CW) systems, lane departure warning (LDW) systems and intelligent cruise control (ICC) systems. There is also some research in these areas in China, but a large gap remains compared with work abroad. This article mainly discusses the application of multi-sensor information fusion technology in intelligent driving systems (ITS).

1 Problems in ICC/CW and LDW Systems

1.1 Misidentification Problems in ICC/CW Systems

Single-beam sensors are often used in ICC/CW systems. Such a sensor uses a very narrow beam to detect the vehicle ahead. On a curved road (see Figure 1(a)), the vehicle ahead can easily leave the sensor's measurement range, causing the intelligent cruise system to accelerate incorrectly. If the vehicle ahead slows down, or another car enters the lane at a curve, the collision warning system cannot respond within the safe stopping distance and a collision can easily occur. Similarly, where the road curves away (see Figure 1(b)), the radar can easily mistake vehicles in adjacent lanes or roadside guardrails for obstacles and issue a false alarm. When the road is uneven, or the road ahead of the radar slopes upward, hillocks or mounds may also be mistaken for obstacles. All of these reduce the stability of the system. Some filtering algorithms can handle these problems [6] and have achieved certain results, but they cannot solve them completely.
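
To make the first failure mode concrete, the following sketch (an illustration added here, not from the original article) estimates whether a lead vehicle on a constant-radius curve still falls inside a narrow forward beam. The beam width, range and curve radii are assumed example values.

```python
import math

def lead_vehicle_in_beam(range_m: float, curve_radius_m: float,
                         beam_width_deg: float) -> bool:
    """Check whether a lead vehicle at the given range on a constant-radius
    curve still lies inside a forward-looking narrow radar beam.

    Approximates the vehicle's lateral offset from the sensor boresight by
    the circular-arc sagitta: offset ~= range^2 / (2 * curve_radius).
    """
    lateral_offset = range_m ** 2 / (2.0 * curve_radius_m)   # sagitta approximation
    beam_half_width = range_m * math.tan(math.radians(beam_width_deg) / 2.0)
    return lateral_offset <= beam_half_width

# Assumed example: a 3-degree beam tracking a car 80 m ahead.
for radius in (2000.0, 500.0, 250.0):     # near-straight road to tight curve
    visible = lead_vehicle_in_beam(80.0, radius, 3.0)
    print(f"curve radius {radius:6.0f} m -> lead vehicle in beam: {visible}")
```

On the near-straight road the target stays in the beam; on the tighter curves it drifts out, which is exactly the condition that makes the cruise system accelerate incorrectly.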

1.2 Scene recognition problems in LDW systems

The LDW system likewise has a scene-recognition problem in the shared driving area. It relies on a camera on one side of the vehicle (often only able to measure the vehicle's position relative to the adjacent lane), which makes it difficult to distinguish curved roads from individual driving styles. The LDW system uses a forward-facing camera to detect the geometry of the road ahead, which suffers accuracy problems at long range; all of this degrades the accuracy of the TLC (Time-to-Line-Crossing) estimate. Dead-zone recognition and driving-information correction methods are commonly used as remedies, but they provide no prior knowledge with which to identify faults.
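
TLC itself admits a simple first-order estimate: the lateral distance to the lane marking divided by the lateral drift speed. The sketch below is a minimal illustration under that assumption; the function name, example numbers and warning threshold are not from the article.

```python
def time_to_line_crossing(lateral_offset_m: float,
                          lateral_speed_mps: float) -> float:
    """First-order TLC estimate: time until the vehicle crosses the lane
    boundary, assuming constant lateral speed toward the line.

    Returns float('inf') when the vehicle is not drifting toward the line.
    """
    if lateral_speed_mps <= 0.0:
        return float("inf")
    return lateral_offset_m / lateral_speed_mps

# Assumed example: 0.6 m from the lane marking, drifting at 0.25 m/s.
tlc = time_to_line_crossing(0.6, 0.25)
print(f"TLC = {tlc:.1f} s")   # a warning might trigger below, say, 2 s
```

Because both inputs come from long-range camera measurements, small errors in either one shift the TLC estimate noticeably, which is the accuracy problem described above.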

2 Application of Multi-sensor Information Fusion Technology in ITS System

In response to the problems described above, researchers have introduced multi-sensor information fusion technology and proposed different fusion algorithms. Vision-based sensors can provide a large amount of scene information, while other sensors (such as radar or laser) can measure distance, range and related quantities. Fusing the two kinds of information yields more reliable recognition. Fusion can be implemented with methods such as the CLARK (Combined Likelihood Adding Radar) algorithm proposed by Beauvais et al. in 1999 [3] and the ICDA (Integrative Coupling of Different Algorithms) algorithm proposed by the Institut für Neuroinformatik [4].

2.1 Sensor Selection

The first problem in identifying obstacles is the choice of sensors. The advantages and disadvantages of several sensors are described below (see Table 1). The simplest way to detect obstacles is an ultrasonic sensor, which emits ultrasonic pulses toward the target and determines distance from the round-trip time. This method is widely used in mobile-robot research. Its advantages are low price, ease of use, and accurate measurement within 10 m. Beyond the scenario limitations mentioned above, however, it has the following problems in an ITS system. First, because it is only effective within 10 m, it is unsuitable for ITS. In addition, the ultrasonic sensor's working principle is bounded by the speed of sound: even if it could measure out to 100 m, its update rate would be only about 2 Hz, and the pulse may suffer interference from other signals in transit, so it is impractical in a CW/ICC system.
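
The 2 Hz figure follows directly from time-of-flight arithmetic, as the sketch below shows. The speed-of-sound constant assumes air at roughly 20°C.

```python
SPEED_OF_SOUND_MPS = 343.0   # in air at about 20 degrees C

def echo_distance(round_trip_s: float) -> float:
    """Distance from an ultrasonic echo: the pulse travels out and back."""
    return SPEED_OF_SOUND_MPS * round_trip_s / 2.0

def max_update_rate(max_range_m: float) -> float:
    """Upper bound on measurement rate: a new pulse cannot be sent before
    the previous echo from the farthest target has returned."""
    round_trip_s = 2.0 * max_range_m / SPEED_OF_SOUND_MPS
    return 1.0 / round_trip_s

print(f"{echo_distance(0.0583):.1f} m")       # a 10 m echo takes ~58.3 ms
print(f"{max_update_rate(100.0):.1f} Hz")     # ~1.7 Hz at 100 m, i.e. ~2 Hz
```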

Vision sensors are widely used in CW systems. Their advantages are small size, reasonable price, the ability to measure multiple targets across a certain width of the field of view, and the ability to classify targets by shape and size from the measured images. However, the algorithms are complex and processing is slow.

Radar sensors have been used in the military and aviation sectors for decades. The main advantage is that they can robustly detect obstacles regardless of weather or lighting conditions. In the past decade, as their size and price have decreased, they have begun to be used in the automotive industry. However, there is still a problem of cost-effectiveness.

To overcome these problems, new methods based on information fusion have been proposed, such as fusing ultrasonic and image information, radar and image information, or laser and image information, as described in references [5-6]. These methods achieve more reliable detection than any single sensor.

2.2 Basic principles of information fusion [1]

Information fusion is the process of processing information from multiple sensors or multiple sources in order to draw more accurate and reliable conclusions. Multi-sensor information fusion is a basic function that is prevalent in humans and other biological systems. Humans instinctively have the ability to integrate the information (scenery, sound, smell and touch) detected by various functional organs (eyes, ears, nose, limbs) in the body with prior knowledge in order to make estimates of the surrounding environment and ongoing events. Because human senses have different measurement characteristics, they can measure various physical phenomena in different spatial ranges. This process is complex and adaptive. It transforms various information (images, sounds, smells and physical shapes or descriptions) into valuable interpretations of the environment.

Multi-sensor information fusion is in effect a functional simulation of the human brain's comprehensive processing of complex problems. In a multi-sensor system, the information provided by the various sensors may have different characteristics: time-varying or time-invariant, real-time or non-real-time, fuzzy or definite, accurate or incomplete, mutually supporting or complementary. Like the brain's comprehensive information processing, multi-sensor information fusion makes full use of multiple sensor resources: through reasonable control and use of the sensors and their observations, it combines their complementary and redundant information in space and time according to some optimization criterion, producing a consistent interpretation or description of the observed environment. The goal of information fusion is to derive more effective information from the separate observations of the various sensors through their optimal combination. This is the result of optimal synergy: the ultimate aim is to exploit the joint operation of multiple sensors to improve the effectiveness of the entire system.

2.3 Commonly used information fusion algorithms

Information fusion technology draws on many theories and techniques, such as signal processing, estimation theory, uncertainty theory, pattern recognition, optimization, neural networks and artificial intelligence. The methods developed for different application requirements each form a subset of fusion methods. Table 2 summarizes some commonly used information fusion methods.

2.4 Basic Structure of Information Fusion Algorithm in Intelligent Driving System

Due to the limitations of any single sensor, ITS systems now use a group of sensors to observe the scene from different viewpoints and then fuse their information to complete initial target detection and recognition. The algorithm structure commonly used to identify obstacles in intelligent driving systems is shown in Figure 2.

3 CLARK Algorithm

The CLARK algorithm is a method for accurately measuring obstacle locations and road conditions, using information from both range sensors (radar) and cameras. The CLARK algorithm consists of two main parts: ① robust obstacle detection using multi-sensor fusion technology; ② comprehensive consideration of the above information in the LOIS (Likelihood of Image Shape) road detection algorithm to improve the recognition performance of long-distance roads and obstacles.

3.1 Obstacle detection using radar

At present, a single radar sensor is often used to detect vehicles or obstacles ahead. As analyzed above, although radar performs well on straight roads, its detections are not completely reliable on curves, where dropouts (missed targets) and false alarms occur. To suppress false alarms, the radar output is often passed through a standard Kalman filter, but this does not effectively solve the dropout problem. Scanning radar or multi-beam radar can solve this class of problem more reliably, but is expensive. Here a low-cost vision sensor is chosen to supply the additional information; a vision sensor can often provide information that even scanning or multi-beam radar cannot.

3.2 Fusion of Visual Information in Object Recognition

The CLARK algorithm uses the contrast and color information of the visual image to detect targets, and uses a rectangular-template method to identify them. The templates are rectangles with varying left and right borders and bottom positions; they are matched against the contrast domain of the visual image to select the obstacle template closest to the radar sensor's output.
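
A minimal sketch of the rectangular-template idea follows: candidate rectangles are scored by the mean contrast along their borders, and the best-scoring candidate near the radar output is kept. The scoring function, the tied aspect ratio and the synthetic image are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def template_score(contrast: np.ndarray, bottom: int, left: int,
                   width: int) -> float:
    """Score a rectangular obstacle template against a contrast (gradient-
    magnitude) image: strong edges should lie along the template's border.
    Height is tied to width here purely for illustration."""
    height = max(1, width // 2)
    top, right = bottom - height, left + width
    border = (contrast[top, left:right].sum() +          # top edge
              contrast[bottom - 1, left:right].sum() +   # bottom edge
              contrast[top:bottom, left].sum() +         # left edge
              contrast[top:bottom, right - 1].sum())     # right edge
    return border / (2 * width + 2 * height)             # mean edge contrast

def best_template(contrast, candidates):
    """Pick the (bottom, left, width) candidate whose border best matches
    the image contrast; candidates come from the radar-predicted position."""
    return max(candidates, key=lambda c: template_score(contrast, *c))

# Assumed example: synthetic 100x100 contrast map with a bright box outline.
img = np.zeros((100, 100))
img[55, 30:60] = img[69, 30:60] = 1.0
img[55:70, 30] = img[55:70, 59] = 1.0
cands = [(70, 30, 30), (70, 25, 40), (65, 35, 20)]
print(best_template(img, cands))   # -> (70, 30, 30)
```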

The CLARK algorithm first applies Kalman filtering to the radar signal to remove strong interference from the sensor output, using the following state and observation equations:

$$\begin{bmatrix} R(t+\Delta t) \\ \dot{R}(t+\Delta t) \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R(t) \\ \dot{R}(t) \end{bmatrix} + w(t), \qquad D(t) = R(t) + v(t)$$

where R(t) is the true distance of the obstacle ahead (unknown), Ṙ(t) is its speed (unknown), D(t) is the observed distance, Δt is the interval between two observations, and w(t) and v(t) are Gaussian noise. Given D(t), the Kalman filter estimates R(t) and Ṙ(t); the estimate R̂(t) is used as the distance input, and the difference between R̂(t) and D(t) determines the offset of the rectangular template used. Since there is always a deviation between the position detected by the radar and the center of the radar beam, the position toward one side of the road can be shifted as compensation.
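
A minimal sketch of this constant-velocity Kalman filter, matching the state and observation equations above, is given below. The noise covariances, initial uncertainty and simulated data are assumed values for illustration.

```python
import numpy as np

def kalman_range_filter(measurements, dt=0.1, q=1.0, r=4.0):
    """Constant-velocity Kalman filter over radar range readings.

    State x = [R, R_dot]; observation D = R + v, matching the state and
    observation equations above. q and r (process / measurement noise
    variances) are assumed values.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # we observe range only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])      # process noise covariance
    R = np.array([[r]])                      # measurement noise variance
    x = np.array([measurements[0], 0.0])     # initial state: first reading
    P = np.eye(2) * 10.0                     # initial uncertainty
    estimates = []
    for d in measurements:
        # Predict one step ahead.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new range observation D(t).
        innovation = d - (H @ x)[0]
        S = H @ P @ H.T + R
        K = (P @ H.T) / S[0, 0]              # Kalman gain, shape (2, 1)
        x = x + K[:, 0] * innovation
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())           # [R_hat, R_dot_hat]
    return np.array(estimates)

# Assumed example: a target closing at 5 m/s, observed with noisy radar.
rng = np.random.default_rng(0)
true_r = 100.0 - 5.0 * 0.1 * np.arange(50)
noisy = true_r + rng.normal(0.0, 2.0, size=50)
est = kalman_range_filter(noisy)
print(f"final R_hat = {est[-1, 0]:.1f} m, R_dot_hat = {est[-1, 1]:.1f} m/s")
```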

The above algorithm effectively improves the reliability of radar detection, but it still cannot produce satisfactory results when the image contains strong edge information or the obstacle occupies only a small area of the image plane. Therefore, in addition to contrast, the color domain of the visual image is introduced.

3.3 Combined Likelihood Method

After detecting an obstacle, the CLARK algorithm integrates this information into the LOIS road detection algorithm. LOIS searches for the road using the fact that the deformable road edge should lie along the maximum-contrast parts of the image, with gradient orientation perpendicular to the road edge. A simple way to integrate the two kinds of information is masking: pixels in the detected obstacle region are masked out so that their image gradient values do not affect the LOIS likelihood. This prevents LOIS from mistaking the edges of the obstacle ahead for road edges. However, when the true road edge is very close to the obstacle's edge, the masking technique fails.
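
A minimal sketch of the masking idea: gradient magnitudes inside the detected obstacle rectangle are zeroed before the LOIS likelihood is evaluated. The aspect-ratio assumption mirrors the earlier template sketch and is not from the paper.

```python
import numpy as np

def mask_obstacle_gradients(gradient: np.ndarray, bottom: int,
                            left: int, width: int) -> np.ndarray:
    """Zero the gradient magnitudes inside the detected obstacle rectangle
    so they do not contribute to the LOIS road likelihood."""
    height = max(1, width // 2)              # assumed aspect ratio, as above
    masked = gradient.copy()
    masked[bottom - height:bottom, left:left + width] = 0.0
    return masked

# Assumed example: all gradients inside the template region are suppressed.
grad = np.ones((100, 100))
print(mask_obstacle_gradients(grad, 70, 30, 30)[60, 40])   # -> 0.0
```

When the road edge runs right along the template border, this hard mask also removes the genuine road-edge gradients, which is exactly the failure case that motivates the combined likelihood below.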

To make masking effective, a compromise between obstacle detection and road detection can be adopted: the combined likelihood method. It relaxes the fixed position and size parameters of the detected obstacle into parameters that can vary within a small range. The new likelihood function fuses the LOIS likelihood with the obstacle-detection likelihood, and uses a seven-dimensional parameter search (three dimensions for the obstacle, four for the road) to give the best joint prediction of obstacle and road. Up to normalization, the fused likelihood combines the LOIS road likelihood with a Gaussian obstacle term:

$$L_{combined} = L_{LOIS}(k', b'_{LEFT}, b'_{RIGHT}, v_p) \cdot \exp\!\left(-\frac{[x_r(t)-y_r(t)]^2}{2\sigma_r^2(t)} - \frac{[x_c(t)-y_c(t)]^2}{2\sigma_c^2}\right)$$

where T_b, T_l and T_w are the three deformation parameters (bottom position, left boundary and width) of the rectangular template in the image plane; [x_r(t), x_c(t)] is the center of the deformed template in the image plane; and [y_r(t), y_c(t)] is the radar-detected, Kalman-filtered position of the obstacle projected from the ground plane into the image plane. σ_r²(t) is the real-time variance estimate of the filtered range from the Kalman filter, and σ_c² corresponds to one road width (3.2 m) in the image plane. The template width T_w is compressed through a tan⁻¹ mapping so that it is no less than T_min (half a road width) and no greater than T_max (one road width) in the image plane. The obstacle and road estimates are obtained by maximizing the seven-dimensional posterior pdf P(k', b'_LEFT, b'_RIGHT, v_p, T_b, T_l, T_w | [y_r(t), y_c(t)], Observed Image).
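
The sketch below illustrates how such a combined likelihood rewards template hypotheses that agree with the Kalman-filtered radar fix; the exact functional form and all numeric values are assumptions for illustration, not the paper's formula.

```python
import math

def obstacle_likelihood(xr, xc, yr, yc, sigma_r2, sigma_c2):
    """Gaussian agreement between the template centre (xr, xc) and the
    Kalman-filtered radar position (yr, yc) in the image plane."""
    return math.exp(-((xr - yr) ** 2) / (2 * sigma_r2)
                    - ((xc - yc) ** 2) / (2 * sigma_c2))

def combined_likelihood(lois_likelihood, xr, xc, yr, yc,
                        sigma_r2, sigma_c2):
    """Combined likelihood: the LOIS road likelihood is weighted by how
    well the obstacle template agrees with the radar detection, so the
    7-D search (4 road + 3 template parameters) maximizes both at once."""
    return lois_likelihood * obstacle_likelihood(xr, xc, yr, yc,
                                                 sigma_r2, sigma_c2)

# Assumed example: two template hypotheses sharing the same road fit.
good = combined_likelihood(0.8, xr=120, xc=64, yr=118, yc=66,
                           sigma_r2=25.0, sigma_c2=16.0)
bad = combined_likelihood(0.8, xr=140, xc=40, yr=118, yc=66,
                          sigma_r2=25.0, sigma_c2=16.0)
print(f"near radar fix: {good:.3f}, far from radar fix: {bad:.2e}")
```

A template far from the radar fix is penalized exponentially, so image evidence alone can no longer pull the obstacle estimate onto a strong but irrelevant edge.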

3.4 Limitations of the CLARK Algorithm

The CLARK algorithm assumes that the obstacle is rectangular and at least half the standard road width, so the assumption holds when the obstacle is a passenger car, truck, tractor or bus, but not when it is a motorcycle, bicycle or pedestrian. The rectangular-shape assumption also requires a narrow-beam radar; it does not hold for wide-beam, scanning or multi-beam radars. In addition, the algorithm assumes that the detected obstacle always lies at the center of the radar beam.

The use of multi-sensor information fusion technology in intelligent driving systems (ITS) has greatly improved their stability and safety, and the various fusion algorithms improve system performance from different aspects. However, cost reduction remains an open problem, and it is critical for the widespread adoption of ITS. In addition, reducing the computational load and enhancing the reliability of multi-target recognition still require further research.
