Keywords: Bayesian estimation, information fusion, obstacle detection, intelligent driving
With the development of sensor, information-processing, measurement, and computer technology, intelligent driving systems, ranging from driver assistance to fully unmanned driving, have also developed rapidly. Consumers pay increasing attention to driving safety and comfort, which requires sensors that can identify the vehicle ahead in the same lane and, when an obstacle appears, warn the driver or automatically adjust the vehicle's state to avoid an accident. Major international automobile companies are committed to research in this area and have developed a series of safe-driving systems, such as collision warning (CW), lane departure warning (LDW), and intelligent cruise control (ICC) systems. Similar work exists in China, but a considerable gap remains compared with research abroad. This article discusses the application of multi-sensor information fusion technology in intelligent driving systems (ITS).
1 Problems in ICC/CW and LDW Systems
1.1 Misidentification Problems in ICC/CW Systems
ICC/CW systems often use single-beam sensors, which detect the vehicle ahead with a very narrow beam. On curved roads (see Figure 1(a)), the vehicle ahead can easily leave the sensor's measurement range, causing the intelligent cruise system to accelerate incorrectly. If the vehicle ahead slows down, or another car enters the lane at a curve, the collision warning system cannot respond within the safe stopping distance, and a collision may easily result. Similarly, on a long curve (see Figure 1(b)), the radar may mistake vehicles in adjacent lanes or roadside guardrails for obstacles and issue a false alarm. On uneven roads, where the road ahead of the radar sensor slopes upward, hillocks or mounds may likewise be mistaken for obstacles. All of these reduce the stability of the system. Some filtering algorithms can handle these problems [6] and have achieved certain results, but they cannot solve them completely.
1.2 Scene recognition problems in LDW systems
The LDW system also faces a scene-recognition problem in shared driving areas. It relies on a camera on one side of the vehicle (often able to measure only the vehicle's position relative to the adjacent lane), which makes it difficult to distinguish curved roads from individual driving styles. The LDW system uses a forward-facing camera to detect the geometry of the road ahead, which suffers accuracy problems at long range; all of this affects the accuracy of the TLC (Time-to-Line-Crossing) estimate. Dead-zone recognition and driving-information correction methods are commonly used as remedies, but they provide no prior knowledge with which to identify faults.
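To make the TLC quantity concrete, here is a minimal sketch. The function name and the constant-lateral-velocity assumption are illustrative choices, not taken from any specific LDW implementation:

```python
def time_to_line_crossing(lateral_offset_m, lateral_speed_mps):
    """Estimate Time-to-Line-Crossing (TLC) under a constant
    lateral-velocity assumption.

    lateral_offset_m : distance from the vehicle's side to the lane
                       boundary (m), as measured by the camera.
    lateral_speed_mps: lateral drift speed toward that boundary (m/s).
                       Returns infinity when the vehicle is not
                       drifting toward the line.
    """
    if lateral_speed_mps <= 0:
        return float('inf')
    return lateral_offset_m / lateral_speed_mps

# A vehicle 0.5 m from the lane line, drifting at 0.25 m/s,
# crosses the line in 2 s.
print(time_to_line_crossing(0.5, 0.25))  # → 2.0
```

Because both inputs come from camera measurements, long-range inaccuracy in the lateral estimates propagates directly into the TLC value, which is the problem described above.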
2 Application of Multi-sensor Information Fusion Technology in ITS System
In response to some of the problems existing in the above systems, researchers have introduced multi-sensor information fusion technology and proposed different fusion algorithms. Vision-based sensors can provide a large amount of scene information, while other sensors (such as radar or laser) can measure distance and range. Fusing the two kinds of information yields more reliable recognition. Fusion can be implemented with methods such as the CLARK algorithm (Combined Likelihood Adding Radar) proposed by Beauvais et al. in 1999 [3] and the ICDA (Integrative Coupling of Different Algorithms) algorithm proposed by the Institut für Neuroinformatik [4].
2.1 Sensor Selection
The first problem in identifying obstacles is the choice of sensors. The advantages and disadvantages of several sensors are described below (see Table 1). The simplest way to detect obstacles is an ultrasonic sensor, which emits an ultrasonic pulse toward the target and computes the distance from the round-trip time. This method is widely used in mobile-robot research. Its advantages are low price, ease of use, and accurate measurement within 10 m. However, besides the scenario limitations mentioned above, it has the following problems in an ITS system. First, because it is only effective within 10 m, it is unsuitable for ITS. Moreover, the ultrasonic sensor's working principle rests on the speed of sound: even if it could measure out to 100 m, its update rate would be only about 2 Hz, and the pulse may suffer interference from other signals in transit, so it is impractical in CW/ICC systems.
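The speed-of-sound limit quoted above can be checked with a short sketch (function names are illustrative): the sensor cannot fire a new pulse before the previous echo returns, so a 100 m range caps the update rate near 1.7 Hz, consistent with the roughly 2 Hz figure in the text.

```python
SPEED_OF_SOUND_MPS = 343.0  # speed of sound in air at about 20 °C

def ultrasonic_range(round_trip_time_s):
    """Distance from an ultrasonic pulse's round-trip time:
    the pulse travels to the target and back, so halve the path."""
    return SPEED_OF_SOUND_MPS * round_trip_time_s / 2.0

def max_update_rate_hz(max_range_m):
    """A new pulse cannot be emitted before the previous echo
    returns, so the maximum range bounds the update rate."""
    return SPEED_OF_SOUND_MPS / (2.0 * max_range_m)

print(max_update_rate_hz(10.0))   # → 17.15 Hz at 10 m
print(max_update_rate_hz(100.0))  # → 1.715 Hz at 100 m
```

The two printed values show why the sensor is acceptable for short-range parking aids but too slow for highway-speed CW/ICC use.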
Vision sensors are widely used in CW systems. Their advantages are small size, reasonable price, ability to measure multiple targets within a certain width and visual field, and the ability to use measured images to classify targets based on shape and size. However, the algorithm is complex and the processing speed is slow.
Radar sensors have been used in the military and aviation sectors for decades. The main advantage is that they can robustly detect obstacles regardless of weather or lighting conditions. In the past decade, as their size and price have decreased, they have begun to be used in the automotive industry. However, there is still a problem of cost-effectiveness.
In order to overcome these problems, some new methods have been proposed using information fusion technology, such as fusing the information of ultrasonic sensors and image sensors, fusing radar and image information, or fusing laser and image information, as described in references [5-6]. These methods can achieve more reliable detection than a single sensor.
2.2 Basic principles of information fusion[1]
Information fusion is the process of processing information from multiple sensors or multiple sources in order to draw more accurate and reliable conclusions. Multi-sensor information fusion is a basic function that is prevalent in humans and other biological systems. Humans instinctively have the ability to integrate the information (scenery, sound, smell and touch) detected by various functional organs (eyes, ears, nose, limbs) in the body with prior knowledge in order to make estimates of the surrounding environment and ongoing events. Because human senses have different measurement characteristics, they can measure various physical phenomena in different spatial ranges. This process is complex and adaptive. It transforms various information (images, sounds, smells and physical shapes or descriptions) into valuable interpretations of the environment.
Multi-sensor information fusion is, in effect, a functional simulation of the human brain's comprehensive processing of complex problems. In a multi-sensor system, the information provided by different sensors may have different characteristics: time-varying or time-invariant, real-time or non-real-time, fuzzy or definite, accurate or incomplete, mutually supporting or complementary. Like the brain's comprehensive information processing, multi-sensor fusion makes full use of the available sensor resources: through reasonable control and use of the sensors and their observations, it combines their complementary and redundant information in space and time according to an optimization criterion, producing a consistent interpretation or description of the observed environment. The goal of information fusion is to derive more effective information through the optimal combination of the observations of the individual sensors. This is an optimal synergy whose ultimate aim is to exploit the joint or combined operation of multiple sensors to improve the effectiveness of the entire system.
2.3 Commonly used information fusion algorithms
Information fusion technology involves many theories and technologies, such as signal processing, estimation theory, uncertainty theory, pattern recognition, optimization technology, neural networks and artificial intelligence. Various methods formed by different application requirements are a subset of fusion methods. Table 2 summarizes some commonly used information fusion methods.
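As a concrete instance of the Bayesian estimation approach mentioned in the keywords and summarized in Table 2, the following sketch fuses two independent Gaussian measurements of the same distance by inverse-variance weighting (all numbers are illustrative):

```python
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Bayesian fusion of two independent Gaussian measurements:
    the inverse-variance-weighted mean, whose variance is smaller
    than either input variance."""
    w_a = var_b / (var_a + var_b)   # weight grows as var_a shrinks
    w_b = var_a / (var_a + var_b)
    fused_mean = w_a * mean_a + w_b * mean_b
    fused_var = var_a * var_b / (var_a + var_b)
    return fused_mean, fused_var

# Radar reads 50 m with variance 1 m²; vision reads 54 m with
# variance 4 m². The fused estimate leans toward the more
# certain radar reading.
m, v = fuse_gaussian(50.0, 1.0, 54.0, 4.0)
print(m, v)  # → 50.8 0.8
```

The reduced fused variance (0.8 < 1.0) is the formal expression of the claim above that fusing complementary sensors yields more reliable information than either sensor alone.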
2.4 Basic Structure of Information Fusion Algorithm in Intelligent Driving System
Due to the limitations of a single sensor, ITS systems now use a group of sensors to detect information from different viewpoints, and then fuse the information to complete initial target detection and recognition. The commonly used algorithm structure for identifying obstacles in intelligent driving systems is shown in Figure 2.
3 CLARK Algorithm
The CLARK algorithm is a method for accurately measuring obstacle locations and road conditions, using information from both range sensors (radar) and cameras. The CLARK algorithm consists of two main parts: ① robust obstacle detection using multi-sensor fusion technology; ② comprehensive consideration of the above information in the LOIS (Likelihood of Image Shape) road detection algorithm to improve the recognition performance of long-distance roads and obstacles.
3.1 Obstacle detection using radar
At present, a radar sensor is often used to detect vehicles or obstacles ahead. As analyzed above, although the radar performs well on straight roads, when the road is curved, the detection signal will not be completely reliable, and sometimes there will be blind spots or false alarms. In order to prevent false alarms, the radar output is often subjected to standard Kalman filtering, but this does not effectively solve the problem of blind spots. In order to solve this kind of problem more reliably, scanning radar or multi-beam radar can be used, but it is expensive. Here, low-cost visual sensors are selected as additional information. Visual sensors can often provide information that scanning radar and multi-beam radar cannot provide.
3.2 Fusion of Visual Information in Object Recognition
The CLARK algorithm uses the contrast and color information of the visual image to detect targets and uses the rectangular template method to identify targets. This template consists of rectangles with different left and right borders and bottom sizes, which are then matched with the visual image contrast domain to select the obstacle template that is closest to the radar sensor output.
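A toy sketch of the rectangular-template idea follows. The data layout and the simple edge-sum score are my assumptions for illustration, not CLARK's actual matching criterion: each candidate rectangle is scored by the image contrast lying along its left, right, and bottom edges, and the best-scoring candidate is kept.

```python
def template_edge_score(contrast, left, right, bottom, top):
    """Score a rectangular obstacle template against a contrast map:
    sum the contrast values on the template's left, right, and bottom
    edges (higher = better match). `contrast` is a 2-D list indexed
    [row][col] with rows increasing downward; bounds are assumed
    valid. Illustrative only."""
    score = 0.0
    for row in range(top, bottom + 1):
        score += contrast[row][left] + contrast[row][right]
    for col in range(left, right + 1):
        score += contrast[bottom][col]
    return score

def best_template(contrast, candidates):
    """Pick the candidate rectangle (left, right, bottom, top)
    with the highest edge score."""
    return max(candidates,
               key=lambda c: template_edge_score(contrast, *c))
```

In CLARK the candidate set is not free: it is centered on the radar output, so the template search only refines the radar detection rather than scanning the whole image.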
The CLARK algorithm first applies Kalman filtering to the radar signal to remove strong interference from the sensor output, using the following state and observation equations:

  [ R(t+Δt) ]   [ 1  Δt ] [ R(t) ]
  [ Ṙ(t+Δt) ] = [ 0   1 ] [ Ṙ(t) ] + w(t),    D(t) = R(t) + v(t)

where R(t) is the true distance of the obstacle ahead (unknown), Ṙ(t) is its speed (unknown), D(t) is the distance observation, Δt is the interval between two observations, and w(t) and v(t) are Gaussian noise. Given D(t), the Kalman filter estimates R(t) and Ṙ(t); the estimate R̂(t) serves as the distance input, and the difference between R̂(t) and D(t) determines the offset of the rectangular template used. Since there is always a deviation between the position detected by the radar and the center of the radar beam, the position on one side of the road can be shifted as compensation.
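The filtering step described above can be sketched as a small constant-velocity Kalman filter. The noise levels `q` and `r` and the initialization are illustrative assumptions, not values from the CLARK paper:

```python
def kalman_range_filter(measurements, dt, q=0.5, r=4.0):
    """Filter noisy radar range readings D(t) with the
    constant-velocity model: state x = [R, R_dot], D = R + v.

    q : process-noise variance added to the velocity each step
    r : observation-noise variance of the radar
    Returns the sequence of estimates (R_hat, Rdot_hat).
    """
    R, Rdot = measurements[0], 0.0      # initial state
    P = [[r, 0.0], [0.0, 10.0]]         # initial covariance
    estimates = []
    for D in measurements[1:]:
        # predict with the constant-velocity model: P = F P Fᵀ + Q
        R, Rdot = R + dt * Rdot, Rdot
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1]
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + q
        # update with the new observation D(t)
        S = P00 + r                     # innovation variance
        K0, K1 = P00 / S, P10 / S       # Kalman gain
        innov = D - R
        R, Rdot = R + K0 * innov, Rdot + K1 * innov
        P = [[(1 - K0) * P00, (1 - K0) * P01],
             [P10 - K1 * P00, P11 - K1 * P01]]
        estimates.append((R, Rdot))
    return estimates
```

Fed the range readings of a steadily closing target, the estimates converge to the true distance and closing speed, and the smoothed R̂(t) is what the template-offset computation above consumes.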
The above algorithm can effectively improve the reliability of radar detection, but it still cannot give satisfactory results when the image contains strong edge information or when the obstacle occupies only a small area of the image plane. Therefore, in addition to contrast, the color domain of the visual image is introduced.
3.3 Combined Likelihood Method
After detecting an obstacle, the CLARK algorithm integrates this information into the road detection algorithm (LOIS). LOIS searches for the road using the fact that the edges of the deformable road template should lie on the parts of the image with maximum contrast, with gradient orientation perpendicular to the road edge. If the two kinds of information are combined naively, the pixels in the detected obstacle region are masked out so that their image gradients do not contribute to the LOIS likelihood. This prevents LOIS from mistaking the edges of the obstacle ahead of the car for road edges. However, when the true road edge is very close to the obstacle's edge, this masking technique fails.
To make masking effective, a compromise between obstacle detection and road detection can be adopted: the combined likelihood method. It turns the fixed position and size parameters of the detected obstacle into parameters that may vary within a small range. The new likelihood function fuses the LOIS likelihood with the obstacle-detection likelihood, searching a seven-dimensional parameter space (three dimensions for the obstacle, four for the road) to give the best joint prediction of obstacle and road. Its formula is as follows:
Here T_b, T_l, and T_w are the three deformation parameters of the rectangular template in the image plane (bottom position, left boundary, and width), and [x_r(t), x_c(t)] is the center of the deformed template in the image plane. [y_r(t), y_c(t)] is the radar-detected, Kalman-filtered obstacle position projected from the ground plane into the image plane; σ_r²(t) is the corresponding real-time variance estimate, and σ_c² is the value of one road width (3.2 m) in the image plane. The template width, under the perspective compression, is constrained to be no less than T_min (half a road width) and no greater than T_max (one road width) in the image plane. Obstacle and road targets are obtained by maximizing the seven-dimensional posterior pdf P(k', b'_LEFT, b'_RIGHT, v_p, T_b, T_l, T_w | [y_r(t), y_c(t)], observed image).
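A minimal sketch of the coupling idea behind the combined likelihood (the names and the simple additive Gaussian form are my assumptions; the actual CLARK posterior couples all seven parameters): a road log-likelihood term plus a Gaussian penalty tying the template position to the radar detection, so the optimizer trades road evidence off against radar agreement instead of hard-masking the obstacle region.

```python
def combined_log_likelihood(road_log_lik, x_template, x_radar, var_radar):
    """Combine a road-detection log-likelihood with a Gaussian term
    penalizing disagreement between the obstacle template position
    and the (Kalman-filtered) radar detection. Illustrative only."""
    radar_term = -((x_template - x_radar) ** 2) / (2.0 * var_radar)
    return road_log_lik + radar_term

# Among candidate template positions, the one agreeing with the
# radar wins unless the road evidence strongly favors another.
candidates = [4.0, 5.0, 8.0]
best = max(candidates,
           key=lambda x: combined_log_likelihood(0.0, x, 5.0, 1.0))
print(best)  # → 5.0
```

Letting the template parameters vary within a small, radar-penalized range is what rescues the masking idea when road and obstacle edges nearly coincide.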
3.4 Limitations of the CLARK Algorithm
The CLARK algorithm assumes that the obstacle is rectangular and that its minimum size is half the standard road width, so the assumption holds when the obstacle is a passenger car, truck, tractor, or bus, but not when it is a motorcycle, bicycle, or pedestrian. The rectangular-shape assumption also requires a narrow-beam radar (it is invalid for wide-beam, scanning, or multi-beam radars) and assumes that the detected obstacle always lies at the center of the radar beam.
The use of multi-sensor information fusion technology in intelligent driving systems (ITS) has greatly improved system stability and safety, and the various fusion algorithms improve performance from different aspects. However, cost reduction remains an open problem and is crucial for the widespread adoption of ITS. In addition, reducing the computational load and improving the reliability of multi-target recognition require further research.