Sensor fusion: How to make autonomous driving "see" more clearly?

Publisher: 温暖的微风 | Last updated: 2024-07-30 | Source: eepw | Keywords: Sensors

The advent of autonomous driving is not just a leap forward in automotive technology, but also a revolution in the way people perceive and interact with automotive mobility. At the heart of this transformation is sensor fusion.

In autonomous vehicles, sensor fusion is a technology that is critical to vehicle safety and efficiency. It combines data from multiple sensors and produces more accurate, reliable and comprehensive information about the vehicle's environment. For example, sensors such as cameras, radars and lidar work together and compensate for each other to provide a 360° panoramic view around the vehicle. At the same time, the application and popularity of ADAS and autonomous driving technologies have further stimulated unprecedented demand for sensors.

According to data from TE Connectivity, vehicles with the latest ADAS/AV technology, even typical non-electric vehicles, require 60 to 100 on-board sensors, of which 15 to 30 are dedicated to engine management. Commercial trucks carry as many as 400 sensors, with about 70 used for engine management. The next few generations of electric vehicles, especially those with autonomous or semi-autonomous driving functions, are expected to carry two to three times as many sensors as other models.

The emergence of autonomous driving technology is rapidly changing the transportation landscape, giving vehicles unprecedented safety and reliability. Sensor fusion is the game-changer at the heart of autonomous vehicles. By seamlessly integrating data from a variety of advanced sensors, including cameras, radar, LiDAR and ultrasonic sensors, sensor fusion perceives the environment from multiple vantage points, providing more accurate and reliable information about the vehicle and its surroundings than any single source could.

Sensors in autonomous driving

Early automotive sensor applications centered on basic advanced driver assistance systems (ADAS) such as rearview cameras. As the level of driving automation rises, vehicles become far more intelligent, and both the types and the number of sensors they require keep growing. The main sensor types used in autonomous driving technology today are the following:

Camera

This is the sensor closest to human vision, which can be used to detect visual information around the vehicle, such as traffic signs, lane markings, pedestrians, cyclists, and other vehicles. Front cameras allow the car to "see" where it is going, and reversing cameras can help with parking and reversing. Some new models are also equipped with 360° cameras, which are miniature cameras placed around the body of the car to obtain a bird's-eye view of the surrounding environment.

Many camera varieties on the market can meet automotive requirements. Take the ON Semiconductor AR0820AT as an example: a 1/2-inch CMOS digital image sensor with a 3848 (H) x 2168 (V) active pixel array. This automotive sensor captures images in linear or high-dynamic-range mode with rolling-shutter readout, and is optimized for high-dynamic-range scene performance in low light and harsh conditions. It uses 2.1 µm DR-Pix BSI pixels and offers on-chip 140 dB HDR capture, which is very helpful for gathering image information around the vehicle.
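To make the idea of high-dynamic-range capture more concrete, below is a minimal Python sketch of multi-exposure HDR merging, the general principle behind extending a sensor's usable dynamic range. The weighting scheme, bit depth and exposure times are illustrative assumptions for this sketch, not ON Semiconductor's actual on-chip pipeline.

```python
import numpy as np

def merge_exposures(frames, exposure_times, saturation=4095):
    """Merge several 12-bit raw exposures of the same scene into one
    high-dynamic-range radiance estimate per pixel (illustrative only).

    frames         : list of 2-D numpy arrays of raw sensor counts
    exposure_times : list of exposure times in seconds, one per frame
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(np.float64)
        # Ignore saturated pixels; weight well-exposed pixels more heavily.
        weight = np.where(f < saturation, f / saturation, 0.0)
        num += weight * (f / t)      # counts per second ~ scene radiance
        den += weight
    return num / np.maximum(den, 1e-9)

# Example: a long and a short exposure of the same (synthetic) 2x2 scene.
long_exp  = np.array([[4095, 800], [50, 4095]])   # bright pixels saturate
short_exp = np.array([[1024, 50], [3, 2000]])     # dark pixels are noisy
hdr = merge_exposures([long_exp, short_exp], [0.016, 0.001])
print(hdr)
```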

Radar

Obtaining high-resolution images, and even 4D imaging data, of the vehicle's surroundings in a timely manner is a major challenge for intelligent driving and a prerequisite for safe operation. Here millimeter-wave radar, which plays a key role in ADAS functions, comes to the fore: it can "see" objects in conditions where cameras struggle, offers good resolution, performance and directivity, and is not easily affected by environmental interference or weather. It should be noted, however, that millimeter-wave radar has difficulty identifying non-metallic objects.

TI's TIDA-020047, a dual-device millimeter-wave cascade radar reference design for automotive 4D imaging radar, addresses the "seeing clearly" problem in ADAS by combining two 76 GHz to 81 GHz radar transceivers, a radar processor, two CAN-FD PHYs, an Ethernet PHY and a low-noise power supply.

The AWR2243 device in the reference design is an integrated single-chip FMCW transceiver operating in the 76 GHz to 81 GHz band. It achieves very high integration in an extremely small package, implementing a monolithic 3TX, 4RX system with built-in PLL and ADC converters. Simple programming-model changes can support a variety of sensor deployments, including short-range, medium-range and long-range, forming a multi-mode sensor solution. The AM273x, which takes on the role of radar processor, is a highly integrated, high-performance microcontroller based on Arm Cortex-R5F and C66x floating-point DSP cores; its integrated hardware security module (HSM) supports the vehicle's security requirements.
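To illustrate how an FMCW transceiver such as the AWR2243 turns a frequency-swept chirp into a distance, here is a minimal Python sketch of range estimation from the beat frequency of a single chirp. The bandwidth, chirp duration and sampling rate are illustrative assumptions for the sketch, not the device's actual configuration.

```python
import numpy as np

# Illustrative FMCW parameters (assumptions, not the AWR2243's real settings).
C = 3e8              # speed of light, m/s
B = 300e6            # chirp bandwidth, Hz (a sweep within the 76-81 GHz band)
T_CHIRP = 40e-6      # chirp duration, s
SLOPE = B / T_CHIRP  # chirp slope, Hz/s
FS = 20e6            # ADC sampling rate, Hz
N = int(FS * T_CHIRP)

def beat_signal(target_range_m):
    """IF (beat) signal produced by mixing the TX and RX chirps for one target."""
    f_beat = 2 * SLOPE * target_range_m / C
    t = np.arange(N) / FS
    return np.cos(2 * np.pi * f_beat * t)

def estimate_range(if_samples):
    """Estimate target range from the dominant beat frequency via an FFT."""
    spectrum = np.abs(np.fft.rfft(if_samples))
    peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    f_beat = peak_bin * FS / N                    # bin index -> Hz
    return f_beat * C / (2 * SLOPE)               # Hz -> metres

print(estimate_range(beat_signal(75.0)))  # ~75 m, limited by the FFT bin width
```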


Figure 1: TIDA-020047, a dual-device millimeter-wave cascade reference design for automotive 4D imaging radar (Source: TI)


LiDAR

Light Detection and Ranging (LiDAR) uses light pulses to measure the distance between the vehicle and other objects. With this information, the vehicle can build a detailed 3D map of its environment. LiDAR has many advantages as an autonomous-vehicle sensor: first, it offers excellent distance, angle and speed resolution together with strong immunity to interference; second, it collects a large amount of data useful for autonomous driving, including the distance, angle, speed and reflection intensity of objects, from which multi-dimensional images of those objects can be generated. For now, its high price has limited the large-scale adoption of LiDAR in the automotive industry.
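As a minimal illustration of pulsed-LiDAR ranging and of how individual returns become a 3D map, the Python sketch below converts a measured round-trip time into a range and then into an x, y, z point. The timing and angle values are illustrative and not tied to any particular product.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_s):
    """Range from one pulse: half the round-trip time of flight times c."""
    return C * round_trip_s / 2.0

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one (range, azimuth, elevation) return into an x, y, z point,
    the building block of the 3D map described above."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

r = pulse_range(667e-9)            # ~100 m for a 667 ns round trip
print(to_cartesian(r, 30.0, 2.0))  # one point of the LiDAR point cloud
```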

The OSRAM SPL SxL90A family of pulsed lasers for LiDAR is a cost-effective product line that meets the AEC-Q102 standard, allowing autonomous vehicles to "see" farther and drive more safely and efficiently. The product comes in single-channel and four-channel series, each channel delivering 125 W at a 40 A drive current, with efficiency of up to 33% and very low thermal resistance, so the devices do not heat up easily even at high drive currents. The single-channel device, the SPL S1L90A_3 A01, is very compact at just 2.0 mm x 2.3 mm x 0.65 mm. The four-channel device, the SPL S4L90A_3 A01, has four emission zones and delivers an excellent 480 W of optical power in a slightly larger package, allowing a greater detection range.


Figure 2: Four-channel SPL S4L90A_3 A01 (Source: AMS OSRAM)


3D ToF LiDAR

Time of Flight (ToF) is a type of LiDAR suited to short-range automotive use cases that can capture fine detail without mechanical scanning. It is an increasingly popular LiDAR variant and is already widely used in smartphones. In the automotive environment, high-resolution ToF cameras use 3D sensing to observe the area around the car and the ground, detecting curbs, walls or other obstacles regardless of lighting conditions, supporting gesture recognition, and building a 360° view outside the car to assist with self-parking.

The IRS2877A is a product in Infineon's REAL3 ToF sensor family aimed mainly at automotive applications. It comes in a 9 x 9 mm² plastic BGA package and uses a 4 mm micro-photosensitive area to achieve a VGA system resolution of 640 x 480 pixels. A single ToF camera is enough to build a driver status monitoring system with 3D facial recognition. Based on the 3D body model generated by the IRS2877A, occupant body shape and weight can be estimated accurately, and highly accurate occupant and seat position data can be obtained, providing key inputs for smart airbag deployment and restraint systems. Beyond safety applications, 3D ToF sensors can also implement functions such as in-vehicle gesture control.
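To show how a ToF camera turns a per-pixel measurement into depth, here is a minimal Python sketch of the phase-based (indirect) ToF principle. The 80 MHz modulation frequency is an illustrative assumption, not the IRS2877A's actual setting.

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 80e6        # modulation frequency: illustrative assumption only

def tof_depth(phase_rad):
    """Per-pixel depth from the measured phase shift of the modulated light:
    d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * math.pi * F_MOD)

def unambiguous_range():
    """Beyond this distance the phase wraps and the depth becomes ambiguous."""
    return C / (2 * F_MOD)

print(tof_depth(math.pi / 2))   # ~0.47 m for a 90-degree phase shift
print(unambiguous_range())      # ~1.87 m at 80 MHz modulation
```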



Figure 3: IRS2877A 3D ToF LiDAR application block diagram (Source: Infineon)


The Power of Sensor Fusion

Sensor fusion is the process of combining data from multiple sensors to create information that is more accurate and reliable than what any single source could provide alone. No single sensor working by itself can usually supply all the information an autonomous driving system needs to make a decision or act. The first step for an autonomous vehicle in perceiving the world is to capture a wide range of data about its surroundings through its array of sensors, including cameras, LiDAR and radar, so that accurate driving decisions can ultimately be made. Figure 4 illustrates how an autonomous vehicle uses sensor fusion to perceive its environment.
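As a minimal, textbook-style illustration of why fused data can be more accurate than any single source, the Python sketch below combines two independent range estimates, say one from radar and one from LiDAR, using inverse-variance weighting. The numbers are made up, and this is not any particular vendor's fusion pipeline.

```python
def fuse(m1, var1, m2, var2):
    """Inverse-variance (minimum-variance) fusion of two independent
    measurements of the same quantity, e.g. the range to one object
    reported by radar and by LiDAR."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)          # always smaller than var1 and var2
    return fused, fused_var

# Radar: 50.8 m with 0.25 m^2 variance; LiDAR: 50.1 m with 0.04 m^2 variance.
est, var = fuse(50.8, 0.25, 50.1, 0.04)
print(est, var)   # ~50.2 m with ~0.034 m^2 variance: better than either alone
```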


Figure 4: A sensor fusion solution combining cameras, LiDAR and radar perceives the vehicle's surroundings in 360 degrees (Source: TE)


Each of these sensors has its own strengths. LiDAR provides precise distance measurement; the required range is a key selection factor that determines whether a short-range, medium-range or long-range LiDAR architecture is best, not just for autonomous driving functions outside the car but also for many in-cabin functions. Millimeter-wave radar is better at detecting the speed and position of objects in all weather conditions. Cameras capture rich visual detail. Fusing these inputs creates a comprehensive, high-resolution representation of the environment both inside and outside the car, giving autonomous vehicles an unmatched level of situational awareness.


Figure 5: Sensor fusion solutions required in autonomous vehicles (Source: Aptiv)
