In-depth analysis: the key sensors in driverless vehicles
With the advent of advanced driver assistance systems (ADAS) and the push toward autonomous driving, cars need full awareness of their surroundings. A variety of sensors lets a car perceive complex environments under different conditions. This sensor data is then transmitted to high-performance processors such as the TI TDA2x and ultimately used for functions such as automatic emergency braking (AEB), lane departure warning (LDW), and blind-spot monitoring. Today's article looks at the sensors that sit upstream of the processor in automotive electronics.
At present, the following types of sensors are mainly used for environmental sensing.
Passive sensors - sense radiation reflected or emitted by objects.
- Visible-light image sensors: imagers that operate in the visible spectrum.
- Infrared image sensors: operate outside the visible spectrum, in either the near infrared or the thermal (far) infrared.
Passive sensors are affected by environmental conditions such as time of day and weather. For example, a visible-light sensor's performance depends on how much ambient light is available.
Active sensors - emit radiation and measure the reflected response. Their advantage is that measurements can be obtained at any time, regardless of time of day or season.
- Radar: transmits radio waves and determines an object's distance, direction, and speed from the waves reflected back.
- Ultrasonic: transmits ultrasonic pulses and determines an object's distance from the echo reflected off it.
- LiDAR: scans a laser beam across the scene and determines distance from the reflected light.
- Time of flight: a camera measures the time it takes an infrared pulse to bounce off an object and return to the sensor, yielding the object's distance.
- Structured light: a known light pattern is projected onto an object, for example with a TI digital light processing (DLP) device; a camera then captures the deformed pattern, which is analyzed to determine the object's distance.
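Several of the active sensors above (ultrasonic, LiDAR, time of flight) share the same ranging principle: measure the round-trip travel time of a signal and halve the path. A minimal sketch of that calculation (the function name and constants are illustrative, not from any TI API):

```python
# Round-trip ranging: distance = wave_speed * round_trip_time / 2,
# since the signal travels to the object and back.

SPEED_OF_LIGHT_M_S = 299_792_458      # LiDAR and time-of-flight cameras
SPEED_OF_SOUND_M_S = 343.0            # ultrasonic, in air at roughly 20 C

def range_from_round_trip(round_trip_s: float, wave_speed_m_s: float) -> float:
    """One-way distance to the reflecting object: half the round-trip path."""
    return wave_speed_m_s * round_trip_s / 2.0

# An ultrasonic echo returning after 5.8 ms puts the object about 1 m away:
print(range_from_round_trip(5.8e-3, SPEED_OF_SOUND_M_S))   # ~0.995 m
# A LiDAR return after 66.7 ns puts the object about 10 m away:
print(range_from_round_trip(66.7e-9, SPEED_OF_LIGHT_M_S))  # ~10 m
```

The huge gap between the two timescales (milliseconds versus nanoseconds) is why ultrasonic sensors suit short-range parking assistance while LiDAR and time-of-flight sensors need much faster timing electronics.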
To provide enhanced accuracy, reliability, and robustness across many different situations, it is usually necessary to observe the same scene with more than one sensor. Every sensor technology has inherent strengths and limitations. Combining different sensor technologies and fusing their data on the same scene yields a more stable and robust solution, resolving ambiguous views through data fusion. A typical example is the combination of a visible-light sensor and radar.
The advantages of visible-light sensors include high resolution, the ability to identify and classify objects, and rich scene detail. However, their performance is affected by the amount of ambient light and by weather such as fog, rain, and snow. Other factors, such as overheating, can also degrade image quality through increased noise. Sophisticated image processing on TI's processors can mitigate some of these effects.
Radar, on the other hand, can see through rain and snow and measures distance quickly and efficiently. Doppler radar has the added advantage of directly detecting object motion. However, radar has lower resolution and cannot easily classify objects. Fusing the data produced by visible-light sensors and radar therefore provides a more robust solution across many different situations.
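One simple way to combine the two complementary measurements described above is an inverse-variance weighted average, so the more certain sensor dominates the fused estimate. This is an illustrative sketch only; the class names, fields, and numbers are hypothetical, not part of any TI software:

```python
# Fusing a camera detection (classifies well, ranges poorly) with a
# radar return (ranges well, cannot classify) via inverse-variance weighting.
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # e.g. "pedestrian", "vehicle"
    confidence: float   # classifier confidence, 0..1
    range_m: float      # coarse range estimated from image scale
    range_var: float    # high variance: monocular range is inaccurate

@dataclass
class RadarReturn:
    range_m: float      # accurate range
    range_var: float    # low variance
    velocity_mps: float # Doppler radial velocity

def fuse_range(cam: CameraDetection, radar: RadarReturn) -> float:
    """Inverse-variance weighted average of the two range estimates."""
    w_cam = 1.0 / cam.range_var
    w_rad = 1.0 / radar.range_var
    return (w_cam * cam.range_m + w_rad * radar.range_m) / (w_cam + w_rad)

cam = CameraDetection("pedestrian", 0.91, range_m=14.0, range_var=9.0)
radar = RadarReturn(range_m=12.2, range_var=0.04, velocity_mps=-1.3)
print(fuse_range(cam, radar))  # ~12.21: dominated by the low-variance radar
```

The fused track keeps the camera's object label and the radar's range and velocity, which is exactly the complementarity the paragraph above describes. Production systems typically do this with a Kalman filter rather than a one-shot average.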
In addition, sensor costs vary, which also influences the best choice for a given application. LiDAR, for example, provides very accurate distance measurement but is much more expensive than passive image sensors. As these technologies continue to develop, costs will keep falling, and cars will eventually be able to see and hear in all directions with the help of multiple sensors.
The TDA processor family is highly integrated and built on a programmable platform that meets the processing-intensive demands of automotive ADAS. Data from the different sensors observing a scene can be fed to the TDA2x and combined into a more complete picture to support fast, intelligent decisions. For example, in a dark scene a visual sensor may render a mailbox as a human-like shape. TI's processors can perform sophisticated pedestrian detection: based on the object's scale, the system might first identify it as a pedestrian at the roadside. Data from a thermal sensor, however, would show that the object's temperature is too low for a living being, so it is probably not a pedestrian. Sensors with different operating characteristics can thus provide a higher level of safety.
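The mailbox example above amounts to a cross-check rule: accept the visual pedestrian hypothesis only when the thermal cue agrees. A minimal sketch of that logic, with an assumed (not TI-specified) temperature threshold:

```python
# Hypothetical cross-check: a thermal (far-infrared) reading vetoes a
# visual pedestrian detection when the object is too cold to be a person.

MIN_BODY_SURFACE_TEMP_C = 25.0  # assumed threshold; a real system would calibrate this

def confirm_pedestrian(visual_says_pedestrian: bool,
                       surface_temp_c: float) -> bool:
    """Accept the pedestrian hypothesis only if the shape cue and the
    thermal cue agree."""
    if not visual_says_pedestrian:
        return False
    return surface_temp_c >= MIN_BODY_SURFACE_TEMP_C

# A mailbox that looks human-shaped at dusk but reads near ambient temperature:
print(confirm_pedestrian(True, 12.0))  # False: vetoed by the thermal cue
# A warm, human-shaped object:
print(confirm_pedestrian(True, 33.5))  # True: both cues agree
```

Real fusion pipelines weigh many more cues probabilistically, but the principle is the same: independent sensing modalities let one sensor's false positive be caught by another.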
For driverless cars, the ultimate goal is a fully autonomous vehicle, and such vehicles may eventually bring about a world without traffic accidents. TI is actively investing in the sensing and processing technologies that help customers develop driverless vehicles. After a series of technological breakthroughs and continued development, the question is no longer whether we can achieve driverless cars, but when.