With sensors ranging in cost from $15 to $1, automakers wonder how many sensors are needed for a vehicle to be fully autonomous. These sensors collect data about the surrounding environment, and they include imaging, lidar, radar, ultrasonic, and thermal sensors. No single type of sensor is enough, because each has its limitations. That is the key driver behind sensor fusion, which combines multiple types of sensors to enable safe autonomous driving. All vehicles at Level 2 or higher rely on sensors to “see” their surroundings and perform tasks such as lane centering, adaptive cruise control, emergency braking, and blind-spot warnings. So far, OEMs have taken very different approaches to their design and deployment.
In May 2022, Mercedes-Benz launched its first vehicle capable of Level 3 autonomous driving in Germany. Level 3 autonomous driving is an option for the S-Class and EQS and is scheduled to launch in the United States in 2024. According to the company, DRIVE PILOT builds on the driving assistance package (radar and camera) and adds new sensors, including lidar, an advanced stereo camera in the front windshield, and a multi-purpose camera in the rear window. Microphones (especially for detecting emergency vehicles) and a road wetness sensor in the front wheel well have also been added. A total of 30 sensors are installed to capture the data required for safe autonomous driving. Tesla is taking a different path. In 2021, the company announced that its camera-based Tesla Vision approach to autonomous driving would be implemented in the Model 3 and Model Y, and in 2022 in the Model S and Model X. It also has decided to remove the ultrasonic sensors.
Sensor limitations
One of the challenges facing autonomous driving design today is that every sensor type has its own limitations. To achieve safe autonomous driving, sensor fusion may be required. The key issue is not only the number, type, and placement of sensors, but also how AI/ML technology should interact with them to analyze the data and make optimal driving decisions.
To overcome these limitations, sensor fusion may be required, combining multiple sensors so that autonomous driving achieves optimal performance and safety. “Autonomous driving makes extensive use of artificial intelligence technologies,” said Thierry Kouthon, technical product manager for Rambus Security IP. “Autonomous driving, even entry-level ADAS features, requires the vehicle to exhibit a level of environmental awareness that is comparable to or better than that of a human driver. First, the vehicle must recognize other vehicles, pedestrians, and roadside infrastructure and determine their correct location.
This requires deep learning techniques that are very good at pattern recognition. Visual pattern recognition is an advanced area of deep learning that vehicles rely on heavily. In addition, the vehicle must be able to calculate its optimal trajectory and speed at all times, which requires AI techniques that handle route planning equally well. Here, lidar and radar provide the distance information that is crucial for correctly reconstructing the vehicle’s environment.”
Sensor fusion, which combines information from different sensors to better understand the vehicle’s environment, remains an active area of research. “Each type of sensor has limitations,” Kouthon said. “Cameras are great for object recognition but provide poor range information, and image processing requires a lot of computational resources. In contrast, lidar and radar provide excellent range information but at much lower resolution. Lidar also does not work well in bad weather conditions.”
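To make Kouthon’s point concrete, the sketch below pairs camera detections (which carry a class label but only a rough bearing) with lidar returns (which carry an accurate range but no class). It is a minimal, illustrative fusion step under assumed data structures and values, not any production pipeline.

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str           # object class from the vision model (assumed)
    bearing_deg: float   # horizontal angle to the object, from image position

@dataclass
class LidarReturn:
    bearing_deg: float
    range_m: float

def fuse(camera_dets, lidar_returns, max_bearing_err_deg=2.0):
    """Pair each camera detection (good class, poor range) with the
    nearest-bearing lidar return (good range, no class)."""
    fused = []
    for det in camera_dets:
        best = min(lidar_returns,
                   key=lambda r: abs(r.bearing_deg - det.bearing_deg),
                   default=None)
        if best and abs(best.bearing_deg - det.bearing_deg) <= max_bearing_err_deg:
            fused.append((det.label, best.range_m))
    return fused

# Illustrative inputs: a pedestrian seen by the camera at roughly 10 degrees,
# and a lidar return near the same bearing giving a 23.4 m range.
print(fuse([CameraDetection("pedestrian", 10.2)],
           [LidarReturn(9.8, 23.4), LidarReturn(40.0, 7.1)]))
```

The point of the example is only that each modality contributes what the other lacks; a real system would fuse in 3D, track over time, and handle occlusions and sensor noise.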
How many sensors are needed?
There is no simple answer to the question of how many sensors an autonomous driving system needs, and OEMs are still trying to figure it out. Other considerations include the fact that, for example, a truck driving on the open highway and an urban robotaxi have very different needs. “This is a tough calculation because each automotive OEM has its own architecture for protecting the vehicle by providing better spatial localization, longer range, and high visibility, as well as the ability to identify and classify objects and then differentiate between them,” said Amit Kumar, director of product management and marketing at Cadence. “It also depends on the level of autonomy the automaker decides to enable and how broad those capabilities will be. In short, to achieve partial autonomy, the minimum can be 4 to 8 sensors of various types. To achieve full autonomy, 12 or more sensors are used today.”
Kumar pointed out that in Tesla’s case there are 20 sensors (8 cameras plus 12 ultrasonic sensors) for Level 3 or below, with no lidar or radar. “The company is a firm believer in computer vision, and its sensor suite is geared toward L3 autonomy. According to media reports, Tesla may introduce radar to improve autonomous driving.” Zoox has implemented four lidar sensors along with a combination of camera and radar sensors. It is a fully autonomous vehicle with no driver inside, intended to operate on clearly mapped and well-understood routes. Commercial deployment has not yet begun, but it will soon be available for limited use cases (not as widespread as passenger cars). Nuro’s self-driving delivery van, where aesthetics are less important, uses a 360-degree camera system with four sensors, plus a 360-degree lidar sensor, four radar sensors, and ultrasonic sensors.
There is no simple formula for implementing these systems. “The number of sensors you need is the number that is acceptable for the organization’s risk level, and it also depends on the application,” said Chris Clark, senior manager of automotive software and safety in Synopsys’ automotive group. “If you are developing robotaxis, they not only need sensors for road safety, but also sensors inside the car to monitor passenger behavior and ensure passenger safety. In that case we are in an area with a large population and high urban density, which has fairly unique characteristics compared with a vehicle designed for highway driving, where you have longer ranges and more room to react. On the highway, the potential for intrusions onto the road is lower. I don’t think there is a set rule that you have to have three different types of sensors and three different cameras to cover all the different angles for every autonomous vehicle.”
How many sensors a vehicle carries will depend on the use case it addresses, though. “In the robotaxi example, lidar and regular cameras would have to be used, along with ultrasonic or radar, because there’s so much density to deal with,” Clark said. “We also need to include a sensor for V2X, so that the data coming into the vehicle is consistent with what the vehicle is seeing in its surroundings. A highway trucking solution will use different types of sensors. Ultrasonic is not as beneficial on the highway, unless we’re doing something like platooning, but that’s not a forward-looking sensor. Instead, it’s probably forward- and rear-looking sensors so that we can connect to all of the fleet assets. But lidar and radar become more important because of the distances and ranges that trucks have to consider when they’re driving on the highway.”
Another consideration is the level of analysis required. “With so much data to process, we have to decide how much of it actually matters,” he said. “That’s where sensor types and capabilities become interesting. For example, if a lidar sensor can perform local analysis early in the cycle, it reduces the amount of data that flows back to sensor fusion for additional analysis. Reducing the amount of data, in turn, reduces overall computing power and system design costs. Otherwise, the vehicle will require additional processing in the form of a consolidated computing environment or a dedicated ECU focused on sensor fusion and analysis.”
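As a rough illustration of the local analysis Clark describes, the sketch below pre-filters a lidar point cloud at the sensor, dropping returns outside the range of interest and voxel-downsampling what remains, so far less data has to travel back to the central fusion ECU. The thresholds, the coordinate convention, and the random cloud are assumptions for the example, not figures from the article.

```python
import numpy as np

def prefilter_point_cloud(points, max_range_m=80.0, min_height_m=-0.2, voxel_m=0.2):
    """Illustrative 'local analysis' at the lidar sensor: drop returns beyond the
    range of interest, drop ground-level returns, then voxel-downsample so that
    much less data flows back to the central fusion stage."""
    # points: (N, 3) array of x, y, z in meters, in the sensor frame (assumed layout)
    dist = np.linalg.norm(points[:, :2], axis=1)
    keep = (dist <= max_range_m) & (points[:, 2] > min_height_m)
    pts = points[keep]
    # Voxel downsample: keep one point per occupied cube of side voxel_m.
    voxel_idx = np.floor(pts / voxel_m).astype(np.int64)
    _, unique_rows = np.unique(voxel_idx, axis=0, return_index=True)
    return pts[unique_rows]

# Illustrative run: 100,000 synthetic returns reduced to a much smaller set.
cloud = np.random.uniform(-120, 120, size=(100_000, 3))
filtered = prefilter_point_cloud(cloud)
print(len(cloud), "->", len(filtered), "points sent to fusion")
```

The design trade-off is exactly the one described above: more intelligence at the edge means less bandwidth and central compute, at the cost of a more capable (and more expensive) sensor module.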
Cost is always an issue
Sensor fusion can be expensive. In the early days, a lidar system consisting of multiple units could cost as much as $80,000, with most of that cost coming from the mechanical parts in the unit. Today the cost is much lower, and some manufacturers expect it could eventually fall to $200 to $300 per unit. Emerging thermal sensor technology still costs in the thousands of dollars. Overall, OEMs will remain under pressure to reduce the total cost of sensor deployment, and using more cameras in place of lidar systems is one way to cut manufacturing costs.
“In an urban environment, the basic definition of safety is to eliminate all avoidable collisions,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. The minimum number of sensors required depends on the use case. Some believe that smart city infrastructure eventually will become sophisticated and ubiquitous enough to reduce the need for onboard sensing in urban environments. Vehicle-to-vehicle communications may also have an impact on sensor counts. “Here, the number of onboard sensors may decrease, but we are not there yet,” Fritz said. “In addition, there will always be situations where the AV has to assume that not all external information is available, due to a power failure or other outage. So there will always need to be some sensors on the vehicle, not only in urban areas but also in rural areas.”
“Many of the designs we’ve been working on require eight cameras on the outside of the vehicle and several cameras on the interior. With two cameras on the front, properly calibrated, we can achieve low-latency, high-resolution stereo vision that provides the depth range of objects, reducing the need for radar. We do this on the front, rear, and sides of the vehicle for a full 360° view.” With all cameras performing object detection and classification, critical information is passed to a central computing system to make control decisions. “If infrastructure or other vehicle information is available, it is fused with information from onboard sensors to generate a more comprehensive 3D view, enabling better decisions,” Fritz said. “Inside, additional cameras are used for driver monitoring and to detect occupancy conditions such as objects left behind. A possible advanced addition to the sensor suite is a low-cost radar to handle adverse weather conditions such as fog or rain. We haven’t seen a lot of use of lidar lately.”
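Fritz’s depth-from-stereo point rests on the standard pinhole stereo relation, depth = focal length × baseline / disparity: two calibrated cameras see the same object shifted by a disparity, and that shift encodes range. The sketch below simply applies the formula with assumed focal length and baseline values; none of the numbers come from the article.

```python
def stereo_depth_m(disparity_px, focal_length_px, baseline_m):
    """Standard pinhole stereo relation: depth = f * B / d.
    A larger baseline or focal length gives usable depth at longer range."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two images")
    return focal_length_px * baseline_m / disparity_px

# Assumed example values: 1400 px focal length, 30 cm baseline between the cameras.
# An object with an 8-pixel disparity would then be roughly 52 m away.
print(round(stereo_depth_m(8, 1400, 0.30), 1))
```

The same relation also shows why calibration matters so much: a small error in the measured disparity or baseline translates directly into a proportional error in the estimated depth, which grows with distance.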