Today, most self-driving cars rely on sensor fusion: analyzing and integrating data from millimeter-wave radar, lidar, and cameras to build a picture of the vehicle's environment. As the leaders of the self-driving industry have demonstrated, multi-sensor fusion improves the performance of autonomous driving systems and makes vehicle travel safer.
But not all sensor fusion is created equal. While many autonomous vehicle manufacturers rely on “target-level” sensor fusion, only centralized sensor-forward fusion, in which raw, low-level sensor data is fused at a central processor, gives the autonomous system the information it needs to make the best driving decisions. Below, we explain the difference between target-level fusion and centralized sensor-forward fusion, and why the latter is indispensable.
Centralized sensor fusion preserves raw sensor data for more accurate decision-making
Autonomous driving systems typically rely on a set of specialized sensors to collect low-level raw data about their environment. Each type of sensor has its own advantages and disadvantages.
Fusing millimeter-wave radar, lidar, and camera data maximizes both the quality and the quantity of the information collected, yielding a complete picture of the environment.
Autonomous vehicle manufacturers widely accept that multi-sensor fusion outperforms processing each sensor in isolation, but today's fusion usually happens at the “target level”, as a post-processing step. In this mode, data collection, processing, and object classification all occur at the sensor level, and only the resulting object lists are combined. Because each sensor filters its measurements before fusion takes place, most of the contextual information that autonomous driving decisions depend on is eliminated, making target-level fusion a poor fit for future autonomous driving algorithms.
Centralized sensor-forward fusion avoids this loss. Millimeter-wave radar, lidar, and camera sensors send their low-level raw data to the vehicle's central domain controller for processing. This approach maximizes the information available to the autonomous driving system: the algorithm sees all of the valuable data and can therefore make better decisions than target-level fusion allows.
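To make the contrast concrete, here is a minimal Python sketch of the two data flows. Every name in it is hypothetical, and the simple concatenation stands in for a real learned perception model; it illustrates the architecture, not any vendor's implementation.

# Hypothetical sketch: target-level vs. centralized sensor-forward fusion.
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    """A sensor-level object: position and a class label, nothing more."""
    x: float
    y: float
    label: str


def target_level_fusion(radar_objs: list, camera_objs: list) -> list:
    # Each sensor has already reduced its raw measurements to object lists.
    # Weak returns, texture, and other context were filtered out before
    # fusion ever sees the data.
    return radar_objs + camera_objs


def sensor_forward_fusion(radar_cube: np.ndarray,
                          camera_frame: np.ndarray) -> np.ndarray:
    # The raw radar cube (range x Doppler x angle) and camera pixels reach
    # the central domain controller intact; this concatenation stands in
    # for a joint perception model that sees everything at once.
    return np.concatenate([radar_cube.ravel(), camera_frame.ravel()])


objects = target_level_fusion([Detection(10.0, 2.0, "car")],
                              [Detection(10.1, 2.1, "car")])
features = sensor_forward_fusion(np.zeros((64, 32, 16)),
                                 np.zeros((720, 1280)))
print(len(objects), features.shape)  # 2 (954368,)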
AI-enhanced mmWave radar significantly improves autonomous driving system performance through centralized processing
Today's autonomous driving systems already process camera data centrally, but for millimeter-wave radar data, centralized processing has so far been impractical. High-performance radars typically require hundreds of antenna channels, which multiplies the volume of raw data they generate, so local processing has been the more cost-effective option, as the back-of-envelope estimate below suggests.
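The channel count, sample rate, and bit depth in this estimate are illustrative assumptions, not figures from the article:

# Back-of-envelope estimate of the raw data rate of a high-channel-count
# radar. All three figures below are assumptions for illustration.
channels = 192          # "hundreds of antenna channels" (assumed value)
sample_rate_hz = 20e6   # ADC sample rate per channel (assumed)
bits_per_sample = 16    # real samples; complex I/Q would double this

raw_rate_gbps = channels * sample_rate_hz * bits_per_sample / 1e9
print(f"raw radar stream: {raw_rate_gbps:.1f} Gbit/s")  # 61.4 Gbit/s

# Even a fraction of this overwhelms a 1 Gbit/s automotive Ethernet link,
# which is why conventional designs process radar data locally.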
However, Ambarella's AI-enhanced mmWave radar perception algorithms improve radar angular resolution and performance without additional physical antennas. Raw radar data from these fewer channels can then be carried to the central processor at low cost over interfaces such as standard automotive Ethernet. When the autonomous driving system fuses this AI-enhanced raw radar data with raw camera data, it exploits two complementary sensing modalities and builds a picture of the environment more complete than either sensor could produce alone.
This new generation of millimeter-wave radar cuts cost while greatly improving autonomous driving system performance. Mass-produced, each radar can cost under US$50, an order of magnitude below the target cost of lidar. Combined with ubiquitous low-cost camera sensors, AI radar provides the accuracy needed for large-scale commercial deployment of autonomous vehicles. Lidar's coverage largely overlaps with that of an AI-driven camera/millimeter-wave radar fusion system; if lidar costs continue to fall, it can serve as a redundant safety layer on top of camera-plus-radar in L4/L5 autonomous driving systems.
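As a rough illustration of the economics, the sketch below combines the article's sub-US$50 radar figure with assumed camera and lidar prices (the lidar figure simply takes "an order of magnitude higher" literally); the suite sizes are hypothetical:

radar_unit_cost = 50     # article's mass-production figure, upper bound (US$)
camera_unit_cost = 30    # assumed low-cost automotive camera (US$)
lidar_unit_cost = 500    # assumed: ~10x the radar figure (US$)

# Hypothetical suites: 5 radars + 8 cameras, with or without 2 lidars.
radar_camera_suite = 5 * radar_unit_cost + 8 * camera_unit_cost
with_lidar_redundancy = radar_camera_suite + 2 * lidar_unit_cost

print(f"radar + camera suite:  ${radar_camera_suite}")      # $490
print(f"plus lidar redundancy: ${with_lidar_redundancy}")   # $1490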
Algorithm-first central processing architecture deepens sensor fusion to optimize autonomous driving system performance
Current target-level sensor fusion is also limited by its hardware. Because every front-end sensor carries its own local processor, each smart sensor is constrained in size, power consumption, and compute allocation, which in turn caps the performance of the entire autonomous driving system. In addition, duplicating heavy data processing at every sensor drains the vehicle's battery and shortens its driving range.
In contrast, an algorithm-first central processing architecture enables what we call deep, centralized sensor-forward fusion. Built on the most advanced semiconductor process nodes, it optimizes autonomous driving performance chiefly because processing power can be dynamically distributed across all sensors, raising the priority of particular sensors and data streams to suit the driving scenario, as sketched below. With access to high-quality, low-level raw data, the central processor can make smarter and more accurate driving decisions.
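What "dynamically distributed processing power" could look like is sketched here. This is an assumed design for illustration, not the CV3's actual scheduler; the scenarios, weights, and compute budget are all invented:

SCENARIO_WEIGHTS = {
    # driving scenario -> fraction of compute budget per stream (assumed)
    "highway": {"radar": 0.5, "camera": 0.4, "lidar": 0.1},
    "urban":   {"radar": 0.3, "camera": 0.6, "lidar": 0.1},
    "night":   {"radar": 0.6, "camera": 0.3, "lidar": 0.1},
}

def allocate_compute(scenario: str, total_tops: float) -> dict:
    """Split a fixed central compute budget (in TOPS) across sensor streams."""
    return {stream: total_tops * weight
            for stream, weight in SCENARIO_WEIGHTS[scenario].items()}

print(allocate_compute("night", total_tops=500.0))
# {'radar': 300.0, 'camera': 150.0, 'lidar': 50.0}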
Autonomous vehicle manufacturers can pair low-power millimeter-wave radar and camera sensors with cutting-edge, algorithm-first application-specific processors, such as Ambarella's recently announced CV3 AI domain-control SoC. Built on a 5 nm process, the CV3 is designed to deliver leading perception and path-planning performance at a class-leading energy-efficiency ratio, extending each autonomous vehicle's range while reducing battery consumption.
Don’t throw away sensors – invest in their fusion
Autonomous driving systems require diverse data to make correct driving decisions, and only deep, centralized sensor fusion can supply the breadth of data that optimal performance and safety demand. In our ideal model (a minimal sketch follows the list):
1. Low-power, AI-enhanced mmWave radar and camera sensors are locally connected to embedded processors at the periphery of the autonomous vehicle.
2. The embedded processor sends the raw, detection-level data to the central domain SoC.
3. Using AI, the central domain processor analyzes the combined data to identify objects and make driving decisions.
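Here is a minimal end-to-end sketch of this three-step model. Every name is hypothetical, and a real system would stream data over automotive Ethernet rather than pass Python arrays:

import numpy as np

def edge_node(raw_samples: np.ndarray) -> np.ndarray:
    # Steps 1-2: the peripheral embedded processor does only light
    # conditioning, no object classification, and forwards detection-level
    # data to the central domain SoC.
    return raw_samples.astype(np.float32)

def central_domain_soc(streams: list) -> str:
    # Step 3: the central processor fuses all streams and decides; a
    # simple threshold stands in for an AI perception network here.
    fused = np.concatenate([s.ravel() for s in streams])
    return "brake" if fused.max() > 0.9 else "cruise"

radar = edge_node(np.random.rand(4, 16))   # simulated radar detections
camera = edge_node(np.random.rand(8, 8))   # simulated camera features
print(central_domain_soc([radar, camera]))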
Centralized sensor-forward fusion improves on existing high-level fusion architectures, making fusion-based autonomous vehicles both powerful and reliable. To reap these benefits, autonomous vehicle manufacturers must invest in algorithm-first central processors as well as AI-enabled millimeter-wave radar and camera sensors. Through these efforts, manufacturers can usher in the next stage of technological change in the development of autonomous vehicles.