The future of autonomous vehicles: centralized sensor fusion


Today, most self-driving cars rely on sensor fusion: data from millimeter-wave radar, lidar, and cameras is analyzed and integrated to build a picture of the vehicle's environment. As the leaders of the self-driving industry have demonstrated, multi-sensor fusion improves the performance of autonomous driving systems and makes vehicle travel safer.


But not all sensor fusion is created equal. While many autonomous vehicle manufacturers rely on “target-level” sensor fusion, only centralized “early” fusion, in which raw sensor data is combined before any per-sensor processing, can give the autonomous system the information it needs to make the best driving decisions. Below, we explain the difference between target-level fusion and centralized early fusion, and why the latter proves indispensable.


Centralized sensor fusion preserves raw sensor data for more accurate decision making


Autonomous driving systems typically rely on a set of specialized sensors to collect low-level raw data about their environment. Each type of sensor has advantages and disadvantages, as shown in the figure:


[Figure: strengths and weaknesses of millimeter-wave radar, lidar, and camera sensors]


Fusing millimeter-wave radar, lidar, and camera data maximizes both the quality and the quantity of the information collected, generating a complete picture of the environment.


The advantages of multi-sensor fusion over processing each sensor individually are widely accepted among autonomous vehicle manufacturers, but this fusion usually happens at the “target level”, as a post-processing step: object data is collected, processed, and classified at the individual sensor level, and only the resulting object lists are fused. Because each sensor filters its own data before the results are combined, much of the contextual information needed for driving decisions is eliminated before fusion ever sees it, which makes it difficult for target-level fusion to meet the needs of future autonomous driving algorithms.


Centralized early fusion avoids this risk. Millimeter-wave radar, lidar, and camera sensors send their low-level raw data to the vehicle's central domain controller for processing. This approach maximizes the information available to the autonomous driving system, giving the algorithms access to everything of value in the data and thereby enabling better decisions than target-level fusion.
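
To make the contrast concrete, here is a minimal Python sketch of the two data flows. All names, shapes, and thresholds are invented for illustration; this is not any vendor's actual pipeline. The point is that target-level fusion discards sub-threshold evidence at each sensor, while early fusion lets weak evidence from two sensors add up:

```python
import numpy as np

DETECTION_THRESHOLD = 1.0  # arbitrary units for this toy example

def local_object_detector(raw):
    """Stand-in for a smart sensor's on-board pipeline: threshold the raw
    measurements and forward only the surviving 'object' positions."""
    return {(int(i), int(j)) for i, j in np.argwhere(raw > DETECTION_THRESHOLD)}

def target_level_fusion(radar, camera):
    # Fusion only ever sees each sensor's filtered object list; returns that
    # fell below a sensor's threshold were discarded before fusion ran.
    return local_object_detector(radar) | local_object_detector(camera)

def centralized_early_fusion(radar, camera):
    # The central domain controller combines raw data first, so evidence that
    # is weak in each modality can still add up to a confident detection.
    return local_object_detector(radar + camera)

if __name__ == "__main__":
    radar = np.zeros((8, 8))
    camera = np.zeros((8, 8))
    radar[4, 4] = 0.7    # target is faint in both modalities,
    camera[4, 4] = 0.7   # below each sensor's local threshold
    print("target-level fusion:", target_level_fusion(radar, camera))            # set()
    print("centralized early fusion:", centralized_early_fusion(radar, camera))  # {(4, 4)}
```

Running the sketch, target-level fusion reports nothing, while early fusion recovers the target at (4, 4).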


AI-enhanced mmWave radar significantly improves autonomous driving system performance through centralized processing




Today, autonomous driving systems already process camera data centrally, but for millimeter-wave radar data, centralized processing has so far been impractical. High-performance millimeter-wave radars typically require hundreds of antenna channels, which greatly increases the volume of data they generate, so local processing at the sensor has been the more cost-effective option.
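
A back-of-envelope calculation shows the scale of the problem. Every number below is an illustrative assumption, not a figure from the article:

```python
# All numbers are illustrative assumptions chosen for scale, not article figures.
channels = 192           # "hundreds of antenna channels" (virtual channels)
sample_rate_hz = 20e6    # assumed ADC sample rate per channel
bits_per_sample = 16     # assumed ADC resolution

raw_bits_per_s = channels * sample_rate_hz * bits_per_sample
print(f"raw radar stream: {raw_bits_per_s / 1e9:.1f} Gbit/s")   # ~61.4 Gbit/s

# One 1000BASE-T1 automotive Ethernet link carries about 1 Gbit/s, so an
# unprocessed stream of this size is far too large to ship centrally, which
# is why radar data has traditionally been processed at the sensor itself.
print(f"1 Gbit/s links needed: {raw_bits_per_s / 1e9:.0f}")
```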


However, Ambarella's AI-enhanced millimeter-wave radar perception algorithms can improve radar angular resolution and performance without additional physical antennas. Raw radar data from fewer channels can then be carried to the central processor at low cost over interfaces such as standard automotive Ethernet. When the autonomous driving system fuses this AI-enhanced raw radar data with raw camera data, it can exploit the complementary strengths of the two sensing modalities to build a picture of the environment that is more complete than what either sensor could provide alone.
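
On the transport side, here is a toy sketch of shipping a reduced-channel raw radar frame to the central controller over a byte stream (a socket pair stands in for automotive Ethernet; the frame shape and dtype are assumptions):

```python
import socket
import numpy as np

def serialize_frame(frame: np.ndarray) -> bytes:
    return frame.tobytes()

def deserialize_frame(buf: bytes, shape, dtype) -> np.ndarray:
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

if __name__ == "__main__":
    # Assumed frame: 8 channels x 256 complex baseband samples (16 KiB).
    rng = np.random.default_rng(0)
    frame = (rng.standard_normal((8, 256))
             + 1j * rng.standard_normal((8, 256))).astype(np.complex64)

    edge, central = socket.socketpair()   # stand-in for the in-vehicle network
    edge.sendall(serialize_frame(frame))  # edge radar module ships raw samples
    edge.close()

    buf = b""
    while (chunk := central.recv(65536)):
        buf += chunk
    received = deserialize_frame(buf, frame.shape, np.complex64)
    assert np.array_equal(received, frame)
    print(f"central controller received {received.nbytes} raw bytes "
          f"({frame.shape[0]} channels x {frame.shape[1]} samples)")
```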


Iterating on millimeter-wave radar also helps cut costs while greatly improving autonomous driving system performance. Mass-produced conventional radars can cost under US$50 per unit, an order of magnitude below the target cost of lidar. Combined with ubiquitous low-cost camera sensors, AI-enhanced radar delivers the accuracy required for mass production of commercial autonomous vehicles at scale. A lidar sensor's coverage largely overlaps with that of a camera/millimeter-wave-radar fusion system running AI algorithms; if lidar costs continue to fall, lidar can serve as safety redundancy alongside camera + millimeter-wave radar in L4/L5 autonomous driving systems.
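
For a rough sense of the economics, here is a toy cost comparison using the article's radar figure; the camera and lidar prices and the per-vehicle unit counts are assumptions:

```python
# Radar figure from the article (< US$50 mass-produced); the lidar and camera
# prices and the per-vehicle unit counts are assumptions for illustration.
radar_unit = 50      # USD, article's upper bound
lidar_unit = 500     # USD, "an order of magnitude" above radar (assumed)
camera_unit = 30     # USD, assumed

suite = {"radar": (5, radar_unit), "camera": (8, camera_unit), "lidar": (1, lidar_unit)}
for name, (count, unit) in suite.items():
    print(f"{name:>6}: {count} x ${unit:>3} = ${count * unit}")

camera_radar = 5 * radar_unit + 8 * camera_unit
print(f"camera + radar suite: ${camera_radar}")                  # $490
print(f"with lidar as redundancy: ${camera_radar + lidar_unit}") # $990
```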


Algorithm-first central processing architecture deepens sensor fusion to optimize autonomous driving system performance


Current target-level sensor fusion has inherent limitations: every front-end sensor carries its own local processor, which constrains the size, power consumption, and compute resources of each smart sensor and, in turn, the performance of the entire autonomous driving system. Heavy data processing at the edge also drains the vehicle's battery quickly and shortens its driving range.


In contrast, an algorithm-first central processing architecture enables what we call deep, centralized early fusion. Built on the most advanced semiconductor process nodes, this approach optimizes autonomous driving system performance chiefly because processing power can be dynamically distributed across all sensors, boosting whichever sensors and data streams matter most in the current driving scenario. With access to high-quality, low-level raw data, the central processor can make smarter and more accurate driving decisions.
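
A sketch of what scenario-driven redistribution could look like on a central SoC. The compute budget, scenario names, and weights are all invented; the article says only that processing power is redistributed across sensors per scenario:

```python
TOTAL_TOPS = 100  # assumed AI compute budget of the central domain controller

# Relative importance of each raw data stream per scenario (assumed weights).
SCENARIO_WEIGHTS = {
    "highway_cruise": {"camera": 3, "radar": 5, "lidar": 2},
    "urban_night":    {"camera": 2, "radar": 6, "lidar": 2},
    "parking":        {"camera": 6, "radar": 2, "lidar": 2},
}

def allocate(scenario: str) -> dict:
    """Split the fixed compute budget across sensor streams by scenario."""
    weights = SCENARIO_WEIGHTS[scenario]
    total = sum(weights.values())
    return {sensor: round(TOTAL_TOPS * w / total, 1) for sensor, w in weights.items()}

for scenario in SCENARIO_WEIGHTS:
    print(scenario, allocate(scenario))
```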


Autonomous vehicle manufacturers can pair low-power millimeter-wave radar and camera sensors with cutting-edge, algorithm-first, application-specific processors such as Ambarella's recently announced CV3, an AI domain-controller SoC built on a 5 nm process. Its perception and path-planning performance and high energy-efficiency ratio significantly extend each autonomous vehicle's range by reducing battery drain.


Don’t throw away sensors – invest in their fusion


Autonomous driving systems require diverse data to make correct driving decisions, and only deep, centralized sensor fusion can provide the breadth of data required for optimal autonomous driving system performance and safety. In our ideal model (sketched in code after the list below):


1. Low-power, AI-enhanced mmWave radar and camera sensors are locally connected to embedded processors at the periphery of the autonomous vehicle.


2. The embedded processors send the raw, detection-level data to the central domain SoC.


3. Using AI, the central domain processor analyzes the combined data to identify objects and make driving decisions.
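
Here is a minimal end-to-end sketch of this three-step model. Every class, data shape, and threshold is an assumption made for illustration; this is not Ambarella's software:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    sensor: str
    position: tuple      # (x, y) grid cell; y doubles as a toy distance ahead
    strength: float

class EdgeProcessor:
    """Steps 1-2: an embedded processor at the vehicle's periphery packages
    raw, detection-level data and forwards it; no classification happens here."""
    def __init__(self, sensor: str):
        self.sensor = sensor
    def forward_raw(self, raw: np.ndarray) -> list:
        ys, xs = np.nonzero(raw)
        return [Detection(self.sensor, (int(x), int(y)), float(raw[y, x]))
                for y, x in zip(ys, xs)]

class CentralDomainSoC:
    """Step 3: fuses detection-level data from all sensors, identifies objects,
    and makes a (toy) driving decision."""
    def identify_and_decide(self, detections: list) -> str:
        if not detections:
            return "maintain speed"
        nearest = min(d.position[1] for d in detections)
        return "brake" if nearest < 2 else "maintain speed"

if __name__ == "__main__":
    radar_edge, camera_edge = EdgeProcessor("radar"), EdgeProcessor("camera")
    radar_raw = np.zeros((4, 4));  radar_raw[1, 2] = 0.9   # return close ahead
    camera_raw = np.zeros((4, 4)); camera_raw[3, 1] = 0.4  # return farther away
    fused = radar_edge.forward_raw(radar_raw) + camera_edge.forward_raw(camera_raw)
    print(CentralDomainSoC().identify_and_decide(fused))   # -> brake
```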


Centralized early fusion improves on existing high-level fusion architectures, making autonomous vehicles that use sensor fusion both powerful and reliable. To reap these benefits, autonomous vehicle manufacturers must invest in algorithm-first central processors as well as AI-enabled millimeter-wave radar and camera sensors. Through these efforts, manufacturers can usher in the next stage of technological change in the development of autonomous vehicles.

