When automotive chips encounter multi-sensor fusion: "It's just too hard"

Publisher: 心若水仙 | Last updated: 2019-12-16 | Source: Gasgoo (盖世汽车)

Where human drivers rely mainly on their eyes to observe the surrounding road, self-driving cars perceive the road environment through an array of onboard sensors. For an autonomous vehicle, sensors are its "eyes".


At present, the environment-perception sensors on self-driving cars mainly include cameras, ultrasonic radar, millimeter-wave radar, and lidar, and each has its own strengths and weaknesses. Cameras excel at identifying the attributes of objects such as vehicles, pedestrians, and traffic lights, but their recognition accuracy degrades easily in harsh conditions such as strong light, rain, snow, and fog; in addition, camera-based ADAS solutions usually demand strong computer-vision capability from the vehicle, and where that is lacking, the camera's ability to perceive the environment drops sharply. Millimeter-wave radar penetrates weather well but images poorly. Lidar offers high resolution and long detection range, yet it cannot operate normally in rain, snow, haze, or sandstorms, and it is expensive. As vehicles move to higher levels of autonomy, no single sensor can cope with the complex scenarios and safety redundancy that autonomous driving demands, so multi-sensor fusion has become inevitable.
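
To make the idea of fusion concrete, here is a minimal sketch of measurement-level fusion: two sensors report the same quantity with different noise levels, and an inverse-variance weighted average yields an estimate better than either alone. The sensor names and noise figures are purely illustrative, not taken from any real system.

    # Minimal measurement-level sensor fusion: each sensor reports a distance
    # estimate with a known noise variance. All values here are illustrative.

    def fuse(measurements):
        """Inverse-variance weighted fusion of independent estimates.

        measurements: list of (value, variance) pairs.
        Returns (fused_value, fused_variance).
        """
        weights = [1.0 / var for _, var in measurements]
        fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
        fused_variance = 1.0 / sum(weights)
        return fused_value, fused_variance

    # Radar is precise in range; the camera is noisier but adds redundancy.
    radar = (42.3, 0.1)    # distance in metres, variance in m^2
    camera = (41.0, 1.5)
    value, var = fuse([radar, camera])
    print(f"fused distance: {value:.2f} m (variance {var:.3f} m^2)")

Note that the fused variance (about 0.094 m^2) is smaller than that of either sensor alone, which is precisely the redundancy argument for carrying multiple sensor types.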


For example, Tesla's Autopilot 2.0 hardware carries 8 cameras, 1 millimeter-wave radar, and 12 ultrasonic radars, while the new Audi A8, billed as the "world's first mass-production L3 autonomous car", carries as many as 24 sensors, including 4 fisheye cameras, 12 ultrasonic radars, 4 medium-range millimeter-wave radars, 1 long-range millimeter-wave radar, 1 lidar, and 1 front-view camera. WM Motor's Living Pilot driver-assistance system likewise uses 20 sensors: 1 front monocular high-definition camera, 3 millimeter-wave radars, 4 surround-view cameras, and 12 ultrasonic radars.




Why does a car need so many sensors? The most direct reason is to improve the accuracy and precision of the perception system, and thereby the safety and robustness of the autonomous driving system as a whole. But that does not mean that the more sensors an autonomous car carries, the better.


In theory, the more sensors a car carries, the more effectively it can detect risks and obstacles in its surroundings. But more sensors bring problems of their own. Cost is the obvious one: every additional sensor adds to the bill of materials. Data is the other: these sensors generate enormous volumes of raw data every day, which places ever-higher demands on the processing power of the onboard chips.
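
To see why the data problem is real, a rough back-of-envelope tally helps. The sketch below sums the raw output of a hypothetical sensor suite loosely modeled on the ones described above; every per-sensor rate is a coarse assumption for illustration, not a vendor specification.

    # Back-of-envelope raw data volume for a hypothetical sensor suite.
    # Per-sensor rates are rough illustrative assumptions, not vendor specs.

    SENSOR_RATES_MBPS = {            # approximate raw output, Mbit/s
        "camera": 1500,              # uncompressed 1080p at 30 fps
        "lidar": 100,
        "millimeter-wave radar": 10,
        "ultrasonic radar": 0.1,
    }

    SUITE = {                        # counts loosely modeled on the suites above
        "camera": 8,
        "lidar": 1,
        "millimeter-wave radar": 5,
        "ultrasonic radar": 12,
    }

    total_mbps = sum(SENSOR_RATES_MBPS[s] * n for s, n in SUITE.items())
    gbytes_per_s = total_mbps / 8 / 1000
    print(f"aggregate raw rate: {total_mbps:,.0f} Mbit/s "
          f"(~{gbytes_per_s:.1f} GB/s, ~{gbytes_per_s * 3.6:.1f} TB per hour)")

Even with these conservative figures, the suite produces on the order of 1.5 GB of raw data per second, and an hour of driving yields several terabytes, which is why on-vehicle preprocessing and chip throughput matter so much.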


At present, as the computing demands of ADAS and autonomous driving keep rising, the computing power of traditional MCUs can no longer meet the needs of autonomous vehicles, and the industry increasingly relies on higher-performance chips such as CPUs, GPUs, FPGAs, and ASICs. FPGAs in particular, compared with CPUs, GPUs, and ASICs, can better meet the functional requirements of different levels of autonomous driving thanks to their high adaptability, high throughput, and low latency.
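
A toy timing model shows why the FPGA's spatial, pipelined execution style delivers high throughput at predictable latency: when each processing stage gets its own dedicated hardware, a new frame can enter the pipeline as often as the slowest single stage allows, rather than waiting for the whole chain to finish. The stage names and latencies below are invented for illustration.

    # Toy timing model of sequential vs. pipelined (FPGA-style) execution.
    # Stage latencies are invented for illustration.

    STAGES_MS = {"capture": 2.0, "preprocess": 3.0, "detect": 5.0, "track": 1.0}

    sequential_latency = sum(STAGES_MS.values())   # one shared compute resource
    pipelined_interval = max(STAGES_MS.values())   # dedicated hardware per stage

    print(f"sequential: {sequential_latency:.1f} ms/frame "
          f"-> {1000 / sequential_latency:.0f} fps")
    print(f"pipelined:  a new frame every {pipelined_interval:.1f} ms "
          f"-> {1000 / pipelined_interval:.0f} fps "
          f"(per-frame latency still {sequential_latency:.1f} ms)")

The per-frame latency is unchanged, but the frame rate is set by the slowest stage alone; deterministic stage timing is also what underlies the strong timing guarantees mentioned in the quote below.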




"For example, in terms of latency, the use of the FPGA solution has improved by 12 times, while the energy consumption of the FPGA solution may be only 1/10 of that of the general computing architecture. In addition, the FPGA solution has some additional benefits that the general computing architecture cannot provide, such as very strong timing and higher security. In addition, based on the FPGA platform, we can also do some customized development to make our algorithms more efficient." Li Hengyu, general manager of Pony.ai Beijing R&D Center, said this at the Xilinx XDF Conference Asia a few days ago.


Xilinx has long been at the forefront of FPGA development. As of 2018, cumulative shipments of Xilinx's adaptive devices had reportedly reached 170 million units, covering 111 vehicle models from 29 car brands. Building on this, Xilinx launched two more automotive-grade products at the event, the Zynq UltraScale+ MPSoC 7EV and 11EG. Compared with earlier parts, the new devices are said to offer the best programmability, performance, and I/O capability to date, enabling high-speed data aggregation, preprocessing, distribution, and compute acceleration, and meeting the development needs of ADAS and autonomous vehicles from L2 through L4.




Dan Isaacs, director of automotive strategy and customer marketing at Xilinx, argues that scalability and adaptability are especially important. The data-processing requirements differ at each stage of autonomous driving's development, he pointed out, so the relevant in-vehicle systems and devices must scale; only scalable, adaptive products can support the continuous iteration of autonomous driving products and technologies.


Isaacs also believes that the internal connectivity of future self-driving cars is critical: distributed algorithms must give every node its own computing capability if the car's different functions are to perform well. Li Hengyu agrees. In his view, a distributed system in which every node has compute and intelligence is the most efficient; concentrating all computing and intelligence in one central node, or in the cloud, is not a good solution.
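
As a sketch of what "compute at every node" means in practice, the toy pipeline below has each sensor node reduce its raw samples to a compact detection list locally, so the central node fuses small summaries rather than raw streams. All class names, fields, and values are hypothetical.

    # Toy distributed pipeline: nodes preprocess locally and forward compact
    # detections, so the central node never touches raw sensor streams.
    # All names and values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        distance_m: float

    class SensorNode:
        """A node with local compute: turns raw readings into detections."""
        def __init__(self, name):
            self.name = name

        def process(self, raw_samples):
            # Stand-in for on-node perception; a real node would run a full
            # detection pipeline on its own silicon.
            return [Detection("obstacle", min(raw_samples))]

    class CentralNode:
        """Central fusion sees only per-node detections, never raw data."""
        def fuse(self, per_node_detections):
            return min(
                (d for dets in per_node_detections for d in dets),
                key=lambda d: d.distance_m,
            )

    nodes = [SensorNode("front_radar"), SensorNode("front_camera")]
    raw = {"front_radar": [42.3, 55.0], "front_camera": [41.0, 60.2]}
    detections = [node.process(raw[node.name]) for node in nodes]
    print(CentralNode().fuse(detections))   # nearest obstacle across all nodes

The bandwidth saving is the point: each node ships a handful of detections instead of a gigabit-scale raw stream, which is what makes the distributed layout efficient.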




In addition, Xilinx also released its new Vitis unified software platform at the event. Vitis can automatically adapt software or algorithm code to the Xilinx hardware architecture, freeing users from the need for deep hardware expertise; for hardware developers, it significantly boosts productivity by letting software and hardware engineers collaborate on the same tool platform. Vitis is one of the milestone products in Xilinx's strategic transformation from a device vendor into a platform company.


It is worth noting that although Xilinx has extensive R&D experience in both ADAS and autonomous driving, Dan Isaacs believes there is still a long way to go before automotive products reach full automation. Many technical functions remain at an early stage and need further breakthroughs. Achieving autonomous driving in densely populated areas, in particular, still poses many unsolved problems, and much more on-road testing is needed to improve the performance of autonomous driving systems.

