Designing an autonomous driving system chip to challenge Mobileye and Nvidia

Publisher: 蓝天飞行 | Last updated: 2022-01-27 | Source: 佐思产研

Is it a bit presumptuous to design an autonomous driving chip to challenge Mobileye and Nvidia?

 

Not necessarily. Today, autonomous driving system chips (SoCs) are mostly designed in a building-block manner, with various third-party IP blocks serving as the building blocks. As long as they are matched properly, challenging Mobileye and NVIDIA is still possible. The key lies not in technology but in continuous, massive capital injection. This is a marathon, and whoever perseveres to the end wins.

 

Mobileye's advantages are tight hardware-software integration, the shortest development cycle, the lowest development cost, and mature, stable technology. Its disadvantages are obvious product homogeneity, which leaves vehicle manufacturers little room to differentiate, and a closed system built on the MIPS instruction set with poor upgradability, which cannot keep pace with the rapid evolution of sensor technology, especially the arrival of LiDAR in large numbers.

 

NVIDIA's advantage is extremely powerful AI computing power, with enough redundancy to accommodate a variety of algorithms and the algorithm evolution of the next 3-6 years. Its GPU floating-point performance is also very strong, again with ample redundancy, giving strong visual perception capability. NVIDIA provides a complete software stack; the system is somewhat closed, but the computing power is so abundant that vehicle manufacturers can still build their own distinguishing features on top of it. The disadvantage is that the GPU occupies a large share of the die area, so hardware cost remains high.

 

In actual autonomous driving, there is much hype but little action. In 2017 and 2018, many technology companies and vehicle manufacturers vowed to launch L4 vehicles without steering wheels or brake pedals in 2020 or 2021, but nothing came of it. In 2021, many manufacturers launched mass-produced vehicles claimed to be L3/L4, yet annual sales are unlikely to exceed a thousand units. The main reason is poor cost-effectiveness: the so-called L3/L4 is essentially an enhanced version of adaptive cruise control or traffic-jam following, yet the cost is very high. The technical conditions for L3/L4 are far from mature and nowhere near mass production. In perception, challenges remain such as suddenly appearing stationary targets, non-standard obstacles, detection and recognition of low-height targets such as infants and small children, and rapid changes in light intensity; in decision-making, problems such as whether or not to give way to vehicles cutting in remain unresolved. In addition, basic conditions such as high-precision maps and positioning, and V2X for traffic lights, are not yet in place.

 

For the next 10 years, L2+ will remain the market mainstream, accounting for at least 90% of the market. High AI computing power is of little use in the L2+ field: the so-called AI computing power is essentially a convolution accelerator whose function is visual classification. Faced with suddenly appearing stationary targets, non-standard obstacles, or low-height targets such as infants and small children, accidents will still happen no matter how high the AI computing power is. Yet hardware vendors' marketing has led consumers to mistakenly equate high computing power with high-level autonomous driving.

 

Improving safety is not something that can be achieved simply by stacking AI computing power. LiDAR or stereo (binocular) cameras, which provide native 3D perception, can improve safety, but they must be combined with traditional, explainable and predictable algorithms. Deep learning is a black box: its results can be neither predicted nor explained, so it cannot be relied on to improve safety. Stereo vision is very difficult to master; only Mercedes-Benz and Toyota have done it, the result of more than ten years of in-house talent development. Most companies can only choose LiDAR.

 

LiDAR's demand for SoC computing resources can be met by the CPU, avoiding the use of a high-cost GPU.

 


 

LiDAR measures the distance to an object through laser reflection. Since the vertical angle of the laser is fixed, denoted a, the z-axis coordinate is simply sin(a)*distance, and cos(a)*distance gives the projection of the distance onto the xy plane, denoted xy_dist. While recording the distance of each reflection point, the LiDAR also records the horizontal angle b of its current rotation. From simple geometry, the x and y coordinates of the point are cos(b)*xy_dist and sin(b)*xy_dist respectively.
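
For concreteness, the conversion can be sketched in a few lines of C++. The struct and function names below are illustrative and not taken from any particular vendor's SDK.

```cpp
#include <cmath>

// A single LiDAR return: measured range plus the two angles described above.
struct LidarReturn {
    float distance;      // range from the laser reflection, in meters
    float vertical_deg;  // fixed vertical angle "a" of the laser channel
    float azimuth_deg;   // horizontal rotation angle "b" recorded with the return
};

struct PointXYZ { float x, y, z; };

// Polar-to-Cartesian conversion following the formulas in the text:
//   z       = sin(a) * distance
//   xy_dist = cos(a) * distance
//   x       = cos(b) * xy_dist,   y = sin(b) * xy_dist
PointXYZ toCartesian(const LidarReturn& r) {
    const float deg2rad = 3.14159265f / 180.0f;
    const float a = r.vertical_deg * deg2rad;
    const float b = r.azimuth_deg  * deg2rad;

    const float xy_dist = std::cos(a) * r.distance;  // projection onto the xy plane
    PointXYZ p;
    p.z = std::sin(a) * r.distance;
    p.x = std::cos(b) * xy_dist;
    p.y = std::sin(b) * xy_dist;
    return p;
}
```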

 

Velodyne VLP-16 data packet (image source: Internet)

 

The data processing of LiDAR consists of two parts. The first is coordinate transformation, including the conversion between polar coordinates and rectangular XYZ coordinates and the transformation between the LiDAR coordinate system and the vehicle coordinate system, which mainly involves trigonometric functions. The second is point cloud registration; preprocessing such as noise removal can be considered part of registration.
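
The LiDAR-to-vehicle transformation in the first part is a rigid-body transform (a rotation plus a translation). A minimal sketch using Eigen, with placeholder extrinsic values standing in for real calibration data, might look like this:

```cpp
#include <Eigen/Geometry>

// Transform a point from the LiDAR coordinate system into the vehicle
// coordinate system. The extrinsics below are placeholders; real values
// come from sensor calibration.
Eigen::Vector3f lidarToVehicle(const Eigen::Vector3f& p_lidar) {
    const float deg2rad = 3.14159265f / 180.0f;

    Eigen::Isometry3f T = Eigen::Isometry3f::Identity();
    // Example mounting: 1.5 m forward of and 1.2 m above the vehicle origin,
    // rotated 2 degrees about the z axis.
    T.translation() = Eigen::Vector3f(1.5f, 0.0f, 1.2f);
    T.rotate(Eigen::AngleAxisf(2.0f * deg2rad, Eigen::Vector3f::UnitZ()));

    // Applying T is just a 3x3 matrix multiply plus a vector addition.
    return T * p_lidar;
}
```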

 

In the early days, multiplication and floating-point computing resources were extremely scarce, so in 1959 J. Volder proposed a fast algorithm called CORDIC (COordinate Rotation DIgital Computer). It can compute common trigonometric and hyperbolic functions such as sin, cos, sinh and cosh using only shifts, additions and subtractions. J. Walther generalized the algorithm in 1971 so that it could also compute a variety of other transcendental functions, further broadening its applications. Because CORDIC uses only shifts and additions, it is easy to implement on a CPU. It was first used in navigation systems, where vector rotation and orientation no longer required trigonometric tables, multiplication, square roots or inverse trigonometric functions.

 

CORDIC obtains sine and cosine values through a sequence of successive micro-rotations, so the result is an approximation. The rotation angles are carefully chosen: the tangent of the i-th micro-rotation is exactly 1/2^i, so the required multiplication becomes a simple shift. To compute sin and cos of an angle, the vector (1, 0) is rotated step by step until the residual angle approaches 0; to compute an arctangent instead, the rotations drive the Y component toward 0 and the accumulated rotation angles give the result. The successive rotation angles are 45° (tangent 1), then 26.56505° (tangent 1/2), then 14.03624° (tangent 1/4), then angles with tangents 1/8, 1/16, 1/32..., so the residual quickly becomes arbitrarily small. Avoiding floating-point arithmetic is also simple: represent 1 degree as 256 counts, giving a resolution of 1/256 = 0.00390625 degrees, which is accurate enough for most purposes. With this representation, 45 degrees is 45*256 = 11520, and other angles are handled the same way, so only integer operations are needed.
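
As an illustration of the idea, here is a minimal integer-only CORDIC sketch that computes sine and cosine with shifts, additions and subtractions only, using the 1 degree = 256 counts angle representation described above. The table values, iteration count and Q14 scaling are choices made for this sketch, not part of any standard implementation.

```cpp
#include <cstdint>
#include <cstdio>

// Angle unit: 1/256 degree (45 deg -> 45*256 = 11520).
// x/y unit: Q14 fixed point (16384 == 1.0).
// atan(2^-i) for i = 0..11, expressed in 1/256-degree units.
static const int32_t kAtanTable[12] = {
    11520, 6801, 3593, 1824, 916, 458, 229, 115, 57, 29, 14, 7
};

// CORDIC gain compensation: product of cos(atan(2^-i)) ~= 0.607253, in Q14.
static const int32_t kCordicGain = 9949;

// Sine and cosine of 'angle' (in 1/256-degree units, 0..90 degrees here for
// simplicity), using only shifts, additions and subtractions.
void cordic_sincos(int32_t angle, int32_t* sin_q14, int32_t* cos_q14) {
    int32_t x = kCordicGain;  // start at (K, 0); the gain is pre-compensated
    int32_t y = 0;
    int32_t z = angle;        // residual angle still to be rotated through

    for (int i = 0; i < 12; ++i) {
        // Each micro-rotation has tangent exactly 2^-i, so the "multiply"
        // is an arithmetic shift. Rotate toward z == 0.
        int32_t x_new, y_new;
        if (z >= 0) {
            x_new = x - (y >> i);
            y_new = y + (x >> i);
            z -= kAtanTable[i];
        } else {
            x_new = x + (y >> i);
            y_new = y - (x >> i);
            z += kAtanTable[i];
        }
        x = x_new;
        y = y_new;
    }
    *cos_q14 = x;
    *sin_q14 = y;
}

int main() {
    int32_t s, c;
    cordic_sincos(45 * 256, &s, &c);  // 45 degrees
    // Both results should be close to 0.7071 * 16384 ~= 11585.
    std::printf("sin=%d cos=%d\n", s, c);
    return 0;
}
```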

 

The second part, point cloud registration, involves a much larger amount of computation and is what programmers mainly care about. The trigonometric calculations and coordinate transformations of the first part are generally handled by the SDK provided by the LiDAR manufacturer.

 

When it comes to point cloud registration, the PCL library is indispensable. PCL grew out of ROS, the Robot Operating System: robot systems frequently work with LiDAR 3D data, and to reduce duplicated development and bridge different platforms, ROS launched the PCL library. PCL (Point Cloud Library) is a large, cross-platform, open source C++ library built on earlier point cloud research. It implements a large number of general point cloud algorithms and efficient data structures, covering point cloud acquisition, filtering, segmentation, registration, retrieval, feature extraction, recognition, tracking, surface reconstruction, visualization and more. It runs on Windows, Linux, Android, macOS and some embedded real-time systems. If OpenCV is the crystallization of 2D information acquisition and processing, PCL holds the same status for 3D. PCL is released under the BSD license and is free for both commercial and academic use. The main institutions behind it include the Technical University of Munich (TUM) and Stanford University. In industry, besides essentially all LiDAR manufacturers supporting the PCL library, Toyota and Honda are also sponsors of the project.

 

The PCL project was launched around 2011, when the CPU was the main computing resource, so the library was optimized for the CPU. The underlying data structures in PCL make extensive use of SSE (a form of SIMD) optimization. Most of its mathematical operations are implemented with Eigen, an open source C++ linear algebra library. PCL also supports OpenMP and Intel Threading Building Blocks (TBB) for multi-core parallelization. The backbone of its fast nearest-neighbor search is provided by FLANN, a library for fast approximate nearest-neighbor search. All modules and algorithms in PCL pass data through Boost shared pointers, avoiding unnecessary copies of data that already exist in the system.
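
As an example of how PCL exposes the FLANN-backed nearest-neighbor search mentioned above, a minimal usage sketch (assuming a standard PCL installation) could look like this:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

int main() {
    // Build a tiny cloud; in practice the points come from the LiDAR driver/SDK.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    cloud->push_back(pcl::PointXYZ(1.0f, 0.0f, 0.0f));
    cloud->push_back(pcl::PointXYZ(0.0f, 1.0f, 0.0f));
    cloud->push_back(pcl::PointXYZ(0.0f, 0.0f, 1.0f));

    // FLANN-backed KD-tree, the same structure PCL uses to speed up registration.
    pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
    kdtree.setInputCloud(cloud);

    pcl::PointXYZ query(0.9f, 0.1f, 0.0f);
    const int k = 1;
    std::vector<int> indices(k);
    std::vector<float> sq_distances(k);
    if (kdtree.nearestKSearch(query, k, indices, sq_distances) > 0) {
        // indices[0] is the closest point; sq_distances[0] is its squared distance.
    }
    return 0;
}
```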

 

The two most commonly used methods for point cloud registration are the iterative closest point algorithm (ICP) and the normal distributions transform (NDT); ICP's nearest-neighbor search is typically accelerated with a KD-tree. Both algorithms are, naturally, included in the PCL library. In 2018, Intel released Open3D, an open source library similar to PCL, which is likewise optimized for the CPU.
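
A minimal registration sketch with PCL's IterativeClosestPoint is shown below; the parameter values are illustrative starting points rather than tuned recommendations.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Align a 'source' scan to a 'target' scan with PCL's ICP implementation and
// return the estimated rigid transform from source to target.
Eigen::Matrix4f alignScans(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                           pcl::PointCloud<pcl::PointXYZ>::Ptr target) {
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaxCorrespondenceDistance(1.0);  // ignore pairs farther apart than 1 m
    icp.setMaximumIterations(50);
    icp.setTransformationEpsilon(1e-8);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                     // correspondences found via the KD-tree

    if (icp.hasConverged()) {
        return icp.getFinalTransformation();
    }
    return Eigen::Matrix4f::Identity();     // fall back to identity if it fails
}
```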
