After Tesla's hammer blow, Nvidia's new chip is called "Father of Thor"

Publisher: bluepion · Last updated: 2019-12-21 · Source: EEWORLD · Keywords: Nvidia

After Tesla's "sledgehammer on the door", Nvidia released a chip called "Orin".


Leaving aside the "family-tree" jokes about the name, Jensen Huang (Huang Renxun) gave several technical parameters at GTC China 2019:


- 17 billion transistors
- 8-core, 64-bit CPU
- 200 TOPS of deep-learning compute
- Supports development from L2 through L5 autonomous driving
- Meets ISO 26262 ASIL-D and other functional-safety standards
- Start of production (SOP) in 2022


Interpretation:

Compared with the previous-generation system-on-chip Xavier, Orin delivers nearly 7x the computing power and remains backward compatible with Xavier.

Orin runs Nvidia's new-generation GPU architecture and Arm Hercules CPU cores side by side to improve fault tolerance.

A low-cost version for OEMs can serve single-camera L2 autonomous driving while still using the software stack of the entire autonomous-driving product line.


"Leather Jacket Huang" condensed the pitch into three keywords: "scalable", "programmable" and "software-defined".

However, compared with the "nuclear-bomb" release cadence of previous years, when a new product landed every few minutes, a single small SoC could not stop the more than 6,100 attendees from asking: how come Nvidia, a company that makes chips, showed only one hardware product?

It left Huang Renxun, dressed on stage exactly as he was last year, looking hardcore but not quite sexy enough.

What's more, the automotive numbers in the company's just-released Q3 financial report were not good. Although reporters generally held their tongues during the group interviews, that could not mask the slide brought on by the industry-wide weakness in autos: public data show that after seven straight quarters of growth, Nvidia's automotive business stalled in Q3 2019, declining 6% year-on-year.


Bear in mind that the automotive business had grown 63% year-on-year over the preceding three years. Now, as automakers cut costs and slow their investment in automotive electronics and autonomous driving, a J.D. Power report projects that by 2034 such budgets will account for only about 10% of overall auto sales.


Is Nvidia, the biggest beneficiary of the explosive growth in automotive electronics, about to bid farewell to the century-old automotive industry?

Computing power = hardcore?

From a technical perspective, NVIDIA has always insisted that its own GPU is the perfect architecture for deep learning.

(“With the end of Moore’s Law, it is now generally accepted that GPU-accelerated computing will be the future.”

“Great chips are just the starting point.”)


That said, perhaps because of the power-consumption problem once a chip is "installed in the car", Nvidia plans to build its own CPU cores from scratch on an Arm architecture license, gradually iterating future products from supplier-sourced designs to in-house ones.


After all, when Tesla's FSD chip first launched, Nvidia countered with the 320 TOPS of its dual-chip AGX Pegasus. The more practical problem: against the FSD computer's roughly 200 W, Nvidia's dual-chip layout draws up to 500 W. Some outlets calculated that this is equivalent to an electric car burning an extra half kilowatt-hour per hour.
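The arithmetic behind that media estimate is simple: a platform drawing 500 W drains half a kilowatt-hour for every hour it runs, about 0.3 kWh/h more than a 200 W FSD computer. A quick sketch of the calculation (the wattages are the figures reported above, not measured data):

```python
# Back-of-the-envelope check of the energy figures quoted above.
# A device drawing P watts for one hour consumes P / 1000 kilowatt-hours.
def kwh_per_hour(watts):
    return watts / 1000.0

PEGASUS_W = 500  # reported draw of the dual-chip AGX Pegasus layout
FSD_W = 200      # reported draw of Tesla's FSD computer

print(kwh_per_hour(PEGASUS_W))            # 0.5 kWh consumed per hour of driving
print(kwh_per_hour(PEGASUS_W - FSD_W))    # 0.3 kWh/h more than the FSD computer
```

On a typical 60-75 kWh battery pack, that extra draw is a small but real tax on range, which is why the comparison stung.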


This provides a good answer to our question: For autonomous driving chips, does strong computing power mean super hardcore?


The answer is obviously no. Computing power cannot serve as the sole yardstick of AI-chip performance. In real use, under mixed workloads, and especially when comparing edge-side chips with cloud-side chips, the effective performance of the computing core should in theory be the more important criterion.


In addition, AI acceleration hardware depends heavily on memory bandwidth. Under the same algorithm and computing workload, how efficiently the computing core uses that bandwidth therefore determines overall system performance. According to reports, Nvidia's own NVLink 2.0 multi-chip interconnect currently provides 100 GB/s of bandwidth, while Tesla's FSD chip pairs its compute with SRAM offering 2 TB/s.
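A crude roofline-style calculation shows why bandwidth, not peak TOPS, often bounds real throughput. The numbers below are illustrative assumptions chosen to echo the figures above, not measured results for either chip:

```python
# Roofline sketch: achievable throughput is capped by the lesser of
# peak compute and (memory bandwidth x arithmetic intensity).
def achievable_tops(peak_tops, bandwidth_gbs, ops_per_byte):
    # Throughput the memory system can feed, converted from GOPS to TOPS.
    bandwidth_bound = bandwidth_gbs * ops_per_byte / 1000.0
    return min(peak_tops, bandwidth_bound)

# Hypothetical workload performing ~200 ops per byte fetched.
# A 320-TOPS part fed over a 100 GB/s link is bandwidth-bound at 20 TOPS...
print(achievable_tops(320, 100, 200))    # 20.0

# ...while the same part behind 2 TB/s of SRAM bandwidth is compute-bound.
print(achievable_tops(320, 2000, 200))   # 320
```

In this toy model the 320-TOPS chip realizes only a fraction of its rated compute when starved for bandwidth, which is exactly the gap Musk's "usable TOPS" complaint points at.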


In other words, the 320-TOPS AGX Pegasus holds little value for Tesla, which is straining to cut the operating cost of its fully autonomous Robotaxi project. Musk put it bluntly: "The key is how many TOPS of the chip can actually be used for the image processing and behavior prediction that autonomous driving needs."


Compared with Tesla's FSD chip, tailor-made for its own cars with every metric precisely tuned, Nvidia's voice in this contest is noticeably weaker. To some extent, Tesla represents the real needs of most OEMs in the market.


After all, there are too many dedicated chips for fixed functions in the automotive industry. They are small in size, low in cost, and low in power consumption, and these characteristics are almost exactly what car manufacturers want.


But in Nvidia's eyes that is a synonym for "low programmability". Dedicated chips cannot handle the complex workloads of high-level autonomous driving: fusing data from a dozen or more cameras, millimeter-wave radars, lidars and other sensors while leaving enough safety redundancy demands at least several hundred TOPS of computing power.


"When it comes to deep learning training through neural networks, there are only two artificial intelligence supercomputers. One is from Tesla and the other is from Nvidia," said Danny Shapiro, senior director of Nvidia's automotive business unit.


According to him, Nvidia is the only company to have submitted entries in every benchmark category and ranked first in all of them, and its full software stack supports all mainstream AI frameworks. Bear in mind that achieving autonomous driving requires, beyond software and hardware, a great deal of pre-processing and post-processing work.

Take the "new face" Orin as an example: its design combines multiple processors, including a GPU, Arm-architecture CPU cores, programmable accelerators and codecs. Because both Orin and Xavier can be programmed through open CUDA, the TensorRT API and a range of libraries, developers can carry a single investment across multiple product generations.


Obviously, unlike OEMs that control costs strictly and want delivery the moment mass production starts, Nvidia's current customer base leans toward technology companies committed to high-level autonomous driving; only they can maximize Nvidia's value by iterating software continuously on the same development board.


Huawei is chasing the same wave of customers. Just four months earlier, Xu Zhijun unveiled two AI chips, the Ascend 910 and Ascend 310, beneath a giant screen reading "the world's most powerful chip". The MDC in-vehicle computing unit built on them likewise took aim at Nvidia's dominance of the chip industry.


According to the description at the time, the Ascend 910 delivers 256 TOPS, while the Ascend 310 offers less computing power but balances it against power consumption, making it better suited to self-driving cars. "The Ascend 910 is benchmarked against Google's and Nvidia's AI chips for training models. Pricing has not been finalized, but it will definitely undercut Nvidia and Google," Xu Zhijun said with a smile.

Ecosystem: business acumen

In fact, what Xu Zhijun is targeting is not just computing power but Nvidia's deepest moat.


In Q1 2020, MindSpore will be officially open-sourced to "build its own ecosystem, the way Nvidia did."


This explains why Huang Renxun could bring just one hardware product and still feel no urgency. Inside the company, software developers already far outnumber hardware engineers. Danny Shapiro likewise noted that the point of the NVIDIA DRIVE system is to supply customers with pre-trained models, opening up the ecosystem through the GPU cloud.


By 2019, Nvidia, having sold 1.5 billion GPUs, no longer needed to hide its ambition. Every GPU in use on the market is CUDA-compatible, and behind that platform sit rich libraries, tools and applications. In 2018 Nvidia released more than 500 SDKs and libraries, lifting GPU performance through software-stack optimization: deep-learning training sped up fourfold in three years and inference doubled in one year.

Now NVIDIA has decided to go further and open-source the NVIDIA DRIVE deep neural networks for autonomous-vehicle development to the transportation industry.


In other words, with Nvidia's open-source pre-trained AI models and training code, any autonomous-vehicle developer willing to join the ecosystem can freely extend and customize the models using a set of NVIDIA AI tools, improving the robustness and capability of their autonomous-driving systems.


The offering runs on the deep-neural-network core of the DRIVE AGX platform and comprises dozens of DNNs handling redundant and diverse tasks to ensure accurate perception, localization and path planning: traffic-light and traffic-sign detection, object detection (vehicles, pedestrians, bicycles), path perception, and in-cabin eye tracking and gesture recognition.


Beyond open-sourcing the networks themselves, NVIDIA has also released a set of advanced tools that let developers customize and enhance these deep neural networks with their own datasets and target feature sets. The toolset uses active learning, federated learning and transfer learning to train the networks.
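Of those three techniques, federated learning is the easiest to sketch in a few lines: each party trains on its own data and only the resulting weights are averaged centrally. The following is a minimal, framework-free illustration of that averaging step, not part of NVIDIA's actual toolchain; all weights are made up:

```python
# Federated averaging: each client trains locally, then a server averages
# the clients' model weights without ever seeing their raw data.
def federated_average(client_weights):
    """client_weights: list of per-client weight vectors (lists of floats)."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [
        sum(w[i] for w in client_weights) / n_clients
        for i in range(n_params)
    ]

# Three hypothetical clients' locally trained weights for a 3-parameter model.
clients = [
    [2.0, 4.0, 6.0],
    [4.0, 4.0, 8.0],
    [6.0, 4.0, 10.0],
]
print(federated_average(clients))  # [4.0, 4.0, 8.0]
```

The appeal for carmakers is that fleet data never leaves each participant, only the averaged model does, which is presumably why the technique features in a cross-company ecosystem pitch.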


Nvidia's popularity among startups may not have been convincing on its own, but the buy-in of internet giant Didi Chuxing says it all.
