Why Tesla "abandoned" millimeter-wave radar

Publisher: upsilon30 | Last updated: 2021-06-01

Recently, Tesla's official blog announced that Autopilot is transitioning to the camera-based Tesla Vision system.

 

Starting in May 2021, Model 3 and Model Y vehicles built for the North American market will no longer be equipped with millimeter-wave radar. These models will rely on Tesla's camera vision and deep neural networks to support Autopilot, FSD (Full Self-Driving) and certain active safety features.

 


 

The forward-facing radar costs roughly RMB 300 per unit, and Tesla sold over 450,000 vehicles in 2020. For Continental, Tesla's millimeter-wave radar supplier and a top Tier 1, losing an order worth more than RMB 100 million a year is not pleasant news.

 

Removing the millimeter-wave radar


Although Tesla explicitly stated that computer vision and deep neural network processing can meet the perception needs of active safety, Autopilot and FSD, other parties responded as soon as the blog post was published.

 

The U.S. National Highway Traffic Safety Administration (NHTSA) updated its active safety feature page for the 2021 Model 3 and Model Y, clearly stating that forward collision warning (FCW), crash imminent braking (CIB) and dynamic brake support (DBS) would no longer be listed as equipped on vehicles produced after April 27, 2021.

 


 

At the same time, Consumer Reports announced it would suspend the 2021 Model 3's "recommended" designation, and the Insurance Institute for Highway Safety (IIHS) rescinded the Model 3's previous top safety rating of Top Safety Pick+.

 

In short, Tesla said, "We removed the millimeter-wave radar and reproduced its capabilities with the cameras," but everyone only heard the first half of the sentence.

 

In my opinion, the major consumer and regulatory safety agencies have become somewhat allergic to Tesla. In fact, the history of Mobileye, the world's largest visual perception supplier, is one of gradually moving radar out of automotive active safety.

 

In 2007, Mobileye's active safety technology entered the automotive industry for the first time.


In 2010, Mobileye's AEB, fusing radar and camera, went into mass production on Volvo vehicles.


In 2011, Mobileye's pure-vision forward collision warning (FCW) went into mass production for the BMW, GM and Opel brands.


In 2013, Mobileye's pure-vision vehicle and pedestrian automatic emergency braking (AEB) went into mass production for the BMW and Nissan brands.


In 2013, Mobileye's pure-vision adaptive cruise control (ACC) went into mass production for the BMW brand.


In 2015, Mobileye's pure-vision full-function AEB went into production with multiple OEMs.

 

As things escalated, Tesla CEO Elon Musk had to respond through Electrek: all active safety features remain effective on the new models, NHTSA would retest the new vehicles the following week, and the radar-less cars come standard with these features.

 


 

But public doubts have not been dispelled. For example, radar excels at measuring the distance and speed of obstacles, traditionally a weakness of cameras. How does Tesla solve this problem?

 

Or: aren't two sensors better than one? Even if the camera can do the radar's job, wouldn't it be better to use both sensors together?

 

Let’s talk about these issues below.

 

Computer Vision + RNN > Radar?


We need to first understand the technical principles of radar and the role it plays in autonomous driving.

 

Millimeter-wave radar transmits electromagnetic waves and receives the signals reflected by targets, from which it derives the relative speed, relative distance, angle and direction of motion of obstacles around the vehicle.
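
To make those quantities concrete, here is a back-of-envelope sketch of how a typical automotive FMCW radar converts a measured beat frequency and Doppler shift into range and relative speed. This is my own illustration, not Tesla's or Continental's implementation, and the 77 GHz carrier and chirp parameters are assumed round numbers.

```python
# Back-of-envelope FMCW radar math: how beat frequency and Doppler shift
# map to range and relative radial velocity. Parameters are illustrative only.

C = 3e8            # speed of light, m/s
F_CARRIER = 77e9   # typical automotive radar carrier, Hz (assumed)
BANDWIDTH = 300e6  # chirp sweep bandwidth, Hz (assumed)
CHIRP_TIME = 50e-6 # chirp duration, s (assumed)

def range_from_beat(f_beat: float) -> float:
    """Target range from the measured beat frequency of one chirp."""
    return C * f_beat * CHIRP_TIME / (2 * BANDWIDTH)

def velocity_from_doppler(f_doppler: float) -> float:
    """Relative radial velocity from the Doppler shift between chirps."""
    wavelength = C / F_CARRIER
    return f_doppler * wavelength / 2

if __name__ == "__main__":
    print(f"range ~ {range_from_beat(2.0e6):.1f} m")          # 2 MHz beat -> ~50 m
    print(f"speed ~ {velocity_from_doppler(5.1e3):.1f} m/s")  # 5.1 kHz Doppler -> ~10 m/s
```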

 


 

By processing this information, cars can offer a series of active safety features, such as adaptive cruise control (ACC), forward collision warning (FCW), lane change assistance (LCA), stop-and-go following (S&G) and even blind spot detection (BSD).

 

So how does Tesla obtain this information from cameras alone, for example, judging the distance to the car ahead?

 

On August 21, 2020, Elon Musk said on Twitter that accurate distance estimation from pure vision is the foundation; other sensors can help, but they are not the foundation. The blog post he was replying to introduced a Tesla patent titled "Estimating Object Properties Using Image Data".

 


 

On April 13, Tristan Rice, a Tesla Model 3 owner and a software engineer working on distributed AI and machine learning at Facebook, dug into Autopilot's firmware and revealed technical details of how Tesla replaces radar with machine learning.

 


 

According to Tristan, the new firmware binary shows that Autopilot's deep neural network has gained many new outputs: in addition to the existing xyz outputs, it now emits quantities traditionally produced by radar, such as distance, velocity and acceleration.

 

Can a deep neural network read velocity and acceleration from a static image? Of course not.

 

Tesla trained a highly accurate RNN to predict the velocity and acceleration of obstacles based on time-series videos at 15 frames per second.

 

What is an RNN? The key word is prediction. A recurrent neural network, as the name implies, passes information through a loop: its "internal memory" lets it process input sequences of arbitrary length and predict what happens next.

 


 

Nvidia's AI blog once gave a classic example: suppose a restaurant's menu never changes: burgers on Monday, tacos on Tuesday, pizza on Wednesday, sushi on Thursday, and pasta on Friday.

 

If we feed an RNN "sushi" and ask what will be served on Friday, it predicts: pasta. The RNN has learned the sequence; Thursday's dish has just passed, so the next item, Friday's, is pasta.

 

For Autopilot's RNN, given the recent motion of pedestrians, vehicles and other obstacles around the car, it can predict their next trajectory, including position, velocity and acceleration.
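
As a rough illustration of the idea, and not Tesla's actual architecture, a recurrent model of this kind could be sketched as follows in PyTorch; the feature layout, network size and 15-frame window are assumptions.

```python
import torch
import torch.nn as nn

class ObjectStateRNN(nn.Module):
    """Toy recurrent model: consumes a short clip of per-frame object
    features and regresses distance, velocity and acceleration.
    Sizes and feature layout are illustrative, not Tesla's."""

    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # [distance, velocity, acceleration]

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, feat_dim), e.g. 15 frames ~ 1 s of video at 15 fps
        _, h_last = self.rnn(clip)
        return self.head(h_last[-1])

model = ObjectStateRNN()
clip = torch.randn(2, 15, 64)   # 2 tracked objects, 15 frames each
print(model(clip).shape)        # torch.Size([2, 3])
```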

 

In fact, for months before the May 25 announcement that radar would be removed, Tesla had been running the RNN in parallel with radar across its global fleet, using the radar's measurements as a reference to calibrate and improve the accuracy of the RNN's predictions.
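
In principle, running the two in parallel like this can be as simple as logging frames where their estimates diverge and feeding those cases back into training. The sketch below is purely illustrative, with invented field names and thresholds; it is not Tesla's pipeline.

```python
# Hypothetical parallel-run check: the vision RNN runs alongside radar and
# frames where the two disagree strongly are flagged for data collection.
# Field names and thresholds are invented for illustration.

def flag_disagreements(samples, dist_tol=2.0, vel_tol=1.0):
    """Yield samples where radar and vision estimates diverge."""
    for s in samples:
        if (abs(s["radar_dist"] - s["vision_dist"]) > dist_tol
                or abs(s["radar_vel"] - s["vision_vel"]) > vel_tol):
            yield s

fleet_log = [
    {"radar_dist": 41.8, "vision_dist": 42.1, "radar_vel": -3.1, "vision_vel": -3.0},
    {"radar_dist": 60.0, "vision_dist": 53.5, "radar_vel": -8.0, "vision_vel": -2.4},
]
print(list(flag_disagreements(fleet_log)))  # only the second sample is flagged
```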

 

Incidentally, for cut-in handling, a classic challenge under China's traffic conditions, Tesla has also achieved better performance through a similar change of technical approach.

 

Andrej Karpathy, Tesla's senior director of AI, revealed in an online talk at CVPR 2021 that Tesla has replaced traditional rule-based algorithms with deep neural networks for cut-in recognition.

 


 

Specifically, Autopilot previously detected cut-ins with a hard-coded rule: it first identified the lane lines while detecting and tracking the vehicle ahead (as a bounding box), and only triggered the cut-in response when the lead vehicle's lateral speed exceeded a set threshold.
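
A simplified version of such a rule might look like the sketch below; the threshold value and the inputs are my own assumptions, not Autopilot's actual parameters.

```python
# Simplified hand-written cut-in rule: if the lead vehicle's lateral speed
# toward our lane exceeds a threshold, treat it as a cut-in.
# Threshold and inputs are illustrative, not Autopilot's actual values.

LATERAL_SPEED_THRESHOLD = 0.5  # m/s toward the ego lane (assumed)

def is_cut_in(lateral_offset_now: float, lateral_offset_prev: float, dt: float) -> bool:
    """lateral_offset_*: signed distance (m) from the lead car to our lane line;
    values shrinking toward zero mean the car is drifting into our lane."""
    lateral_speed = (lateral_offset_prev - lateral_offset_now) / dt
    return lateral_speed > LATERAL_SPEED_THRESHOLD

print(is_cut_in(0.8, 1.0, dt=0.25))  # 0.8 m/s toward us -> True
```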

 

Today, Autopilot's cut-in recognition has dropped these rules and relies entirely on an RNN trained on massive amounts of labeled data to predict the lead vehicle's behavior. If the RNN predicts the vehicle ahead will cut in, the cut-in response is executed.

 

This is the technology behind Tesla's dramatic improvements in cut-in recognition over the past few months.

 

The Tesla patent mentioned above explains in detail how Tesla trains the RNN.

 

Tesla associates ground-truth data from radar and lidar (not the production fleet, but Tesla's internal fleet equipped with Luminar lidar) with the objects identified by the RNN, so the network learns to accurately estimate object properties such as distance.

 


 

During this process, Tesla built tools to automatically collect the auxiliary sensor data and associate it with the visual data, without manual labeling. Once associated, training data can be generated automatically to train the RNN, yielding highly accurate predictions of object attributes.
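
The core of such automatic association can be pictured as projecting each radar or lidar return into the image and matching it to the nearest detected object; the toy version below uses invented names and a pixel-distance gate purely for illustration, not the method described in the patent.

```python
# Toy version of automatic label association: project each radar/lidar return
# into the image, match it to the nearest detected bounding box, and emit a
# (image crop, measured distance) training pair. All names are illustrative.

def associate(detections, radar_points, max_px=40.0):
    """detections: [(cx, cy, crop_id)], radar_points: [(u, v, dist_m)]."""
    labels = []
    for u, v, dist in radar_points:
        best = min(detections, key=lambda d: (d[0] - u) ** 2 + (d[1] - v) ** 2, default=None)
        if best and ((best[0] - u) ** 2 + (best[1] - v) ** 2) ** 0.5 <= max_px:
            labels.append({"crop": best[2], "distance_m": dist})
    return labels

dets = [(410, 260, "car_0"), (820, 300, "car_1")]
returns = [(415, 255, 37.2), (1200, 500, 8.9)]   # second return matches nothing
print(associate(dets, returns))  # [{'crop': 'car_0', 'distance_m': 37.2}]
```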

 

With a global fleet of more than one million vehicles, Tesla can rapidly improve the RNN's performance by training on massive amounts of scene data.

 

Once the RNN's prediction accuracy reaches the same level as the radar's output, it holds a huge advantage over millimeter-wave radar.

 

That is because Autopilot only ever had a forward-facing radar, which struggles to track the pedestrians, cyclists and motorcyclists moving in every direction around the car in urban conditions. Even within its roughly 45° field of view, the radar Autopilot previously used could not distinguish two obstacles directly ahead if they were at the same distance and moving at the same speed.

 

By contrast, Autopilot's eight cameras provide 360-degree coverage around the vehicle, and its bird's-eye-view (BEV) neural network can seamlessly predict the next trajectory of multiple obstacles in any direction.

 


 

So why doesn't Tesla keep the radar and use both radar and camera for cross-verification?

 

Elon Musk has explained his view of radar versus cameras in detail: at radar wavelengths, the real world looks like a strange ghost world; almost everything is translucent except metal. When radar and vision disagree, which one do you trust? Vision has much higher precision, so investing twice the effort in improving vision is wiser than betting on fusing the two sensors.

 

Sensors are essentially bit streams. Cameras carry orders of magnitude more bits per second of information than radar or lidar. Radar must add meaningfully to the signal-to-noise ratio of the combined bit stream to be worth integrating.
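
A back-of-envelope calculation shows why the gap is plausibly "orders of magnitude"; all of the sensor figures below are assumed round numbers for illustration, not Tesla part specifications.

```python
# Rough data-rate comparison behind the "orders of magnitude" claim.
# All sensor specs are assumed round numbers, not Tesla part specifications.

camera_bps = 8 * (1280 * 960) * 12 * 36   # 8 cameras, ~1.2 MP, 12-bit, 36 fps
radar_bps = 64 * 4 * 32 * 20              # 64 targets x 4 values x 32-bit x 20 Hz

print(f"cameras ~ {camera_bps / 1e9:.1f} Gbit/s")
print(f"radar   ~ {radar_bps / 1e6:.3f} Mbit/s")
print(f"ratio   ~ {camera_bps / radar_bps:,.0f}x")
```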

 

As visual processing capabilities improve, camera performance will far surpass current radars.

 

This statement is rather subtle. In our earlier article "Tesla: I endorse LiDAR", we covered Elon Musk's attitude toward millimeter-wave radar; in the remarks above, he still did not sentence radar to death at Tesla.
