Why did the Audi A8 flagship model give up L3 autonomous driving?

Publisher: tetsika | Last updated: 2020-05-07 | Source: eefocus

With autonomous driving technology developing rapidly, the industry finally seems poised to reap its rewards.

 

L2 assisted driving is now widely deployed, and its low-hanging fruit has largely been picked, so automakers are climbing toward the higher level of L3 autonomous driving. Yet just as many of them press eagerly ahead, Audi, Volkswagen's premium German brand and a long-time leader in the field, has become the first to step off the L3 track.

 

Recently, Audi's head of technical development, Hans, said that Audi has abandoned its plan to introduce L3 autonomous driving in the next-generation A8 flagship. Keep in mind that Audi began researching L3 as early as 2011, and in 2017 it was the first to ship an L3 feature, Traffic Jam Pilot (TJP), in the Audi A8.

In practice, however, the technology has rarely been used. The reason is simple: Audi has been waiting for countries to enact L3 regulations. To date, international regulators have not even reached consensus on an approval process for basic L3 functions, and several of Audi's major markets have issued no L3 road policies at all. With the new A8 launching next year, Audi could wait no longer.

 

In contrast to Audi's hesitation, many of its peers are charging into the L3 market. BMW and Mercedes-Benz, the other two members of the BBA camp, are stepping up R&D and plan to launch their own L3 models this year and next. And with Audi standing aside, some Chinese automakers have recently begun squabbling over the title of "the world's first mass-produced L3".

 

You may wonder: do car companies misunderstand what L3 means?

 

Yes, and the misunderstanding is deliberate. Until national policies and regulations take effect, most companies' L3 technology can live only in brochures or internal systems. That is why slogans like "L2.5" and "L2.99" appear in some companies' marketing: they want the cachet of the L3 concept without crossing the regulatory red line, so they resort to word games.

 

Why, then, was Audi so "honest" about dropping L3 from the new-generation A8? There may be many answers, but they ultimately point to one question: what problems can L3 autonomous driving not solve?

 

The national standard has been issued, but the L3 autonomous driving standard is still "in doubt"

 

On March 9 of this year, the Ministry of Industry and Information Technology issued China's national standard "Automotive Driving Automation Classification", scheduled to take effect on January 1, 2021. Like the widely used standard formulated by SAE in the United States, it classifies driving automation into levels L0 through L5.

The national standard's definition of L3 matches SAE's: autonomous driving under limited conditions. That is, within the operating conditions specified by the automated driving system, the vehicle itself handles steering, acceleration and braking, as well as detecting and responding to road conditions; under those conditions the driver may hand control entirely to the vehicle, but must take over "when necessary".

 

In other words, in the L3 state the driver need not monitor the vehicle at all times and may play games, work, or even rest their eyes, but must remain able to take over the driving task whenever "necessary".
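The L0–L5 taxonomy described above boils down to two questions: who performs the driving task, and who must watch the road. A minimal sketch (illustrative only; the field names and level labels here are assumptions, not taken from the GB or SAE text):

```python
# Illustrative sketch, not any official API: the L0-L5 taxonomy reduced to
# who drives and who must monitor. L3 is the watershed: the system drives
# and the driver may disengage, yet must still answer takeover requests.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    system_drives: bool          # does the system steer/accelerate/brake?
    driver_must_monitor: bool    # must the human watch the road at all times?
    driver_must_take_over: bool  # must the human respond to takeover requests?

LEVELS = [
    AutomationLevel(0, "No automation",          False, True,  False),
    AutomationLevel(1, "Driver assistance",      False, True,  False),
    AutomationLevel(2, "Partial automation",     True,  True,  False),
    AutomationLevel(3, "Conditional automation", True,  False, True),
    AutomationLevel(4, "High automation",        True,  False, False),
    AutomationLevel(5, "Full automation",        True,  False, False),
]

def driver_may_disengage(level: int) -> bool:
    """True if the driver need not continuously monitor the road."""
    lv = LEVELS[level]
    return lv.system_drives and not lv.driver_must_monitor

print(driver_may_disengage(2))  # False: at L2 the driver must still monitor
print(driver_may_disengage(3))  # True: at L3 the driver may disengage
```

The awkwardness of L3 is visible in the table itself: it is the only row where the driver may stop monitoring yet must still be ready to take over.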

 

The problem lies precisely in this "necessary takeover". It puts the driver in an awkward position: should I relax and rest, or sit waiting for this self-driving beast to "call" me?

 

The "necessary" moment defined by the self-driving car is like the second boot in the classic sketch "Throwing the Boot": it drops at some unknown time, and drivers can only wait for the "critical moment". After all, they are traveling at tens of kilometers per hour, with their personal safety on the line.

 

This runs counter to the original intent of L3. The technology is supposed to free drivers from the driving task, yet the constant worry about "taking over" makes them even more vigilant, which is clearly counterproductive.

Of course, this "necessary takeover" moment may not be as thrilling as outsiders imagine. Car companies naturally have corresponding technical preparations and solutions. So in L3 autonomous driving, how should car companies ensure the personal safety of drivers and passengers, and how do they design this "necessary takeover" moment?

 

Can creating dual redundancy and strong reminders “safeguard” L3 autonomous driving?

 

If car companies want consumers to entrust their personal safety to L3 cars, the vehicle's autonomous driving system must be safe and reliable enough. The solution is no mystery: invest enough in the vehicle itself and give the system a solid enough "foundation".

 

The current common practice in the industry is that the vehicle's autonomous driving system must have a dual redundant design in terms of perception, decision-making and control. That is, all key links of autonomous driving, including sensing, decision-making and execution, are equipped with two sets of software and hardware to ensure that when one of the sets fails, the autonomous driving system can still operate normally.

 

For example, Waymo has already achieved dual redundancy in its system's power supply, positioning, perception, controllers, and actuators. Bosch, as a Tier 1 supplier, has built corresponding redundancy into the four key technical links of its L3 solution: perception, positioning, decision-making, and execution.

Of course, the scope of redundant design is very wide. Some radical car companies are designing the entire vehicle system with dual redundancy, but the cost is likely to get out of control. Considering the balance between system stability and cost control, not all car players will equip the entire system with dual redundancy design. Adding redundancy to the system by "stacking" will increase the cost of software and hardware, while also posing challenges to the entire vehicle architecture.

 

Currently, more car companies are mainly implementing dual redundancy in key links, such as implementing dual redundant systems in sensors and computing chips. The minimum goal is to ensure that no serious consequences will occur after the current system fails.
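The redundancy idea described above is a classic failover pattern: two independent channels for one function, with the backup taking over when the primary fails. A hypothetical sketch (all names and values here are illustrative, not a real automotive API):

```python
# Hypothetical sketch of "dual redundancy" in a key link: a primary and a
# backup channel for the same function, with automatic failover. Not a
# real automotive stack; sensor names and readings are invented.

class ChannelFailure(Exception):
    """Raised when one channel of a redundant pair stops working."""

class RedundantChannel:
    def __init__(self, primary, backup):
        self.primary = primary
        self.backup = backup

    def read(self):
        # Try the primary first; on failure, degrade gracefully to the
        # backup instead of losing the function entirely.
        try:
            return self.primary()
        except ChannelFailure:
            return self.backup()

def primary_radar():
    # Simulate the failure case the article worries about.
    raise ChannelFailure("primary radar offline")

def backup_radar():
    return {"obstacle_ahead": True, "distance_m": 42.0}

perception = RedundantChannel(primary_radar, backup_radar)
print(perception.read())  # the backup keeps perception alive
```

This also makes the cost trade-off concrete: every link made redundant doubles its hardware and software, which is why most automakers reserve the pattern for the key links rather than the whole vehicle.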

 

Dual redundancy, however, is only a fallback: as long as the primary system works, the driver never notices it. What drivers fear more than a breakdown is the self-driving car suddenly meeting a situation it cannot handle in complex traffic; if the driver then fails to respond in time, an accident is likely.

 

Strong reminders are therefore an even more essential design. A strong-reminder system includes intelligent alert sounds, warning lights, in-cabin camera monitoring, seat-belt warnings, and so on. In Audi's L3 system, for example, once the vehicle asks you to take over: if you are watching a movie or on the phone, the system automatically pauses the media, sounds a takeover alert, and tightens your seat belt; if you are resting or asleep, the vehicle tries to leave you 15 seconds' notice to retake the wheel. If you still do not respond, the vehicle brakes automatically; if there is still no response, it calls emergency services for you and checks your physical condition through the in-cabin camera.
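The escalating sequence just described can be sketched as a simple decision function. This is an illustration of the logic as reported in the article, not Audi's production code; the action names and the exact 15-second threshold are assumptions:

```python
# Sketch of the escalating takeover sequence described above. Action names
# are invented; the 15-second grace period follows the article's figure.

def takeover_sequence(driver_responds_after_s):
    """Return the ordered actions the vehicle takes.

    driver_responds_after_s: seconds until the driver responds,
    or None if the driver never responds.
    """
    actions = ["pause_media", "sound_alert", "tighten_seatbelt"]
    grace_period_s = 15  # window to retake the wheel
    responded = (driver_responds_after_s is not None
                 and driver_responds_after_s <= grace_period_s)
    if responded:
        actions.append("driver_takes_over")
        return actions
    # No response within the window: escalate step by step.
    actions += ["automatic_braking", "emergency_call", "cabin_camera_check"]
    return actions

print(takeover_sequence(5))     # driver retakes control in time
print(takeover_sequence(None))  # full escalation to the emergency call
```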

 

Although dual redundancy and strong reminders have arguably pushed technical safety beyond the level of human driving, L3 still has to answer the "Schrödinger" takeover question: when a safety accident happens, who is responsible?

 

Who is responsible? The "Schrödinger" problem of L3 implementation

 

Under the driving-automation classification standard, L3 is the watershed. At L0–L2, the human driver is always the primary actor and the responsible party, with the system merely assisting; even if a traffic accident occurs, the human driver is liable. At L3, however, the automated system becomes the vehicle's primary actor, while the human is still responsible for the driving state and its consequences after taking over. This creates two subjects of responsibility, introducing many uncertainties into the driving process.

 

In theory, of course, this question is very simple. When a traffic accident occurs, if the vehicle is in L3 autonomous driving state, the responsibility lies with the car manufacturer. If the vehicle is in human driving state, the responsibility lies with the human.

 

But what about the gray area in between? What if the automated system fails to detect a sudden road condition and never issues its reminder? What if the system does issue the reminder, but the driver fails to respond in time and an accident occurs? And if the driver spots a misjudgment by the system, forcibly intervenes, and still cannot avoid the accident, is that the driver's fault or the system's?
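The "simple in theory" rule and its gray zones can be made concrete in a toy function. This is purely an illustration of the article's argument, not a legal rule or any real standard; it returns None exactly in the disputed cases raised above:

```python
# Toy illustration of the liability rule as the article states it. The mode
# names are invented; None marks the gray zones where the rule breaks down.

def liable_party(mode, reminder_issued=True, driver_responded_in_time=True):
    if mode == "L3_autonomous":
        return "manufacturer"   # vehicle was in the L3 state
    if mode == "human_driving":
        return "driver"         # human was in control
    if mode == "handover":
        if not reminder_issued:
            return None         # system never warned: disputed
        if not driver_responded_in_time:
            return None         # warned, but no timely response: disputed
        return "driver"
    raise ValueError(f"unknown mode: {mode}")

print(liable_party("L3_autonomous"))          # manufacturer
print(liable_party("handover", reminder_issued=False))  # None: gray zone
```

The point of the sketch is how much real-world territory falls into the branches that return None.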

 

This problem has already surfaced in reality. In multiple Tesla accidents involving casualties, the owners were generally using Tesla's Autopilot and were distracted when the crash occurred. But because Tesla states up front that Autopilot is only an assistance system, the driver bears final responsibility for the car's driving behavior.
