Exclusive interview with Gill Pratt, head of Toyota Research Institute: Challenges and practical issues facing autonomous driving

Latest update time: 2017-01-29

Leifeng.com: After the 2015 DARPA (Defense Advanced Research Projects Agency) Robotics Challenge, Gill Pratt led the establishment of the Toyota Research Institute (TRI) in the United States. The institute will invest $1 billion over the next five years in robotics and artificial intelligence research.

As expected, TRI will focus on autonomous driving technology: Toyota, like other automakers, is very interested in how to use autonomous systems to make cars safer, more efficient and more comfortable.

IEEE Spectrum caught up with Gill Pratt during CES to discuss the following topics:

  • No manufacturer’s autonomous driving technology is close to Level 5;

  • The dilemma of excessive trust in autonomous driving technology;

  • What makes a self-driving car good enough?

  • Machine learning and “The Car Can Explain” project;

  • Simulate the crazy things a human driver might do;

  • Computer hardware needs a revolution;

  • Man and machine: who should guard whom?

Leifeng.com has edited and compiled the interview below without altering its substance.

1. No manufacturer’s autonomous driving technology is close to Level 5

Q: At the Toyota press conference, you said that Level 5 (SAE) autonomous driving is "just a beautiful goal, and no company in the automotive or IT industry has autonomous driving technology close to this level." This is completely different from what we heard and saw at the show, not to mention the various promotional demonstration videos we have seen recently.

Pratt: The most important thing to understand is that not all roads are the same. Most of the time our roads are not complex, so current autonomous driving technology could let us take a nap from time to time, move around, or chat. But some roads are complex, and existing technology cannot handle them.

What we need to focus on is the probability of this happening: whether we can ensure that the car can operate autonomously over the entire given route without any problems. Level 5 requires that the car never needs human intervention in any situation.

So when a company says, "We can do full self-driving in this mapped area, and we've mapped almost every area," that doesn't mean Level 5. It's actually Level 4. If I run into that, I'll keep asking, "Day or night, any weather, any traffic condition?"

You'll find that the concept of this level is a bit vague and not very clearly defined. The problem with the term Level 4 or "full self-driving" is that it is a very broad concept.

For example, my car can drive itself fully autonomously in a dedicated lane, which is not much different from a train driving on tracks. But driving it on the chaotic streets of Rome on a rainy day is a completely different matter because it is much more difficult.

The term "full self-driving" is too broad, and you have to ask yourself, "What does it really mean? What is the actual situation?" Usually you will find that the "full self-driving" he is talking about has many limitations, such as traffic, weather, day or night, etc.

2. Excessive trust in autonomous driving technology

Q: This information creates expectation problems for consumers, who hear these new words and concepts every day but don't know what they really mean.

Pratt: You're right. As consumers, we like to build a model in our minds about how good the self-driving technology is. This model is often emotional: we only pay attention to certain things, and the imperfections of self-driving technology may be selectively ignored, which makes us either over-trust or underestimate the capabilities of self-driving.

Toyota believes it is very important to educate consumers so that they really understand the limitations of the technology. The entire industry needs to work together to make sure customers truly understand, from their own perspective, what self-driving cars can and cannot do and what the benefits are. That way, consumers will not blindly trust autonomous driving.

Over-trust is another big problem. Once you sell the car to consumers and they experience the benefits of autonomous driving in the right environments, it becomes difficult to change their perception of what the system can do.

As self-driving cars improve, the need for human intervention will decrease, which will exacerbate the problem of over-trust because users will say, "It didn't need me to take over before, it doesn't need me to take over now, and it won't need me to take over in the future."

In some ways, the worst-case scenario is a self-driving car that needs a human to take over only once every 200,000 miles: a typical driver who logs 100,000 miles will rarely, if ever, encounter a takeover request. But when the car suddenly issues a warning that the driver must take over, a driver who has not faced such a situation in a long time may be unprepared, precisely because of their complete trust in the car.

So we also worry: the better we do, the bigger the problem of over-trust will be.
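
To make the arithmetic above concrete, here is a minimal sketch of the over-trust math, assuming (an assumption not in the interview) that takeover requests arrive as a Poisson process:

```python
import math

# Figures from the discussion above: the car requests a human takeover on
# average once every 200,000 miles, and a driver logs 100,000 miles.
miles_between_takeovers = 200_000
lifetime_miles = 100_000

# Under the Poisson assumption, the chance a driver ever sees a takeover
# request is 1 - exp(-expected_requests).
expected_requests = lifetime_miles / miles_between_takeovers  # 0.5
p_at_least_one = 1 - math.exp(-expected_requests)

print(f"Expected takeover requests per driver: {expected_requests:.2f}")
print(f"P(ever seeing a takeover request):     {p_at_least_one:.0%}")  # ~39%
```

On these numbers, most drivers would never see a single takeover request, which is exactly the condition under which complete trust quietly builds up.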

3. Self-driving cars need to drive better than human drivers, but what is good enough?

Q: Do you think Level 5 fully autonomous driving is realistic or even possible?

Pratt: I think it's possible, and our current standards are based on the standards of human drivers.

I also raised this question at the press conference: "What standards should we use to judge how good this system is?" I think this is very important, but there is no answer yet. At present, it will take a long time for self-driving cars to be "perfect", so it is impossible to completely avoid accidents.

If the driving level of the self-driving system is 10% better than that of humans, will society accept it? Or will it have to be 10 times better to be accepted by society? I don't know.

Honestly, as technologists, we can’t tell the public the answer to this question. That role belongs to the government and everyone who will be affected. “Saving one life is good enough.” Or, “We will only accept it if it’s 10 times better than human driving.” I’m not sure. But I think we have to be very careful until we have an answer: don’t introduce technology that doesn’t meet society’s expectations.

When talking about self-driving car safety, I sometimes do what others do and try to make the case that “even if self-driving cars improve safety by 1%, we should promote them.” This is correct from a rational perspective, but it is also an emotional thing.

Humans are not rational: If we use plane crashes as an analogy, there is no doubt that airplanes are the safest way to travel, and you should never have to worry about plane crashes. But planes can crash. When we see plane crashes, our fears are magnified, and we begin to worry about whether our own planes will crash.

Rationally, this worry makes no sense: cars have a much higher accident rate than airplanes, but whenever a plane crashes it makes headlines, and eventually we start worrying about planes rather than cars.

When it comes to accidents caused by human drivers, we might think, “This could happen to me, and I could make the same mistake.” If it’s a machine, I’m afraid people won’t have the same empathy because they’ll just want the machine to be perfect and not make mistakes. We know that AI systems, especially those based on machine learning, are not perfect and without flaws.

Because the information coming in through the sensors is so high-dimensional, the car will inevitably receive inputs it has never been trained on, and we still expect it to form a reasonable understanding of its surroundings from them.

Every so often, as we make new progress, there may be an accident caused by an error in the cognitive system. When that accident happens, what can we say? Who is to blame? We don't know the answer yet, but it is a very important question.

4. Machine learning and “The Car Can Explain” project

Q: James Kuffner, former head of Google's self-driving cars and current TRI CTO, talked about cloud robots at CES. Self-driving cars cannot completely avoid accidents, but every time an accident occurs, can the car manufacturer find out the cause of the accident and push software updates in time to prevent such accidents from happening again?

Pratt: I think it's very likely.

In fact, it would be surprising if we couldn't do that. We have very detailed driving logs that record what happened at the time of the accident. You ask an interesting question: Can we find out what actually caused the accident?

Why is it interesting? Because machine learning systems, and deep learning in particular, are powerful but do not lend themselves to analysis: you cannot simply inspect them to get answers, which is why finding the cause of an accident is so difficult.

We are supporting research at MIT and elsewhere in the hope of making progress on this. We are currently funding a project by MIT professor Gerald Sussman called "The Car Can Explain," which works on exactly this problem.

The logs are there, but who is responsible for the error? That’s a harder question. What can we do to make sure this doesn’t happen again? “I fixed this bug today, and I fixed another one tomorrow…” But the system is so large, there are so many places where things could go wrong.

It turns out that testing, testing, and more testing is the most important thing. Globally, cars travel a total of about 10 trillion kilometers, so testing only a few hundred kilometers can hardly cover every situation. You need another way to improve the system's capabilities, and accelerated simulation testing is a key part of it. We don't simulate the perfect situation, with the sun shining and traffic flowing smoothly; we want to simulate bad weather and difficult environments.
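
One standard way to realize the "accelerated simulation" Pratt mentions is importance sampling: deliberately over-sample rare, difficult conditions in the simulator, then reweight the outcomes back to real-world frequencies. The sketch below is a hypothetical illustration; the scenario categories, probabilities, and failure rates are all invented, not Toyota's:

```python
import random

# Illustrative real-world scenario mix: adverse conditions are rare, so
# naive sampling would almost never exercise them.
real_probs = {"clear": 0.90, "rain": 0.07, "snow": 0.02, "fog": 0.01}

# Biased simulator distribution: over-represent the hard cases.
sim_probs = {"clear": 0.25, "rain": 0.25, "snow": 0.25, "fog": 0.25}

# Stand-in for a full driving simulation run; returns True on failure.
FAILURE_RATE = {"clear": 1e-5, "rain": 1e-4, "snow": 1e-3, "fog": 2e-3}
def run_scenario(condition: str) -> bool:
    return random.random() < FAILURE_RATE[condition]

# Importance sampling: weight each simulated outcome by real/sim probability
# so the estimate still reflects real-world exposure.
n, weighted_failures = 200_000, 0.0
conditions = list(sim_probs)
weights = [sim_probs[c] for c in conditions]
for _ in range(n):
    c = random.choices(conditions, weights=weights)[0]
    if run_scenario(c):
        weighted_failures += real_probs[c] / sim_probs[c]

print(f"Estimated real-world failure rate: {weighted_failures / n:.1e}")
# True value under these toy numbers is 5.6e-5; naive sampling at real-world
# frequencies would almost never hit a snow or fog failure at all.
```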

Rod Brooks put it well: "Simulation is doomed to succeed." We are well aware of simulation's shortcomings, so we also do a lot of road testing to verify the simulation results. We also use simulators to test situations that are not part of regular testing but may be the causes of accidents.

For example, when an autonomous car encounters a road rage driver, even if the driver does not follow the rules and does not play by common sense, the autonomous driving system needs to make the correct decision.

We don’t test this situation over and over again in the real world, because most of the time it ends in a crash. We test it at intervals, and at the same time, enhance the performance of the autonomous driving system through simulation, but this simulation process is very difficult. Ultimately, this brings us to another field: formal methods (suitable for the description, development, and verification of software and hardware systems).

We hope to combine simulation and formal methods, but ultimately we need road testing. Deep learning is a great method, but it cannot guarantee that all inputs and decision-making behaviors are correct, and it is also very difficult to ensure that they are all correct.

5. Simulate the crazy behaviors that human drivers might do

Q: Regardless of the level of self-driving cars, humans are the most uncontrollable factor for them. Every time you test on the road, you will encounter various human behaviors. How do you simulate these crazy things that only humans can do?

Pratt: This is similar to how we simulate weather or traffic conditions. It's hard to simulate human behavior because everyone is different and there are so many possibilities, but we think it's possible to some extent.

As a driver, we can use theory of mind (the ability to understand our own mental states and those of the people around us) to imagine how other drivers are behaving while driving.

First, simulate it in your mind. For example, when we encounter a "four way stop" (an intersection with stop signs in each direction), what would I do if I were a driver? Theory of mind means that simulation is possible because we can predict how others will behave by building statistical models.
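
As a hypothetical illustration of that statistical modeling, the other driver's next move at a four-way stop can be represented as a categorical distribution estimated from logged observations. The action categories and counts below are invented for the sketch:

```python
from collections import Counter

# Invented observation counts; in practice these would come from fleet logs.
observations = Counter({
    "yields_in_turn": 880,    # follows right-of-way rules
    "goes_out_of_turn": 90,   # jumps the queue
    "rolls_through": 25,      # barely slows down
    "waves_us_through": 5,    # cedes their turn
})

total = sum(observations.values())
behavior_model = {action: n / total for action, n in observations.items()}

# Theory-of-mind-style prediction: before entering the intersection, estimate
# how likely the other driver is to violate right-of-way.
p_violation = behavior_model["goes_out_of_turn"] + behavior_model["rolls_through"]
print(f"P(other driver violates right-of-way): {p_violation:.1%}")  # 11.5%
```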

Q: Sometimes, human responses are not necessarily safe behaviors. How do you teach a car to make decisions in such situations?

Pratt: That’s an interesting question.

When you're driving on the highway and the speed limit is 55 mph, do you go 55 mph? Or do you go as fast as the other drivers around you? What's safest?

I don't want to give an official answer, but this is a difficult question and we have discussed this with Toyota's legal department. They also think it is difficult to give a definite answer.

6. Computer hardware needs a revolution

Q: After the press conference, you mentioned that powering the onboard computers in electric vehicles, and keeping them cool, is a big problem. We’ve been focusing on how difficult decision-making is for autonomous cars, but what else do we still need to figure out?

Pratt: I like this field because I have a team behind me that specializes in hardware.

I used to study neurophysiology, so computational efficiency matters a great deal to me. The human brain consumes only about 50 watts, while most autonomous driving systems consume several kilowatts. And the brain is not only handling driving; it is thinking about other things at the same time. Perhaps only 10 watts are actually devoted to driving.

We don’t know how much computing power is appropriate for a self-driving car. Very likely, a tenfold increase in computing power would improve the system significantly, but not tenfold; a hundredfold increase might yield further gains, but with diminishing returns.

The performance of autonomous driving systems will continue to improve, but I don’t know what the growth curve will be. Therefore, in addition to software, testing, etc., we also need to redesign computer hardware to make it as efficient as our brains.
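
The efficiency gap is easy to put in rough numbers using the figures Pratt cites ("several kilowatts" is taken here as 2 kW, an assumption):

```python
# Rough power-budget comparison based on the figures in the interview.
brain_total_watts = 50      # whole-brain power consumption
brain_driving_watts = 10    # Pratt's estimate of the share spent on driving
av_system_watts = 2_000     # "several kilowatts"; 2 kW as a stand-in

print(f"vs. whole brain:   {av_system_watts / brain_total_watts:.0f}x the power")
print(f"vs. driving share: {av_system_watts / brain_driving_watts:.0f}x the power")
# Roughly 40x and 200x, which is why he argues the hardware needs a revolution.
```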

As for the other problems you mentioned that need to be solved, I think there is still a lot of room for development in sensors.

LiDAR is great, but there is still a lot of room for improvement. For example, the sampling density is still low; it cannot compare with human vision for detecting cars in the distance. And for a Level 3 car, you need to reserve time for the driver to react: unless your sensors can detect and understand what is happening far ahead, it is impossible to warn the driver in advance.
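
A back-of-the-envelope version of that Level 3 hand-off constraint, with illustrative numbers (the speed and takeover budget are assumptions, not figures from the interview):

```python
# How far ahead must sensors perceive to buy a Level 3 driver time to take over?
highway_speed_ms = 30.0    # ~108 km/h, assumed
takeover_budget_s = 10.0   # assumed time to rouse and re-engage the driver

required_range_m = highway_speed_ms * takeover_budget_s
print(f"Required perception range: {required_range_m:.0f} m")  # 300 m
# At 300 m a LiDAR return is very sparse, which is the sampling-density
# limitation described above.
```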

The sensors also need to be cheap, shockproof, and able to last 10 to 20 years. Most people think automotive-grade quality is low, but it is actually very high. I used to do a lot of mil-spec work at DARPA, and meeting automotive quality standards is genuinely difficult.

For example, cameras may have to operate in environments as different as the desert and Alaska, facing corrosive substances such as salt and rust, or lenses coated in dust. Ensuring that a car's sensors keep working properly across all these environments is quite difficult.

I think technical people in this field share a common desire: that the media, and especially the public, be better educated and truly understand what is happening in this industry. Take the term "full self-driving," which is easily misunderstood. What does "full" mean? It can mean many things.

So it is better not to say, "We need self-driving cars because we want to save lives." There are many ways to assist human drivers. Most of the time these assistance systems are not active; they only occasionally give a warning or a prompt and, if necessary, take over the car from the driver. Such systems do not need to be competent in every situation; they just need to handle the worst cases.

7. Man and machine: Who should guard whom?

Q: Currently, machines are good at things that humans are not good at, such as the need to concentrate all the time and pay attention to the car in front or the lane. Is this correct?

Pratt: That's right. Humans and machines complement each other: machines are able to stay alert all the time and never get tired, while humans are good at dealing with complex situations, which is the weakness of machines. So how can we make the two complement each other instead of conflicting with each other? Another way to think about it is: Who should play the role of guardian? Is it the autonomous driving system or the human?

At present, the bar a machine must clear to act as the human's guardian is still very high. Our goal is that, in the end, humans will not need to worry about anything and can leave everything to the AI. That is why Toyota is pursuing both approaches in parallel.

Many companies that come to CES are actually here to sell cars and technology, but Toyota does not need to sell. We hope to use this opportunity to educate users.

Q: Currently NHTSA (National Highway Traffic Safety Administration) only divides vehicles into four levels, right?

Pratt: Yes. NHTSA only has 4 levels, but SAE (Society of Automotive Engineers) is very smart and they further divide Level 4 into Level 4 and Level 5. Level 5 does not require human intervention at any time and in any place, while Level 4 requires human intervention in certain situations. Other than that, they are essentially the same.

Although there is nothing wrong with the SAE classification itself, it is widely misunderstood by the public. Many Level 2 systems today are being called Level 3, which is wrong. The key to Level 3 is that it does not require constant human supervision.
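
For reference, the SAE J3016 levels under discussion can be summarized as follows (one-line paraphrases, not the standard's full definitions):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased."""
    NO_AUTOMATION = 0      # human driver does everything
    ASSISTANCE = 1         # steering OR speed assistance
    PARTIAL = 2            # steering AND speed; human must supervise constantly
    CONDITIONAL = 3        # no constant supervision; human takes over on request
    HIGH = 4               # no human needed within a bounded operating domain
    FULL = 5               # no human needed anywhere, at any time

# The misconception Pratt flags: a system that still needs constant human
# supervision is Level 2, even when it is marketed as Level 3.
```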

Q: Is there a backup plan (Plan B) when the driver can't take over? I remember Freightliner said when introducing their self-driving trucks: If the driver doesn't take over, it will just stop on the highway because it doesn't know what to do.

Pratt: In the SAE classification, there is no mention of backup plans, such as where to park when the car must stop. I grew up in New Jersey, where many highways do not have places to park, so what do you do? This is also one of the difficulties.

Previously, some people have suggested solving this through remote operation: when an autonomous car gets into trouble, control automatically transfers to a remote driver in a call center. First of all, this requires a fast, stable network, safe from hackers and natural disasters. Most importantly, can a call-center operator respond instantly, take over, and handle the problem? Humans cannot stay vigilant indefinitely; that is a human weakness. We are not good at handling sudden emergencies promptly and correctly.

I'm not saying it's impossible, but it can be very difficult.
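
One way to see why this is "very difficult" is the latency budget; a quick sketch with assumed numbers:

```python
# Distance a car travels "blind" while a remote operator connects and reacts.
speed_ms = 30.0           # ~108 km/h, assumed
network_rtt_s = 0.25      # round-trip network latency, assumed
operator_react_s = 1.5    # cold-start human assessment and reaction, assumed

blind_distance_m = speed_ms * (network_rtt_s + operator_react_s)
print(f"Distance covered before remote control takes effect: {blind_distance_m:.0f} m")
# ~52 m at highway speed, before any network dropout or security issue.
```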


