
It’s time to move on! The “trolley problem” should not hold back the development of autonomous driving technology

Last updated: 2018-04-04

Article | Daiso Travel

Report from Leiphone.com (leiphone-sz)

Leifeng.com note: This article is translated from The Atlantic; the original title is "Enough With the Trolley Problem."

When it comes to autonomous driving, many people reflexively think of the classic trolley problem, born in the 1970s. The most famous thought experiment in ethics is becoming a stumbling block for the entire industry. So should we look forward, turn the page on this problem completely, and stop using it to interrogate self-driving cars?

The trolley problem goes roughly like this: a madman has tied five innocent people to a trolley track. A runaway trolley is bearing down on them and will run them over in moments. Fortunately, you can pull a lever to divert the trolley onto another track. The problem is that the madman has tied one person to that other track as well. Given all this, should you pull the lever?

Extrapolating from the trolley problem does generate plenty of hard questions. Should an autonomous car sacrifice its driver to save pedestrians? In an unavoidable accident, who should be sacrificed, the elderly or the young? If a vehicle can obtain information about the people around it, should it use that information to choose whom to hit as it loses control?

In fact, the trolley problem has become an unavoidable topic in autonomous driving circles. Engineers at MIT even built a crowdsourced "Moral Machine" to catalog public opinion on how future robots should respond in specific situations.

However, the trolley problem, unresolved for decades, has a flaw: the thought experiment does not hold up when used to set moral conditions for robots. Forcing it into this field, especially when making policy, guiding design, or answering the questions people actually care about, will lead to dangerous and incomplete conclusions about machine ethics.



Don't let utilitarianism blind you


Philosopher Judith Jarvis Thomson formally named the "trolley problem" in 1976, but the underlying thought experiment dates to 1967, when another philosopher, Philippa Foot, explored the difference between what people intend and what they merely foresee.

At the time, Foot used abortion as an analogy. If a birth goes dangerously wrong, should doctors attempt an operation that tries to save both mother and child (the child's death falls under "foresight": if the operation succeeds everyone lives, but if it fails both die), or should they directly end the child's life (an "intended" death) to save the woman? Faced with the same difficult birth, the two courses of action lead to different moral conclusions.

Foot also worked out many similar scenarios at the time, one involving a trolley driver and another involving a judge. She imagined a mob threatening to storm the court unless the judge sentenced an innocent person to death. This second scenario resembles the trolley problem: the judge must choose between upholding justice at the cost of many lives and sacrificing one innocent person for the benefit of the majority.

After some research, Foot observed in her paper that most people who would choose to sacrifice the one person at the switch to save more lives would nevertheless be shocked by a judge who distorted right and wrong and took the life of an innocent person.

So, she concluded, there is a difference between what one does and what one allows to happen, and the distinction between avoiding harm and providing help therefore matters.

Although Foot's paper is brief, it offers today's readers a fresh perspective on the ethical dilemmas of self-driving cars, one that avoids the dead end of the "trolley problem."

In part, this is because Foot worked in the tradition of virtue ethics, which traces back to Aristotle in ancient Greece. Virtue ethicists hold that a person's character matters as much as what their actions bring about.

However, when discussing self-driving cars, most people automatically ignore virtue ethics and focus instead on final outcomes. In moral philosophy this approach is quite different from virtue ethics: it is called consequentialism, and utilitarianism is its most famous branch. In other words, adherents of this theory weigh the likely consequences of an action before doing anything.

Utilitarianism has become deeply rooted in the rhetoric around self-driving cars: whenever they are promoted, people cite the comprehensive safety gains they will bring.

Considering that 94% of the roughly 37,000 U.S. road deaths in 2016 involved driver error, replacing fallible humans with reliable robots looks like an unalloyed good.

The problem is that focusing on consequences alone blinds people to the shortcomings of self-driving cars. Just as Foot's trolley-driver and judge cases are not morally comparable, self-driving cars in the same situation may produce different moral, legal, and civic consequences.



Some assumptions in the trolley problem are difficult to make


Recently, an Uber self-driving test car struck and killed 49-year-old Elaine Herzberg in Tempe, Arizona, as she was pushing her bicycle across the street. After I analyzed the legal implications of the accident, some readers mocked me as a utilitarian. What they overlook is that 5,376 pedestrians were killed by cars in the United States in 2015, and the news does not report those deaths one by one.

Soon, self-driving cars could reduce or even eliminate pedestrian deaths. Mapped onto the trolley problem, the tracks become time rather than space: sacrificing one person on one track is still tragic, but it may seem more justifiable if it saves thousands.

The problem is that this position assumes Herzberg's death was like any other unfortunate pedestrian's. That may hold statistically, but not necessarily morally.

In the future, barring surprises, self-driving cars will surely be able to avoid tragedies like the one in Tempe; after all, sensors and computers react much faster than humans. As details of Uber's fatal accident gradually emerged, many experts said it was entirely avoidable.

Waymo's CEO also weighed in, saying his company's technology could have avoided the accident entirely. After all, Waymo's safety drivers need to "touch" the steering wheel only once every 5,600 miles, while Uber's drivers are far busier, intervening every 13 miles.
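The gap between those two disengagement figures is worth putting in common units. A minimal sketch (the 5,600-mile and 13-mile figures come from the article; the helper function and output format are purely illustrative):

```python
# Compare the reported driver-intervention rates for two autonomous test fleets,
# converting "one intervention every N miles" into interventions per 1,000 miles.

def interventions_per_1000_miles(miles_per_intervention: float) -> float:
    """Convert 'one intervention every N miles' to interventions per 1,000 miles."""
    return 1000.0 / miles_per_intervention

waymo = interventions_per_1000_miles(5600)  # roughly 0.18 interventions / 1,000 mi
uber = interventions_per_1000_miles(13)     # roughly 76.9 interventions / 1,000 mi

print(f"Waymo: {waymo:.2f} interventions per 1,000 miles")
print(f"Uber:  {uber:.1f} interventions per 1,000 miles")
print(f"Uber's drivers intervened roughly {uber / waymo:.0f}x as often")
```

On these numbers, Uber's safety drivers were intervening about 430 times as often as Waymo's, which is the sense in which the gap between two self-driving systems can dwarf the gap between a self-driving system and a human.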

On Arizona roads, the difference between a Waymo self-driving car and an Uber self-driving car is more important than the difference between a self-driving car and a human driver.

However, to attract more companies to research, test, and deploy self-driving vehicles in Arizona, Governor Doug Ducey "turned a blind eye" and allowed them on public roads without strict supervision.

These problems cannot be solved by reasoning through trolley-problem scenarios. To map the Uber fatal accident onto the trolley problem, one would have to assume that the test car saw Herzberg in time and then faced a deliberate choice between the pedestrian and the passenger.

Entering the scope of the trolley problem also assumes that the self-driving car is reliable and its safety functions are guaranteed; that is, that the lever in the trolley problem has not rusted in place. Neither assumption held in the Uber accident.

Foot anticipated this lack of context. "In reality," she wrote, "it's hard to say that the one man tied to the track is certainly doomed. Perhaps he could find a foothold and save himself before the trolley passes."

One way to explore these endless possibilities is to run the experiment over and over and mine patterns from the public's reactions. This is the Moral Machine's method, and, as with today's popular machine learning, a large data set is essential. Another way is to consider specific problems in their most appropriate moral context.
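Aggregating public reactions, as the "Moral Machine" does, amounts to collecting many judgments on the same dilemma and extracting the majority pattern. A toy sketch of that aggregation step (the scenario labels and responses below are invented for illustration; this is not MIT's actual pipeline):

```python
from collections import Counter

# Hypothetical crowd responses to one dilemma: each entry is one respondent's
# preferred action for the vehicle in the same scenario.
responses = [
    "swerve", "stay", "swerve", "swerve", "stay",
    "swerve", "stay", "swerve", "swerve", "stay",
]

def majority_preference(votes):
    """Return the most common answer and its share of the vote."""
    tally = Counter(votes)
    choice, count = tally.most_common(1)[0]
    return choice, count / len(votes)

choice, share = majority_preference(responses)
print(f"Crowd majority: {choice} ({share:.0%})")  # prints "Crowd majority: swerve (60%)"
```

The limitation the article points to is visible even in this toy: a majority vote records what people say they prefer in an abstract scenario, not whether the scenario's assumptions (a reliable car, a genuine binary choice) hold in any real crash.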

Foot also offered a classic example that has more in common with Uber's fatal crash than the trolley problem does. She moves the scenario to a hospital: five patients can be cured only by a special gas, but using it releases a poison into the adjacent ward, whose patients cannot be moved.

In such cases the effect closely resembles the classic trolley problem, but the conclusions are less clear, not only because intention and foreseeable effect differ, but also because the moral imperative to avoid causing harm operates differently.

In the trolley problem, the driver has no harmless option: whichever way he turns, dire consequences follow. In the hospital example, the doctor faces a genuine conflict: treating some patients means harming others.

In fact, the situation faced by Uber’s test cars was even more thorny because none of the parties (Uber, the safety driver, and the government) had a clear understanding of the vehicle’s status. This makes the moral scenario of the Uber fatal accident less about future vehicle casualties and more about government regulation, corporate disclosure, and transportation policy.



Don’t ignore the complexity of morality


If the Uber accident is to set a precedent in moral philosophy that serves technologists, citizens, and policymakers alike, the disaster is better viewed through the lens of moral luck. The classic example: a drunk man risks the drive home at night and arrives safely, while another, under identical circumstances, causes an accident.

Intuitively, the latter seems guiltier, but in fact both committed the same wrong; the only difference is the outcome.

In this light, Uber’s fatal crash does not represent a value-neutral or justifiable sacrifice of a pedestrian for the safety of future passengers. Rather, it highlights the fact that positive outcomes (such as safer cars and pedestrians) can easily be a function of the moral luck of self-driving cars.

Moral luck also offers other ways to think about self-driving cars, where it’s hard to be sure of their autonomous behavior.

So, did Uber's safety driver know and understand the consequences of their actions? Can a human intervene in a machine's operation without actively controlling it? Does Arizona's open invitation help mitigate Uber's culpability? These questions are now at the center of the incident, both in Arizona and elsewhere. For Elaine Herzberg, however, none of this is much comfort.

The purpose of this article is not to place blame or praise, nor to celebrate or mourn the future of self-driving cars. Rather, it is to call for more moral complexity, which is necessary to solve the problems of self-driving cars now and in the future.

Ethics is not a simple calculus applied to a situation, nor is it a collection of human opinions about a typical case. When engineers, critics, journalists, or ordinary people think that the trolley problem is the ultimate challenge for self-driving cars, they automatically give up thinking about more complex moral situations.

For philosophers, thought experiments are a good way to consider unknown outcomes or to reconsider accepted views, but they are thinking tools, not ready-made recipes.

The trolley problem has misled people into believing that self-driving cars are a mature, reliable technology that already exists, and that abstract hypothetical moral choices can be answered concretely. In fact, the era of true self-driving cars is still far away.

In the meantime, citizens, governments, automakers and technology companies must continue to explore and find out more about the complex ethical consequences of self-driving cars.

It's time to pump the brakes before the trolley problem crushes everyone's dreams.

- END -
