It's time to move on! The "trolley problem" should not stand in the way of autonomous driving technology
Article | Daiso Travel
Report from Leiphone.com (leiphone-sz)
When it comes to autonomous driving, many people instinctively think of the classic trolley problem, born in the 1970s. This most famous thought experiment in ethics is becoming a stumbling block for the entire industry. Should we now look forward, turn the page on this issue completely, and stop using it to call self-driving cars into question?
The trolley problem goes roughly as follows: a madman has tied five innocent people to a trolley track. A runaway trolley is hurtling toward them and will run them over in moments. Fortunately, you can pull a lever to divert the trolley onto another track. The problem is that the madman has tied one person to that other track as well. Given all this, should you pull the lever?
Extrapolating from the trolley problem does yield plenty of hard questions. Should an autonomous car sacrifice its driver to save pedestrians? In an accident, who should be sacrificed, the elderly or the young? If the vehicle can obtain information about the people around it, should it use that information to choose whom to hit once a collision becomes unavoidable?
In fact, the trolley problem has become an unavoidable topic in autonomous driving circles. Engineers at MIT even built a crowdsourced "Moral Machine" to compile a catalog of opinions on how future robots should respond in specific situations.
However, the trolley problem, unsolved for decades, has a flaw: the thought experiment breaks down when used to set moral rules for robots. Forcing it onto this field, especially when making policy, guiding design, or answering the public's questions, will lead to dangerous and incomplete conclusions about machine ethics.
Don't let utilitarianism blind you
Philosopher Judith Jarvis Thomson formally named the "trolley problem" in 1976, but the underlying scenario dates to 1967, when another philosopher, Philippa Foot, used it to explore the difference between what people intend and what they merely foresee.
Foot used abortion as an analogy. If a woman faces a life-threatening delivery, should doctors operate and try to save the child as well (this falls under "foresight": if it succeeds everyone is happy, but if it fails both will die), or should they directly end the child's life (this falls under "intention") to save the woman? Faced with the same difficult birth, different courses of action lead to different moral conclusions.
Foot extrapolated many similar situations at the time, one involving a trolley driver and another involving a judge. She imagined a mob threatening to storm the court unless the judge sentenced an innocent person to death. The judge's dilemma parallels the trolley problem: sacrifice the interests of the many for justice, or sacrifice the innocent for the benefit of the many.
After working through these cases, Foot wrote in her paper: "Most people who would choose to sacrifice the one man on the track to save more lives would be shocked by a judge who distorted right and wrong and took the life of an innocent person."
She concluded that there is a difference between what one does and what one allows, and that the distinction between avoiding harm and bringing aid therefore matters.
Though brief, Foot's paper offers today's readers a fresh way into the ethical dilemmas of self-driving cars, one that avoids the dead end of the trolley problem.
In part, this is because Foot worked in the tradition of virtue ethics, which traces back to Aristotle in ancient Greece. Virtue ethicists hold that a person's moral character matters as much as the outcomes of their actions.
When discussing self-driving cars, however, most people ignore virtue ethics and focus on final outcomes. In moral philosophy this approach is called consequentialism, and its most famous branch is utilitarianism. Adherents of this view weigh the likely consequences of an action before deciding what to do.
Utilitarian thinking is now deeply rooted in the rhetoric around self-driving cars: whenever the technology is promoted, its comprehensive safety benefits are the first thing mentioned.
Considering that driver error was a factor in 94% of the roughly 37,000 U.S. traffic deaths in 2016, replacing fallible humans with reliable robots looks like an unalloyed good.
The problem is that focusing on consequences alone blinds people to the shortcomings of self-driving cars. Just as Foot's trolley-driver and judge cases are not morally equivalent, self-driving cars may produce different moral, legal, and civic consequences in outwardly identical situations.
Some of the trolley problem's assumptions are hard to grant
Recently, an Uber self-driving test car struck and killed 49-year-old Elaine Herzberg in Tempe, Arizona, as she pushed her bicycle across the street. After I analyzed the legal implications of the accident, some readers mocked me as a utilitarian. What they overlook is that 5,376 pedestrians were killed by cars in the United States in 2015, and the news does not report those deaths one by one.
Soon, self-driving cars could reduce or even eliminate pedestrian deaths. Apply that idea to the trolley problem and the tracks become time rather than space: sacrificing one person on one track remains tragic, but it may seem more justifiable if it saves thousands on the other.
The problem is that this position requires assuming that Herzberg's death was like any other unfortunate pedestrian's. That may make sense statistically, but it may not make sense morally.
If nothing unexpected happens, future self-driving cars will surely be able to avoid tragedies like the one in Tempe; after all, sensors and computers react far faster than humans. As details of the fatal Uber accident gradually emerged, many experts said the crash was entirely avoidable.
Waymo's CEO added that his company's technology could have avoided such an accident entirely. After all, Waymo's safety drivers need to touch the steering wheel only once every 5,600 miles on average, while Uber's drivers are far busier (once every 13 miles).
On Arizona roads, the difference between a Waymo self-driving car and an Uber self-driving car is more important than the difference between a self-driving car and a human driver.
However, in order to attract companies to research, test, and deploy self-driving vehicles in Arizona, Governor Doug Ducey turned a blind eye and allowed them on the road without strict supervision.
These problems cannot be solved by puzzling over trolley-problem scenarios. Casting the Uber crash as a trolley problem assumes the test car detected Herzberg in time and then faced a deliberate choice between saving the pedestrian and saving its occupant.
Entering the trolley problem's territory also assumes the self-driving system was reliable and its safety functions were working, that is, that the lever in the trolley problem was not rusted in place. Neither assumption held in the Uber accident.
Foot anticipated how much context such scenes leave out. "In reality," she wrote, "it is hard to say the man tied alone to the track was doomed. Perhaps he could have found a foothold and saved himself before the train passed."
One way to explore these endless possibilities is to run the experiment over and over and mine patterns from public reactions. That is the Moral Machine's method: as with today's popular machine learning, a large dataset is essential. Another way, however, is to consider specific problems in their most appropriate moral context.
Foot also offered a classic example that has more in common with Uber's fatal crash than the trolley problem does. She moved the scenario to a hospital: five patients can be cured only by a special gas, but using the gas releases a poison into the adjacent ward, whose occupant cannot be moved.
In such a case the effect closely resembles the classic trolley problem, but the conclusions are less clear, not only because the intended and foreseen effects differ, but also because the moral imperative to avoid causing harm operates differently.
In the trolley problem, the driver has no harmless option: whichever way he steers brings dire consequences. In the hospital example, the doctors face a conflict between two duties, to bring aid and to avoid harm: curing the five means poisoning the one.
In fact, the situation facing Uber's test car was thornier still, because none of the parties involved (Uber, the safety driver, the government) had a clear understanding of the vehicle's capabilities. That makes the moral scenario of the Uber crash less about hypothetical future casualties and more about government regulation, corporate disclosure, and transportation policy.
Don’t ignore the complexity of morality
If the Uber accident is to yield a moral precedent that serves technologists, citizens, and policymakers alike, it is best viewed through the lens of moral luck. Consider a drunk man who risks driving home at night: one such driver makes it home safely, while another, under identical circumstances, kills someone.
We generally judge the latter more harshly, yet both committed the same wrong; the only difference lies in the outcome.
Seen in this light, Uber's fatal crash does not represent a value-neutral or justifiable sacrifice of one pedestrian for the safety of future passengers. Rather, it highlights that positive outcomes, such as safer cars and safer pedestrians, can just as easily be products of self-driving cars' moral luck.
Moral luck also offers other ways to think about self-driving cars, whose autonomous behavior is hard to predict with certainty.
Did Uber's safety drivers know and understand the consequences of their actions? Can a human intervene in a machine's operation without actively controlling it? Does Arizona's open invitation lessen Uber's culpability? These questions are now at the center of the case, in Arizona and beyond. For Elaine Herzberg, however, any answers come as cold comfort.
The purpose of this article is not to assign blame or praise, nor to celebrate or mourn the future of self-driving cars. Rather, it is a call for the greater moral complexity needed to address self-driving cars' problems now and in the future.
Ethics is not a simple calculus applied to a situation, nor a collection of popular opinions about a stylized case. When engineers, critics, journalists, or ordinary people treat the trolley problem as the ultimate challenge for self-driving cars, they give up on thinking through more complex moral situations.
For philosophers, thought experiments are a useful way to consider unknown outcomes or to reexamine accepted views, but they are thinking tools, not ready-made recipes.
The trolley problem has misled people into believing that self-driving cars are a mature, reliable technology that already exists, and that abstract hypothetical moral questions about them can be given concrete answers. In fact, we are still far from the era of truly self-driving cars.
In the meantime, citizens, governments, automakers, and technology companies must keep exploring the complex ethical consequences of self-driving cars.
It's time to pump the brakes before the trolley problem crushes everyone's dreams.