The “good enough” ethical setting for self-driving cars
By Ryan Jenkins, The Ethics Centre, 19 July 2016
Plenty of electronic ink has been spilled over the benefits self-driving cars offer. We have good reason to believe they could greatly reduce the number of fatalities from car accidents – studies suggest upwards of 90 percent of road accidents are caused by driver error.
Avoiding a crash altogether is clearly the best option, but even in crash scenarios some believe autonomous cars might be preferable. Facing a “no win” situation, a driverless car may have the opportunity to “optimise” the crash by minimising harm to those involved. However, choices about how to direct or distribute harm in these cases (for example, hit that person instead of the other) are ethically fraught and demand extraordinary scrutiny of a number of distinctly philosophical issues.
Can we be punished for inaction?
It would be unfair to expect car manufacturers to program their products to ‘crash ethically’ when the outcomes might get them into legal trouble. The law typically treats directly causing harm as worse than failing to prevent it. This means there might be difficulties in developing algorithms that simply minimise harm.
Given this, the law might condemn an autonomous car that steered away from five people and into one person in order to minimise the harm resulting from an accident. A judge might argue that the car steered into someone and so it did harm. The alternative, merely running over five people, results in more harm, but at least the car did not aim at any one of them.
But is inaction in this case morally justified if it leads to more harm? Philosophers have long disputed this distinction between doing harm and merely allowing harm to occur. It is the basis for perhaps the most famous philosophical thought experiment – the trolley problem.
Some philosophers argue that we can still be held responsible for inaction because not doing something still involves making a decision. For example, a doctor may kill her patient by withholding treatment, or a diplomat may offend a foreign dignitary by not shaking her hand. If algorithms that minimise harm are problematic because of a legal preference for inaction over the active causing of harm, there might be reason to ask the law to change.
Should we always try to minimise harm?
Even if we were to assume autonomous cars should minimise the total amount of harm that comes about from an accident, there are complex issues to resolve. Should cars try to minimise the total number of people harmed? Or minimise the kinds of harms that come about?
For example, if a car must choose between hitting one person head-on (a high-risk collision) and steering off the road, exposing several other people to less serious injuries, which is preferable? Moral philosophers will disagree about which of these options is better.
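To make the contrast concrete, here is a purely illustrative sketch – nothing in it comes from the article, and the option names, severity scores and the assumed 0–1 severity scale are all invented – showing how “minimise the number of people harmed” and “minimise total severity-weighted harm” can recommend different actions in the same scenario.

```python
# Illustrative sketch only: the article describes no concrete algorithm.
# Two hypothetical ways of scoring a crash option, to show they can disagree.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    injuries: list  # per-person injury severity on an assumed 0-1 scale


def people_harmed(option):
    """Count how many people are harmed at all."""
    return sum(1 for severity in option.injuries if severity > 0)


def total_weighted_harm(option):
    """Sum severity-weighted harm across everyone involved."""
    return sum(option.injuries)


options = [
    Option("swerve off road", injuries=[0.2, 0.2, 0.2, 0.2]),  # four minor injuries
    Option("continue ahead", injuries=[0.9]),                  # one severe injury
]

print(min(options, key=people_harmed).name)        # -> "continue ahead"
print(min(options, key=total_weighted_harm).name)  # -> "swerve off road"
```

With these made-up numbers, counting victims favours continuing ahead, while weighting by severity favours swerving – which is exactly the disagreement the article points to.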
Another complication arises when we consider that harm minimisation might require an autonomous car to allow its own passengers to be injured or even killed in cases where inaction wouldn’t have brought them to harm. Few consumers would buy a car they expected to behave this way, even if they would prefer everyone else’s car did.
Are people breaking the law more deserving of harm?
Minimising overall harm might in some cases lead to consequences many would find absurd. Imagine a driver who decided to play ‘chicken’ with an autonomous car – driving on the wrong side of the road and threatening to plough headlong into it. Should the passengers in the autonomous car be put at risk to try to avoid a crash that is only occurring because the other driver is breaking the law?
Perhaps self-driving cars need something like ‘legality-adjusted aggregate harm minimisation’ algorithms. Many people believe that those breaking the law are liable to greater harm, that they deserve a greater share of any harm that must be distributed, and that it would be unjust to require law-abiding citizens to share in that harm equally. If self-driving cars are to be commercially viable, they will need to reflect these values.
But this approach also faces problems. Engineers would need a reliable way to predict crash trajectories in a way that provided information about the severity of harms, which they aren’t yet able to do. Philosophers would also need a reliable way to assign weighted values to harms, for example, by assigning values to minor versus major injuries. And as a society we would need to determine how liable to harm someone becomes by breaking the law. For example, someone exceeding the speed limit by a small amount may not be as liable to harm as someone playing ‘chicken’.
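As a rough sketch of what a ‘legality-adjusted’ rule might look like – again purely illustrative, since the article proposes no concrete scheme and every severity score and liability factor below is made up – harm to each person could be discounted by how liable their lawbreaking makes them. Choosing those discounts is precisely the unresolved question described above.

```python
# Illustrative sketch only: all severities and liability factors are invented.

def plain_harm(outcome):
    """Unadjusted total harm: sum of severities."""
    return sum(severity for severity, _ in outcome)


def legality_adjusted_harm(outcome):
    """Discount each person's harm by an assumed 0-1 liability factor
    (0 = fully law-abiding, 1 = e.g. playing 'chicken')."""
    return sum(severity * (1.0 - liability) for severity, liability in outcome)


# The 'chicken' scenario: (severity, liability) per person affected.
outcomes = {
    "swerve off road": [(0.4, 0.0), (0.4, 0.0)],              # two law-abiding passengers hurt
    "hold course":     [(0.9, 1.0), (0.2, 0.0), (0.2, 0.0)],  # lawbreaker hurt badly, passengers slightly
}

print(min(outcomes, key=lambda k: plain_harm(outcomes[k])))              # -> "swerve off road"
print(min(outcomes, key=lambda k: legality_adjusted_harm(outcomes[k])))  # -> "hold course"
```

With these invented numbers, plain harm minimisation sacrifices the law-abiding passengers, while the legality-adjusted version holds course and lets the lawbreaker bear most of the harm – but everything turns on the liability factors, which is where the philosophical and legal work remains to be done.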
None of these issues is easy, and seeking sure-fire answers that every stakeholder accepts is likely impossible. Instead, perhaps we should seek overlapping consensus – narrowing the domain of possible algorithms down to those that are technically feasible, morally justified and legally defensible. Every proposal for autonomous car ethics is likely to generate some counterintuitive verdicts, but ongoing engagement between the various parties should continue in the hope of finding a set of all-round acceptable algorithms.