
Why ethics matters for autonomous cars
BY The Ethics Centre 14 APR 2019
Whether a car is driven by a human or a machine, the choices to be made may have fatal consequences for people using the vehicle or others who are within its reach.
A self-driving car must play dual roles – that of the driver and of the vehicle. As such, there is a ‘de-coupling’ of the factors of responsibility that would normally link a human actor to the actions of a machine under his or her control. That is, the decision to act and the action itself are both carried out by the vehicle.
Autonomous systems are designed to make choices without direct reference to the personal preferences of the human beings who would normally exercise control over decision-making.
Given this, people are naturally invested in understanding how their best interests will be assessed by such a machine (or at least the algorithms that shape – if not determine – its behaviour).
In-built ethics from the ground up
There is a growing demand that the designers, manufacturers and marketers of autonomous vehicles embed ethics into the core design – and then ensure that those ethical constraints are not weakened or neutralised by subsequent owners.
We can accept that humans make stupid decisions all the time, but we hold autonomous systems to a higher standard.
This is easier said than done – especially when one understands that autonomous vehicles are unlikely ever to be entirely self-sufficient. For example, they will often be integrated into wider networks (e.g. geospatial positioning systems) that complement their onboard systems.
A complicated problem
This will exacerbate the difficulty of assigning responsibility in an already complex network of interdependencies.
If there is a failure, will the fault lie with the designer of the hardware, or the software, or the system architecture…or some combination of these and others? What standard of care will count as being sufficient when the actions of each part affect the others and the whole?
This suggests that each design element needs to be informed by the same ethical principles – so as to ensure as much ethical integrity as possible. There is also a need to ensure that human beings are not reduced to the status of mere ‘network’ elements.
What we mean by this is ensuring that the complexity of human interests is not simply weighed in the balance by an expert system that can never really feel the moral weight of the decisions it must make.
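To make this concern concrete, here is a deliberately simplified Python sketch (purely illustrative, not drawn from any real vehicle’s software) of what ‘weighing interests in the balance’ can look like inside an expert system. Every name, weight and number below is a hypothetical assumption.

```python
# Illustrative only: a toy "expected harm" calculation of the kind an
# expert system might run. All names, weights and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str           # hypothetical manoeuvre the vehicle could take
    harm_to_occupants: float   # 0.0 (none) to 1.0 (fatal), assumed estimate
    harm_to_others: float      # same scale, assumed estimate
    probability: float         # likelihood the harm occurs, assumed estimate

def expected_harm(o: Outcome,
                  occupant_weight: float = 1.0,
                  other_weight: float = 1.0) -> float:
    """Collapse every human interest into a single number – exactly the
    reduction the article warns about."""
    return o.probability * (occupant_weight * o.harm_to_occupants
                            + other_weight * o.harm_to_others)

options = [
    Outcome("brake hard in lane", 0.2, 0.1, 0.6),
    Outcome("swerve onto verge", 0.4, 0.05, 0.3),
]

# The "decision" is just a minimum over numbers; nothing here feels
# the moral weight of what those numbers stand for.
choice = min(options, key=expected_harm)
print(choice.description)
```

Note how occupant_weight and other_weight turn competing human interests into tunable parameters. That is exactly the kind of setting the argument above suggests must be fixed by ethical principles at the design stage, rather than left open to later adjustment.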
For more insights on ethical technology, make sure you download our ‘Ethical by Design’ guide, where we take a detailed look at the principles companies need to consider when designing ethical technology.
Join the conversation
Would you travel in an autonomous car?