With great power comes great responsibility – but will tech companies accept it?
BY The Ethics Centre 23 NOV 2018
Technology needs to be designed to a set of basic ethical principles. Designers need to show how. Matt Beard, co-author of a new report from The Ethics Centre, demands more from the technology we use every day.
In The Amazing Spider-Man, a young Peter Parker is coming to grips with his newly-acquired powers. Spider-Man in nature but not in name, he suddenly finds himself with increased reflexes, strength and extremely sticky hands.
Unfortunately, the subway isn’t a controlled environment in which to awaken as a sudden superhuman. Parker’s hands stick to a woman’s shoulders and he’s unable to move them. His powers are doing exactly what they were designed to do, but with creepy, unsettling effects.
Spider-Man’s powers aren’t amazing yet; they’re poorly understood, disturbing and dangerous. As other commuters move to the woman’s defence, shoving Parker away, his sticky hands inadvertently rip her top clean off. Now his powers are invading people’s privacy.
A fully-fledged assault follows, but Parker’s Spider-Man reflexes kick in. He beats his assailants off, sending them careening into subway seats and knocking them unconscious, apologising the whole time.
Parker’s unintended creepiness, apologetic harmfulness and head-spinning bewilderment at his own power make a useful metaphor for thinking about another set of influential nerds: the technological geniuses building the ‘Fourth Industrial Revolution’.
Sudden power, the inability to exercise it responsibly, collateral damage and a need for restraint – it all sounds pretty familiar when we think about ‘killer robots’, mass data collection tools and genetic engineering.
This is troubling, because we need tech designers to, like Spider-Man, recognise (borrowing from Voltaire) that “with great power comes great responsibility”. And unfortunately, it’s going to take more than a comic book training sequence for us to figure this out.
For one thing, Peter Parker didn’t seek and profit from his powers before realising he needed to use them responsibly. For another, it’s going to take something more specific and meaningful than a general acceptance of responsibility for us to see the kind of ethical technology we desperately need.
And many companies do accept responsibility; they recognise the power and influence they have.
Just look at Mark Zuckerberg’s testimony before the US Congress:
It’s not enough to connect people, we need to make sure those connections are positive. It’s not enough to just give people a voice, we need to make sure people aren’t using it to harm other people or spread misinformation. It’s not enough to give people control over their information, we need to make sure the developers they share it with protect their information too. Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good.
Compare that to an earlier internal memo – which was intended to be a provocation more than a manifesto – in which a Facebook executive is far more flippant about their responsibility.
We connect people. That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide. So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people.
We can expect more from tech companies. But to do that, we need to understand exactly what technology is. This starts by deconstructing one of the most pervasive ideas going around: “technological instrumentalism”, the idea that tech is just a “value-neutral” tool.
Instrumentalists think there’s nothing inherently good or bad about tech because what matters is the people who use it. It’s the ‘guns don’t kill people, people kill people’ school of thought – but it’s starting to run out of steam.
What instrumentalists miss are the values, instructions and suggestions technologies offer to us. People kill people with guns, and when someone picks up a gun, they have the ability to engage with other people in a different way – as a shooter. A gun carries a set of ethical claims within it – claims like ‘it’s sometimes good to shoot people’. That may indeed be true, but that’s not the point – the point is, there are values and ethical beliefs built into technology. One major danger is that we’re often not aware of them.
Encouraging tech companies to be transparent about the values, ethical motivations, justifications and choices that have informed their design is critical to ensure design honestly describes what a product is doing.
Likewise, knowing who built the technology, who owns it and what part of the world they come from helps us understand whether there might be political motivations, security risks or other challenges we need to be aware of.
Alongside this general need for transparency, we need to get more specific. We need to know how the technology is going to do what it says it’ll do and provide the evidence to back it up. In Ethical by Design, we argue that technology designers need to commit to a set of basic ethical principles – lines they will never cross – in their work.
For instance, technology should do more good than harm. This seems straightforward, but it only works if we know when a product is harming someone. This suggests tech companies should track and measure both the good and bad effects their technology has. You can’t know if you’re doing your job unless you’re measuring it.
Once we do that, we need to remember that we – as a society and as a species – remain in control of the way technology develops. We cultivate a false sense of powerlessness when we tell each other how the future will be, when artificial intelligence will surpass human intelligence and how long it will be until we’ve all lost our jobs.
Technology is something we design – we shape it as much as it shapes us. Forgetting that is the ultimate irresponsibility.
As the Canadian philosopher and futurist Marshall McLuhan wrote, “There is absolutely no inevitability, so long as there is a willingness to contemplate what is happening.”