To fix the problem of deepfakes we must treat the cause, not the symptoms
Opinion + Analysis | Relationships | Science + Technology
BY Matthew Beard The Ethics Centre 5 DEC 2019
This article was written for, and first published by The Guardian. Republished with permission.
Once technology is released, it’s like herding cats. Why do we continue to let the tech sector manage its own mess?
We haven’t yet seen a clear frontrunner emerge as the Democratic candidate for the 2020 US election. But I’ve been interested in another race – the race to see which buzzword is going to be a pivotal issue in political reporting, hot takes and the general political introspection that elections bring. In 2016 it was “fake news”. “Deepfake” is shaping up as one of the leading candidates for 2020.
This week the US House of Representatives intelligence committee asked Facebook, Twitter and Google what they were planning to do to combat deepfakes in the 2020 election. And it’s a fair question. With a bit of work, deepfakes could be convincing and misleading enough to make fake news look like child’s play.
Deepfake, a portmanteau of “deep learning” and “fake”, refers to AI software that can superimpose a digital composite face on to an existing video (and sometimes audio) of a person.
The term first rose to prominence when Motherboard reported on a Reddit user who was using AI to superimpose the faces of film stars on to existing porn videos, creating (with varying degrees of realness) porn starring Emma Watson, Gal Gadot, Scarlett Johansson and an array of other female celebrities.
However, there are also a range of political possibilities. Filmmaker Jordan Peele highlighted some of the harmful potential in an eerie video produced with Buzzfeed, in which he literally puts his words in Barack Obama’s mouth. Satisfying or not, hearing Obama call US president Trump a “total and complete dipshit” is concerning, given he never said it.
Just as concerning as the potential for deepfakes to be abused is that tech platforms are struggling to deal with them. For one thing, their content moderation issues are well documented. Most recently, a doctored video of Nancy Pelosi, slowed and pitch-edited to make her appear drunk, was tweeted by Trump. Twitter did not remove the video, YouTube did, and Facebook de-ranked it in the news feed.
For another, they have already tried, and failed, to moderate deepfakes. In a laudably fast response to the non-consensual pornographic deepfakes, Twitter, Gfycat, Pornhub and other platforms quickly acted to remove them and develop technology to help them do it.
However, once technology is released it’s like herding cats. Deepfakes are a moving feast and as soon as moderators find a way of detecting them, people will find a workaround.
But while there are important questions about how to deal with deepfakes, we’re making a mistake by siloing it off from broader questions and looking for exclusively technological solutions. We made the same mistake with fake news, where the prime offender was seen to be tech platforms rather than the politicians and journalists who had created an environment where lies could flourish.
The furore over deepfakes is a microcosm for the larger social discussion about the ethics of technology. It’s pretty clear the software shouldn’t have been developed and has led – and will continue to lead – to disproportionately more harm than good. And the lesson wasn’t learned. Recently the creator of an app called “DeepNude”, designed to give a realistic approximation of how a woman would look naked based on a clothed image, cancelled the launch fearing “the probability that people will misuse it is too high”.
What the legitimate use for this app is, I don’t know, but the response is revealing in how predictable it is. Reporting triggers some level of public outcry, at which point tech developers suddenly realise the error of their ways. Theirs is the conscience of hindsight: feeling bad after the fact rather than proactively looking for ways to advance the common good, treat people fairly and minimise potential harm. By now we should know better and expect more.
“Technology is a way of seeing the world. It’s a kind of promise – that we can bring the world under our control and bend it to our will.”
Why then do we continue to let the tech sector manage its own mess? Partly it’s because it is difficult, but it’s also because we’re still addicted to the promise of technology even as we come to criticise it. Technology is a way of seeing the world. It’s a kind of promise – that we can bring the world under our control and bend it to our will. Deepfakes afford us the ability to manipulate a person’s image. We can make them speak and move as we please, with a ready-made, if weak, moral defence: “No people were harmed in the making of this deepfake.”
But in asking for a technological fix to deepfakes, we’re fuelling the same logic that brought us here. Want to solve Silicon Valley? There’s an app for that! Eventually, maybe, that app will work. But we’re still treating the symptoms, not the cause.
The discussion around ethics and regulation in technology needs to expand to include more existential questions. How should we respond to the promises of technology? Do we really want the world to be completely under our control? What are the moral costs of doing this? What does it mean to see every unfulfilled desire as something that can be solved with an app?
Yes, we need to think about the bad actors who are going to use technology to manipulate, harm and abuse. We need to consider the now obvious fact that if a technology exists, someone is going to use it to optimise their orgasms. But we also need to consider what it means when the only place we can turn to solve the problems of technology is itself technological.
Big tech firms have an enormous set of moral and political responsibilities and it’s good they’re being asked to live up to them. An industry-wide commitment to basic legal standards, significant regulation and technological ethics will go a long way to solving the immediate harms of bad tech design. But it won’t get us out of the technological paradigm we seem to be stuck in. For that we don’t just need tech developers to read some moral philosophy. We need our politicians and citizens to do the same.
“At the moment we’re dancing around the edges of the issue, playing whack-a-mole as new technologies arise.”
At the moment we’re dancing around the edges of the issue, playing whack-a-mole as new technologies arise. We treat tech design and development like it’s inevitable. As a result, we aim to minimise risks rather than look more deeply at the values, goals and moral commitments built into the technology. As well as asking how we stop deepfakes, we need to ask why someone thought they’d be a good idea to begin with. There’s no app for that.

BY Matthew Beard
Matt is a moral philosopher with a background in applied and military ethics. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement. Formerly a fellow at The Ethics Centre, Matt is currently host on ABC’s Short & Curly podcast and the Vincent Fairfax Fellowship Program Director.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
The new rules of ethical design in tech
Opinion + Analysis | Business + Leadership | Science + Technology
BY Matthew Beard The Ethics Centre 26 SEP 2019
This article was written for, and first published by Atlassian.
Because tech design is a human activity, and there’s no such thing as human behaviour without ethics.
One of my favourite memes for the last few years is This is Fine. It’s a picture of a dog sitting in a burning building with a cup of coffee. “This is fine,” the dog announces. “I’m okay with what’s happening. It’ll all turn out okay.” Then the dog takes a sip of coffee and melts into the fire.
Working in ethics and technology, I hear a lot of “This is fine.” The tech sector has built (and is building) processes and systems that exclude vulnerable users, “nudges” that push users into privacy concessions they probably shouldn’t make, and designs that hardwire preconceived notions of right and wrong into technologies that will shape millions of people’s lives.
But many won’t acknowledge they could have ethics problems.

This is partly because, like the dog, they don’t concede that the fire might actually burn them in the end. Lots of people working in tech are willing to admit that someone else has a problem with ethics, but they’re less likely to believe that they themselves have an issue with ethics.
And I get it. Many times, people are building products that seem innocuous, fun, or practical. There’s nothing in there that makes us do a moral double-take.
The problem is, of course, that just because you’re not able to identify a problem doesn’t mean you won’t melt to death in the sixth frame of the comic. And there are issues you need to address in what you’re building, because tech design is a human activity, and there’s no such thing as human behaviour without ethics.
Your product probably already has ethical issues
To put it bluntly: if you think you don’t need to consider ethics in your design process because your product doesn’t generate any ethical issues, you’ve missed something. Maybe your product is still fine, but you can’t be sure unless you’ve taken the time to consider your product and stakeholders through an ethical lens.
Look at it this way: If you haven’t made sure there are no bugs or biases in your design, you haven’t been the best designer you could be. Ethics is no different – it’s about making people (and their products) the best they can be.
Take Pokémon Go, for example. It’s an awesome little mobile game that gives users the chance to feel like Pokémon trainers in the real world. And it’s a business success story, recording a profit of $3 billion at the end of 2018. But it’s exactly the kind of innocuous-seeming app most would think doesn’t have any ethical issues.
But it does. It distracted drivers, brought users to dangerous locations in the hopes of catching Pokémon, disrupted public infrastructure, didn’t seek the consent of the sites it included in the game, unintentionally excluded rural neighbourhoods (many populated by racial minorities), and released Pokémon in offensive locations (for instance, a poison gas Pokémon in the Holocaust Museum in Washington DC).
Quite a list, actually.
This is a shame, because all of this meant that Pokémon Go was not the best game it could be. And as designers, that’s the goal – to make something great. But something can’t be great unless it’s good, and that’s why designers need to think about ethics.
Here are a few things you can embed within your design processes to make sure you’re not going to burn to death, ethically speaking, when you finally launch.
1. Start with ethical pre-mortems
When something goes wrong with a product, we know it’s important to do a postmortem to make sure we don’t repeat the same mistakes. Postmortems happen all the time in ethics. A product is launched, a scandal erupts, and ethicists wind up as talking heads on the news discussing what went wrong.
As useful as postmortems are, they can also be ways of washing over negligent practices. When something goes wrong and a spokesperson says, “We’re going to look closely at what happened to make sure it doesn’t happen again,” I want to say, “Why didn’t you do that before you launched?” That’s what an ethical premortem does.
Sit down with your team and talk about what would make this product an ethical failure. Then work backwards to the root causes of that possible failure. How could you mitigate that risk? Can you reduce the risk enough to justify going forward with the project? Are your systems, processes and teams set up in a way that enables ethical issues to be identified and addressed?
Tech ethicist Shannon Vallor provides a list of handy premortem questions:
- How Could This Project Fail for Ethical Reasons?
- What Would be the Most Likely Combined Causes of Our Ethical Failure/Disaster?
- What Blind Spots Would Lead Us Into It?
- Why Would We Fail to Act?
- Why/How Would We Choose the Wrong Action?
- What Systems/Processes/Checks/Failsafes Can We Put in Place to Reduce Failure Risk?
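If it helps to make the exercise concrete, here is a minimal sketch – an illustration of mine, not something from Vallor or this article – of how a team might record its premortem answers alongside other project documentation. The class names, field names and example project are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PremortemItem:
    """One premortem question, the team's answer, and any planned mitigations."""
    question: str
    answer: str
    mitigations: List[str] = field(default_factory=list)


@dataclass
class EthicalPremortem:
    """A lightweight, illustrative record of an ethical premortem for a project."""
    project: str
    items: List[PremortemItem] = field(default_factory=list)

    def unresolved(self) -> List[PremortemItem]:
        # Items with no recorded mitigation are the ones to resolve before launch.
        return [item for item in self.items if not item.mitigations]


premortem = EthicalPremortem(
    project="face-filter-app",  # hypothetical project name
    items=[
        PremortemItem(
            question="How could this project fail for ethical reasons?",
            answer="User photos reused to train models without meaningful consent.",
            mitigations=["Process images on-device by default", "Explicit opt-in for training data"],
        ),
        PremortemItem(
            question="What blind spots would lead us into it?",
            answer="Assuming users read the terms of service.",
        ),
    ],
)

for item in premortem.unresolved():
    print(f"Unmitigated risk: {item.question} -> {item.answer}")
```

The point of writing it down is simply that unresolved risks become visible before launch rather than in a postmortem.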
2. Ask the Death Star question
The book Rogue One: Catalyst tells the story of how the galactic empire managed to build the Death Star. The strategy was simple: take many subject matter experts and get them working in silos on small projects. With no team aware of what other teams were doing, only a few managers could make sense of what was actually being built.
Small teams, working in a limited role on a much larger project, with limited connection to the needs, goals, objectives or activities of other teams. Sound familiar? Siloing is a major source of ethical negligence. Teams whose workloads, incentives and interests are limited to their particular contribution can seldom identify the downstream effects of that contribution, or what might happen when it’s combined with other work.
While it’s unlikely you’re secretly working for a Sith Lord, it’s still worth asking:
- What’s the big picture here? What am I actually helping to build?
- What contribution is my work making and are there ethical risks I might need to know about?
- Are there dual-use risks in this product that I should be designing against?
- If there are risks, are they worth it, given the potential benefits?
3. Get red teaming
Anyone who has worked in security will know that one of the best ways to know if a product is secure is to ask someone else to try to break it. We can use a similar concept for ethics. Once we’ve built something we think is great, ask some people to try to prove that it isn’t.
Red teams should ask:
- What are the ethical pressure points here?
- Have you made trade-offs between competing values/ideals? If so, have you made them in the right way?
- What happens if we widen the circle of possible users to include some people you may not have considered?
- Was this project one we should have taken on at all? (If you knew you were building the Death Star, it’s unlikely you could ever make it an ethical product. It’s a WMD.)
- Is your solution the only one? Is it the best one?
4. Decide what your product’s saying
Ever seen a toddler discover a new toy? Their first instinct is to test the limits of what they can do. They’re not asking what the designer intended; they’re testing how the item can satisfy their needs, whatever those may be. In this case they chew it, throw it, paint with it, push it down a slide… a toddler can’t access the designer’s intention. The only prompts they have are those built into the product itself.
It’s easy to think about our products as though they’ll only be used in the way we want them to be used. In reality, though, technology design and usage is more like a two-way conversation than a set of instructions. Given this, it’s worth asking: if the user had no instructions on how to use this product, what would they infer purely from the design?
For example, we might infer from the hidden-away nature of some privacy settings on social media platforms that we shouldn’t tweak our privacy settings. Social platforms might say otherwise, but their design tells a different story. Imagine what your product would be saying to a user if you let it speak for itself.
This is doubly important, because your design is saying something. All technology is full of affordances – subtle prompts that invite the user to engage with it in some ways rather than others. They’re there whether you intend them to be or not, but if you’re not aware of what your design affords, you can’t know what messages the user might be receiving.
Design teams should ask:
- What could a user infer from the design about how a product can/should be used?
- How do you want people to use this?
- How don’t you want people to use this?
- Do your design choices and affordances reflect these expectations?
- Are you unnecessarily preventing other legitimate uses of the technology?
5. Don’t forget to show your work
One of the (few) things I remember from my high school math classes is this: you get one mark for getting the right answer, but three marks for showing the working that led you there.
It’s also important for learning: if you don’t get the right answer, being able to interrogate your process is crucial (that’s what a post-mortem is).
For ethical design, the process of showing your work is about being willing to publicly defend the ethical decisions you’ve made. It’s a practical version of The Sunlight Test – where you test your intentions by asking if you’d do what you were doing if the whole world was watching.
Ask yourself (and your team):
- Are there any limitations to this product?
- What trade-offs have you made (e.g. between privacy and user-customisation)?
- Why did you build this product (what problems are you solving?)
- Does this product risk being misused? If so, what have you done to mitigate those risks?
- Are there any users who will have trouble using this product (for instance, people with disabilities)? If so, why can’t you fix this and why is it worth releasing the product, given it’s not universally accessible?
- How likely is it that the good and bad effects will actually happen?
Ethics is an investment
I’m constantly amazed at how much money, time and personnel organisations are willing to invest in culture initiatives, wellbeing days and the like, while not spending a penny on ethics. There’s a general sense that if you’re a good person, then you’ll build ethical stuff, but the evidence overwhelmingly proves that’s not the case. Ethics needs to be something you invest in learning about, building resources and systems around, recruiting for, and incentivising.
It’s also something that needs to be engaged in for the right reasons. You can’t go into this process because you think it’s going to make you money or recruit the best people, because you’ll abandon it the second you find a more effective way to achieve those goals. A lot of the talk around ethics in technology at the moment has a particular flavour: anti-regulation. There is a hope that if companies are ethical, they can self-regulate.
I don’t see that as the role of ethics at all. Ethics can guide us toward making the best judgements about what’s right and what’s wrong. It can give us precision in our decisions, a language to explain why something is a problem, and a way of determining when something is truly excellent. But people also need justice: something to rely on if they’re the least powerful person in the room. Ethics has something to say here, but so do law and regulation.
If your organisation says they’re taking ethics seriously, ask them how open they are to accepting restraint and accountability. How much are they willing to invest in getting the systems right? Are they willing to sack their best performer if that person isn’t conducting themselves the way they should?
MIT Media Lab: look at the money and morality behind the machine
Opinion + Analysis | Business + Leadership | Science + Technology
BY Matthew Beard 18 SEP 2019
When convicted sex offender, alleged sex trafficker and financier to the rich and famous Jeffrey Epstein was arrested and subsequently died in prison, there was a sense that some skeletons were about to come out of the closet.
However, few would have expected that the death of a well-connected, socially high-flying predator would bring into disrepute one of the world’s most reputable AI research labs. But this is 2019, so anything can happen. And happen it has.
Two weeks ago, New Yorker magazine’s Ronan Farrow reported that Joi Ito, the director of MIT’s prestigious Media Lab, which aims to “focus on the study, invention, and creative use of digital technologies to enhance the ways that people think, express, and communicate ideas, and explore new scientific frontiers,” had accepted $7.5 million in anonymous funding from Epstein, despite knowing MIT had him listed as a “disqualified donor” – presumably because of his previous convictions for sex offences.
Emails obtained by Farrow suggest Ito wrote to Epstein asking for funding to continue to pay staff salaries. Epstein allegedly procured donations from other philanthropists – including Bill Gates – for the Media Lab, but all record of Epstein’s involvement was scrubbed.
Since this has been made public, Ito – who lists one of his areas of expertise as “the ethics and governance of technology” – has resigned. The funding director who worked with Ito at MIT, Peter Cohen, now working at another university, has been placed on administrative leave. Staff at MIT Media Lab have resigned in protest and others are feeling deeply complicit, betrayed and disenchanted at what has transpired.
What happened at MIT’s Media Lab is an important case study in how the public conversation around the ethics of technology needs to expand to consider more than just the ethical character of systems themselves. We need to know who is building these systems, why they’re doing so and who is benefitting. In short, ethical considerations need to include a supply chain analysis of how the technology came to be created.
This is important because technology ethics – especially AI ethics – is currently going through what political philosopher Annette Zimmerman calls a “gold rush”. A range of groups, including The Ethics Centre, are producing guides, white papers, codes, principles and frameworks to try to respond to the widespread need for rigorous, responsive AI ethics. Some of these parties genuinely want to solve the issues; others just want to be able to charge clients and have retail products ready to go. In either case, the underlying concern is that the kind of ethics that gets paid gets made.
For instance, funding is likely to dictate where the world’s best talent is recruited and what problems they’re asked to solve. Paying people to spend time thinking about these issues, and providing the infrastructure for multidisciplinary (or in MIT Media Lab’s case, “antidisciplinary”) groups to collaborate, is expensive. Those with money will have a much louder voice in public and social debates around AI ethics and considerable power to shape the norms that will eventually shape the future.
This is not entirely new. Academic research – particularly in the sciences – has always been fraught. It often requires philanthropic support, and it’s easy to rationalise the choice to take this from morally questionable people and groups (and, indeed, the downright contemptible). Vox’s Kelsey Piper summarised the argument neatly: “Who would you rather have $5 million: Jeffrey Epstein, or a scientist who wants to use it for research? Presumably the scientist, right?”
What this argument misses, as Piper points out, is that when it comes to these kinds of donations, we want to know where they’re coming from. Just as we don’t want to consume coffee made by slave labour, we don’t want to be chauffeured around by autonomous vehicles whose AI was paid for by money that helped boost the power and social standing of a predator.
More significantly, it matters that survivors of sexual violence – perhaps even Epstein’s own – might step into vehicles, knowingly or not, whose very existence stemmed from the crimes whose effects they now live with.
Paying attention to these concerns is simply about asking the same questions technology ethicists already ask in a different context. For instance, many already argue that the provenance of a tech product should be made transparent. In Ethical by Design: Principles for Good Technology, we argue that:
The complete history of artefacts and devices, including the identities of all those who have designed, manufactured, serviced and owned the item, should be freely available to any current owner, custodian or user of the device.
It’s a natural extension of this to apply the same requirements to the funding and ownership of tech products. We don’t just need to know who built them; perhaps we also need to know who paid for them to be built, and who is earning capital (financial or social) as a result.
AI and data ethics have recently focused on concerns around the unfair distribution of harms. It’s not enough, many argue, that an algorithm is beneficial 95% of the time, if the 5% who don’t benefit are all (for example) people with disabilities or from another disadvantaged, minority group. We can apply the same principle to the Epstein funding: if the moral costs of having AI funded by a repeated sex offender are borne by survivors of sexual violence, then this is an unacceptable distribution of risks.
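To make the arithmetic of that example concrete, here is a minimal sketch using made-up numbers – an illustration of mine, not data from the article – showing how an algorithm can look 95% beneficial overall while the people who miss out are concentrated in one group.

```python
# Illustrative sketch with synthetic counts - not real data.
outcomes = {
    # group name: (number who benefited, number who did not)
    "group_a": (920, 10),
    "group_b": (30, 40),  # a small minority group bearing most of the harm
}

total_benefited = sum(benefited for benefited, _ in outcomes.values())
total_people = sum(benefited + harmed for benefited, harmed in outcomes.values())
print(f"Overall benefit rate: {total_benefited / total_people:.1%}")  # 95.0%

for group, (benefited, harmed) in outcomes.items():
    print(f"{group}: benefit rate {benefited / (benefited + harmed):.1%}")
# group_a: 98.9%, group_b: 42.9% - the "95% beneficial" headline hides the skew.
```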
MIT Media Lab, like other labs around the world, literally wants to design the future for all of us. It’s not unreasonable to demand that MIT Media Lab and other groups in the business of designing the future, design it on our terms – not those of a silent, anonymous philanthropist.
Should you be afraid of apps like FaceApp?
Opinion + Analysis | Relationships | Science + Technology
BY Matthew Beard 30 JUL 2019
Until last week, you would have been forgiven for thinking a meme couldn’t trigger fears about international security.
But since the widespread concerns over FaceApp last week, many are asking renewed questions about privacy, data ownership and transparency in the tech sector. Most of the reportage, however, hasn’t gotten to the biggest ethical risk the FaceApp case reveals.
What is FaceApp?
In case you weren’t in the know, FaceApp is a ‘neural transformation filter’.
Basically, it uses AI to take a photo of your face and make it look different. The recent controversy centred on its ability to age people, pretty realistically, from just a single photo. Use of the app was widespread, creating a viral trend – there were clicks and engagements to be made out of the app, so everyone started to hop on board.
Where does your data go?
With the increasing popularity came increasing scrutiny. A number of people soon noticed that FaceApp’s terms of use seemed to give the company a huge range of rights to access and use the photos it had collected. There were fears the app could access all the photos in your photo stream, not just the one you chose to upload.
There were questions about how you could delete your data from the service. And worst of all for many, the makers of the app, Wireless Labs, are based in Russia. US Senate Minority Leader Chuck Schumer even asked the FBI to investigate the app.
The media commentary has been pretty widespread, suggesting that the app sends data back to Russia, lacks transparency about how it will or won’t be used and has no accessible data ethics principles. At least two of those are true. There isn’t much in FaceApp’s disclosure that would give a user any sense of confidence in the app’s security or respect for privacy.
Unsurprisingly, this hasn’t amounted to much. Giving away our data in irresponsible ways has become a bit like comfort eating. You know it’s bad, but you’re still going to do it.
The reasons are likely similar to the reasons we indulge other petty vices: the benefits are obvious and immediate; the harms are distant and abstract. And whilst we’d all like to think we’ve got more self-control than the kids in those delayed gratification psychology experiments, more often than not our desire for fun or curiosity trumps any concern we have over how our data is used.
Should you be worried?
Is this a problem? To the extent that this data – easily accessed – can be used for a range of goals we likely don’t support, yes. It also gives rise to a range of complex ethical questions concerning our responsibility.
Let’s say I willingly give my data to FaceApp. This data is then aggregated and on-sold in a data marketplace. A dataset comprising millions of facial photos is then used to train facial recognition AI, which is used to track down political dissidents in Russia. To what extent should I consider myself responsible for political oppression on the other side of the world?
In climate change ethics, there is a school of thought that suggests even if our actions can’t change an outcome – for instance, by making a meaningful reduction to emissions – we still have a moral obligation not to make the problem worse.
It might be true that a dataset would still be on-sold without our input, but that alone doesn’t seem to justify adding our information or throwing up our hands and giving up. In this hypothetical, giving up – or not caring – means accepting my (admittedly small) role in human rights violations and political injustice.
A troubling peek into the future
In reality, it’s really unlikely that’s what FaceApp is actually using your data to do. It’s far more likely, according to the MIT Technology Review, that your face might be used to train FaceApp to get even better at what it does.
It might use your face to help improve software that analyses faces to determine age and gender. Or it might be used – perhaps most scarily – to train AI to create deepfakes or faces of people who don’t exist. All of this is a far cry from the nightmare scenario sketched out above.
But even if my horror story was accurate, would it matter? It seems unlikely.
By the time tech journalists were talking about the potential data issues with FaceApp, millions had already uploaded their photos into the app. The ship had sailed, and it set off with barely a question asked of it. It’s also likely that plenty of people read about the data issues and then installed the app just to see what all the fuss is about.
Who is responsible?
I’m pulled in two directions when I wonder who we should hold responsible here. Of course, designers are clever and intentionally design their apps in ways that make them smooth and easy to use. They eliminate the friction points that facilitate serious thinking and reflection.
But that speed and efficiency is partly there because we want it to be there. We don’t want to actually read the terms of use agreement, and the company willingly gives us a quick way to avoid doing so (whilst lying, and saying we have).
This is a Faustian pact – we let tech companies sell us stuff that’s potentially bad for us, so long as it’s fun.
The important reflection around FaceApp isn’t that the Russians are coming for us – a view that, as Kaitlyn Tiffany noted for Vox, smacks slightly of racism and xenophobia. The reflection is how easily we give up our principled commitments to ethics, privacy and wokeful use of technology as soon as someone flashes some viral content at us.
In Ethical by Design: Principles for Good Technology, Simon Longstaff and I made the point that technology isn’t just a thing we build and use. It’s a world view. When we see the world technologically, our central values are things like efficiency, effectiveness and control. That is, we’re more interested in how we do things than in what we’re doing.
Two sides of the story
For me, that’s the FaceApp story. The question wasn’t ‘is this app safe to use?’ (probably no less so than most other photo apps), but ‘how much fun will I have?’ It’s a worldview where we’re happy to pay any price for our kicks, so long as that price is hidden from us. FaceApp might not have used this impulse for maniacal ends, but it has demonstrated a pretty clear vulnerability.
Is this how the world ends, not with a bang, but with a chuckle and a hashtag?
Injecting artificial intelligence with human empathy
Opinion + Analysis | Relationships | Science + Technology
BY Allan Waddell 27 JUN 2019
The great promise of artificial intelligence is efficiency. The finely tuned mechanics of AI will free up societies to explore new, softer skills while industries thrive on automation.
However, if we’ve learned anything from the great promise of the Internet – which was supposed to bring equality by leveling the playing field – it’s clear new technologies can be rife with complications unwittingly introduced by the humans who created them.
The rise of artificial intelligence is exciting, but the drive toward efficiency must not happen without a corresponding push for strong ethics to guide the process. Otherwise, the advancements of AI will be undercut by human fallibility and biases. This is as true for AI’s application in the pursuit of social justice as it is in basic business practices like customer service.
Empathy
The ethical questions surrounding AI have long been the subject of science fiction, but today they are quickly becoming real-world concerns. Human intelligence has a direct relationship to human empathy. If this sensitivity doesn’t translate into artificial intelligence the consequences could be dire. We must examine how humans learn in order to build an ethical education process for AI.
AI is not merely programmed – it is trained like a human. If AI doesn’t learn the right lessons, ethical problems will inevitably arise. We’ve already seen examples, such as the tendency of facial recognition software to misidentify people of colour as criminals.
Biased AI
In the United States, a piece of software called Correctional Offender Management Profiling for Alternative Sanctions (Compas) was used to assess the risk of defendants reoffending and had an impact on their sentencing. Compas was found to be twice as likely to misclassify non-white defendants as higher risk offenders, while white defendants were misclassified as lower risk much more often than non-white defendants. This is a training issue. If AI is predominantly trained on Caucasian faces, it will disadvantage minorities.
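The kind of disparity described above can be made concrete with a short sketch. The numbers below are synthetic and purely illustrative – they are not the actual Compas data – but they show how per-group false positive rates (people wrongly flagged as high risk) are compared.

```python
# Illustrative sketch with synthetic records - not the actual Compas data.
# A "false positive" here is someone who did not reoffend but was flagged high risk.
predictions = {
    # group: list of (predicted_high_risk, actually_reoffended) pairs
    "white": [(True, False)] * 10 + [(False, False)] * 90,
    "non_white": [(True, False)] * 20 + [(False, False)] * 80,
}

for group, records in predictions.items():
    flags_for_non_reoffenders = [flagged for flagged, reoffended in records if not reoffended]
    false_positive_rate = sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)
    print(f"{group}: false positive rate {false_positive_rate:.0%}")
# white: 10%, non_white: 20% - the minority group is twice as likely to be wrongly flagged.
```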
This example might seem far removed from us here in Australia but consider the consequences if it were in place here. What if a similar technology was being used at airports for customs checks, or part of a pre-screening process used by recruiters and employment agencies?
“Human intelligence has a direct relationship to human empathy.”
If racism and other forms of discrimination are unintentionally programmed into AI, not only will it mirror many of the failings of analog society, but it could magnify them.
While heightened instances of injustice are obviously unacceptable outcomes for AI, there are additional possibilities that don’t serve our best interests and should be avoided. The foremost example of this is in customer service.
AI vs human customer service
Every business wants the most efficient and productive processes possible but sometimes better is actually worse. Eventually, an AI solution will do a better job at making appointments, answering questions, and handling phone calls. When that time comes, AI might not always be the right solution.
Particularly with more complex matters, humans want to talk to other humans. Not only do they want their problem resolved, but they want to feel like they’ve been heard. They want empathy. This is something AI cannot do.
AI is inevitable. In fact, you’re probably already using it without being aware of it. There is no doubt that the proper application of AI will make us more efficient as a society, but relying blindly on AI is inadvisable.
We must be aware of our biases when creating new technologies and do everything in our power to ensure they aren’t baked into algorithms. As more functions are handed over to AI, we must also remember that sometimes there’s no substitute for human-to-human interaction.
After all, we’re only human.
Allan Waddell is founder and Co-CEO of Kablamo, an Australian cloud based tech software company.
How to build good technology
WATCH | Business + Leadership | Climate + Environment | Science + Technology
BY Matthew Beard 2 MAY 2019
Dr Matthew Beard explains the key principles to guide the development of ethical technology at the Atlassian 2019 conference in Las Vegas.
Find out why technology designers have a moral responsibility to design ethically, the unintended ethical consequences of designs such as Pokémon Go, and the seven guiding principles designers need to consider when building new technology.
Whether you’re editing a genome, building a driverless car or writing a social media algorithm, Dr Beard says these principles offer the guidance and tools to do so ethically.
Download ‘Ethical By Design: Principles For Good Technology’
Why ethics matters for autonomous cars
Opinion + Analysis | Science + Technology
BY The Ethics Centre 14 APR 2019
Whether a car is driven by a human or a machine, the choices to be made may have fatal consequences for people using the vehicle or others who are within its reach.
A self-driving car must play dual roles – that of the driver and of the vehicle. As such, there is a ‘de-coupling’ of the factors of responsibility that would normally link a human actor to the actions of a machine under his or her control. That is, the decision to act and the action itself are both carried out by the vehicle.
Autonomous systems are designed to make choices without regard to the personal preferences of the human beings who would normally exercise control over decision-making.
Given this, people are naturally invested in understanding how their best interests will be assessed by such a machine (or at least the algorithms that shape – if not determine – its behaviour).
In-built ethics from the ground up
There is a growing demand that the designers, manufacturers and marketers of autonomous vehicles embed ethics into the core design – and then ensure that they are not weakened or neutralised by subsequent owners.
We can accept that humans make stupid decisions all the time, but we hold autonomous systems to a higher standard.
This is easier said than done – especially when one understands that autonomous vehicles are unlikely ever to be entirely self-sufficient. For example, autonomous vehicles will often be integrated into a network (e.g. geospatial positioning systems) that complements their integrated, onboard systems.
A complicated problem
This will exacerbate the difficulty of assigning responsibility in an already complex network of interdependencies.
If there is a failure, will the fault lie with the designer of the hardware, or the software, or the system architecture…or some combination of these and others? What standard of care will count as being sufficient when the actions of each part affects the others and the whole?
This suggests that each design element needs to be informed by the same ethical principles – so as to ensure as much ethical integrity as possible. There is also a need to ensure that human beings are not reduced to the status of being mere ‘network’ elements.
What we mean by this is to ensure the complexity of human interests are not simply weighed in the balance by an expert system that can never really feel the moral weight of the decisions it must make.
For more insights on ethical technology, make sure you download our ‘Ethical by Design’ guide, where we take a detailed look at the principles companies need to consider when designing ethical technology.
The Ethics of In Vitro Fertilization (IVF)
Opinion + Analysis | Science + Technology
BY The Ethics Centre 8 APR 2019
To understand the ethics of IVF (In vitro fertilisation) we must first consider the ethical status of an embryo.
This is because there is an important distinction to be made between when a ‘human life’ begins and when a ‘person’ begins.
The former (‘human life’) is a biological question – and our best understanding is that human life begins when the human egg is fertilised by sperm or otherwise stimulated to cause cell division to begin.
The latter is an ethical question – as the concept of ‘person’ relates to a being capable of bearing the full range of moral rights and responsibilities.
There are a range of other ethical issues IVF gives rise to:
- the quality of consent obtained from the parties
- the motivation of the parents
- the uses and implications of pre-implantation genetic diagnosis
- the permissibility of sex-selection (or the choice of embryos for other traits)
- the storage and fate of surplus embryos.
For most of human history, it was held that a human only became a person after birth. Then, as the science of embryology advanced, it was argued that personhood arose at the moment of conception – a view that made sense given the knowledge of the time.
However, more recent advances in embryology have shown that there is a period (of up to about 14 days after conception) during which it is impossible to ascribe identity to an embryo as the cells lack differentiation.
Given this, even the most conservative ethical positions (such as those grounded in religious conviction) should not disallow the creation of an embryo (and even its possible destruction if surplus to the parents’ needs) within the first 14-day window.
Let’s further explore the grounds of some more common objections. Some people object to the artificial creation of a life that would not be possible if left entirely to nature. Or they might object on the grounds that ‘natural selection’ should be left to do its work. Others object to conception being placed in the hands of mortals (rather than left to God or some other supernatural being).
When covering these objections it’s important to draw attention to existing moral values and principles. For example, human beings regularly intervene with natural causes – especially in the realm of medicine – by performing surgery, administering pharmaceuticals and applying other medical technologies.
A critic of IVF would therefore need to demonstrate why all other cases of intervention should be allowed – but not this.
Is technology destroying your workplace culture?
Opinion + Analysis | Business + Leadership | Science + Technology
BY Matthew Beard 6 APR 2019
If you were to put together a list of all the buzzwords and hot topics in business today, you’d be hard pressed to leave off culture, innovation or disruption.
They might even be the top three. In an environment of constant technological change, we’re continuously promised a new edge. We can have sleeker service, faster communication or better teamwork.
This all makes sense. Technology is the future of work. Whether it’s remote work, agile work flows or AI enhanced research, we’re going to be able to do more with less, and do it better.
For organisations who are doing good work, that’s great. And if those organisations are working for the good of society (as they should), that’s great for us all.
Without looking a gift horse in the mouth though, we should be careful technology enhances our work rather than distracting us from it.
Most of us can probably think of a time when our office suddenly had to work with a totally new, totally pointless bit of software. Out of nowhere, you’ve got a new chatbot, all your info has been moved to ‘the cloud’ or customer emails are now automated.
This is usually the result of what the comedian Eddie Izzard calls “techno-joy”. It’s the unthinking optimism that technology is a cure for all woes.
Unfortunately, it’s not. Techno-joyful managers are more headache than helper. But more than that, they can also put your culture – or worse, your ethics – in a tricky spot.
Here’s the thing about technology. It’s more than hardware or code. Technology carries a set of values with it. This happens in a few ways.
Techno-logic
All technology works through a worldview we call ‘techno-logic’. Basically, technology aims to help us control things by making the world more efficient and effective. As we explained in our recent publication, Ethical by Design:
Techno-logic sees the world as though it is something we can shape, control, measure, store and ultimately use. According to this view, techno-logic is the ‘logic of control’. No matter the question, techno-logic has one overriding concern: how can we measure, alter, control or use this to serve our goals?
Whenever you’re engaging with technology, you’re being invited and encouraged to see the world in a really narrow way. That can be useful – problem solving happens by ignoring what doesn’t matter and focussing on what’s important. But it can also mean we ignore stuff that matters more than just getting the job done as fast or effectively as we can.
A great example of this comes from Up in the Air, a film in which Ryan Bingham (George Clooney) works for a company that specialises in sacking people. When there are mass layoffs to be made, Bingham is there. Until technology comes to call. Research suggests video conferencing would be cheaper and more effective. Why fly people around America when you can sack someone from the comfort of your own office?
As Bingham points out, you do it because sometimes making something efficient destroys it. Imagine going on an efficient date or keeping every conversation as efficient as possible. We’d lose something essential, something rich and human.
With so much technology available to help with recruitment, performance management and customer relations, we need to be mindful that technology is fit for purpose. It’s very easy for us to be sucked into the logic of technology until suddenly, it’s not serving us, we’re serving it. Just look at journalism.
Drinking the affordance Kool-Aid
Journalism has always evolved alongside media. From newspaper to radio, podcasting and online, it’s a (sometimes) great example of an industry adapting to technological change. But at times, it over adapts, and the technological cart starts to pull the journalistic horse.
Today, online articles are ‘optimised’ to drive engagement and audience. This means stories are designed to hit a sweet spot in word count to ensure people don’t tune out, they’re given titles that are likely to generate clicks and traffic, and the kinds of things people are likely to read tend to get more attention.
A lot of that is common sense, but when it turns out that what drives engagement is emotion and conflict, this can put journalists in a bind. Are they impartial reporters of truth, lacking an audience, or do they massage journalistic principles a little so they can get the most readers they can?
I’ll leave it to you to decide which way journalism as an industry has gone. What’s worth noting is that many working in media weren’t aware of some of these changes whilst they were happening. That’s partly because they’re so close to the day-to-day work, but it can also be explained by something called ‘affordance theory’.
Affordance theory suggests that technological design contains little prompts, suggesting to users how they should interact with it. These prompts invite users to behave in certain ways and not others. For example, Facebook makes it easier for you to respond to an article with feelings than with thought. How? All you need to do to ‘like’ a post is click a button, but typing out a thought requires work.
Worse, Facebook doesn’t require you to read an article at all before you respond. It encourages quick, emotional, instinctive reactions and discourages slow thinking (through features like automatic updates to feeds and infinite scroll).
These affordances are the water we swim in when we’re using technology. As users, we need to be aware of them, but we also need to be mindful of how they can affect purpose.
Technology isn’t just a tool, it’s loaded with values, invitations and ethical judgements. If organisations don’t know what kind of ethical judgements are in the tools they’re using, they shouldn’t be surprised when they end up building something they don’t like.
Can robots solve our aged care crisis?
Opinion + Analysis | Business + Leadership | Health + Wellbeing | Science + Technology
BY Fiona Smith 21 MAR 2019
Would you trust a robot to look after the people who brought you into this world?
While most of us would want our parents and grandparents to have the attention of a kindly human when they need assistance, we may have to make do with technology.
The reason is: there are just not enough flesh-and-blood carers around. We have more seniors entering aged care than ever before, living longer and with complex needs, and we cannot adequately staff our aged care facilities.
The percentage of the Australian population aged over 85 is expected to double by 2066 and the aged care workforce would need to increase between two and three times before 2050 to provide care.
The looming dilemma
With aged care workers among the worst-paid in our society, there is no hope of filling that kind of demand. The Royal Commission into Aged Care Quality and Safety is now underway and we are facing a year of revelations about the impacts of understaffing, underfunding and inadequate training.
Some of the complaints already aired in the commission include unacceptably high rates of malnutrition among residents, lack of individualised care and cost-cutting that results in rationing necessities such as incontinence pads.
While the development of “assistance robots” promises to help improve services and the quality of life for those in aged care facilities, there are concerns that technology should not be used as a substitute for human contact.
Connection and interactivity
Human interaction is a critical source of intangible value for the development of human beings, according to Dr Costantino Grasso, Assistant Professor in Law at Coventry University and Global Module Leader for Corporate Governance and Ethics at the University of London.
“Such form of interaction is enjoyed by patients on every occasion in which a nurse interacts with them. The very presence of a human entails the patient value recognising him or her as a unique individual rather than an impersonal entity.
“This cannot be replaced by a robot because of its ‘mechanical’, ‘pre-programmed’ and thus ‘neutral’ way to interact with patients,” Grasso writes in The Corporate Social Responsibility And Business Ethics Blog.
The loss of privacy and autonomy?
An overview of research into this area by Canada’s McMaster University shows older adults worry the use of socially assistive robots may lead to a dehumanised society and a decrease in human contact.
“Also, despite their preference for a robot capable of interacting as a real person, they perceived the relationship with a humanoid robot as counterfeit, a deception,” according to the university.
Older adults also perceived the surveillance function of socially assistive robots as a threat to their autonomy and privacy.
A potential solution to the crisis
The ElliQ, a “home robot” now on the market, is a device that looks like a lamp (with a head that nods and moves) that is voice activated and can be the interface between the owner and their computer or mobile phone.
It can be used to remind people to take their medication or go for a walk, it can read out emails and texts, make phone calls and video calls and its video surveillance camera can trigger calls for assistance if the resident falls or has a medical problem.
The manufacturer, Intuition Robotics, says issues of privacy are sorted out “well in advance”, so that the resident decides whether family or anyone else should be notified about medical matters, such as erratic behaviour.
Despite having a “personality” of a helpful friend (who willingly shoulders the blame for any misunderstandings, such as unclear instructions from the user), it is not humanoid in appearance.
While ElliQ does not pretend to be anything but “technology”, other assistance robots are humanoid in appearance or may take the form of a cuddly animal. There are particular concerns about the use of assistance robots for people who are cognitively impaired, affected by dementia, for instance.
While it is a guiding principle in the artificial intelligence community that the robots should not be deceptive, some have argued that it should not matter if someone with dementia believes their cuddly assistance robot is alive, if it brings them comfort.
Ten tech developments in Aged Care
1. Robotic transport trolleys:
The Lamson RoboCart delivers meals, medication, laundry, waste and supplies.
2. Humanoid companions:
AvatarMind’s iPal is a constant companion that supplements personal care services and provides security with alerts for many medical emergencies such as falling down. Zora, a robot the size of a big doll, is overseen by a nurse with a laptop. Researchers in Australia found that it improved the mood of some patients, and got them more involved in activities, but required significant technical support.
3. Emotional support:
Paro is an interactive robotic baby seal that responds to touch, noise, light and temperature by moving its head and legs or making sounds. The robot has helped to improve the mood of its users, as well as offering some relief from the strains of anxiety and depression. It is used in Australia by RSL LifeCare.
4. Memory recovery:
Dthera Sciences has built a therapy that uses music and images to help patients recover memories. It analyses facial expressions to monitor the emotional impact on patients.
5. Korongee village:
This is a $25 million Tasmanian facility for people with dementia, comprising 15 homes set within a small town context, with streets, a supermarket, cinema, café, beauty salon and gardens. It was inspired by the dementia village of De Hogeweyk in the Netherlands, where residents have been found to live longer, eat better, and take fewer medications.
6. Pub for people with dementia:
Derwen Ward, part of Cefn Coed Hospital in Wales, opened the Derwen Arms last year to provide residents with a safe, but familiar, environment. The pub serves (non-alcoholic) beer, and has a pool table, and a dart board.
7. Pain detection:
PainChek is facial recognition software that can detect pain in the elderly and people living with dementia. The tool has provided a significant improvement in data handling and simplification of reporting.
8. Providing sight:
IrisVision involves a Samsung smartphone and a virtual reality (VR) headset to help people with vision impairment see more clearly.
9. Holographic doctors:
Community health provider Silver Chain has been working on technology that uses “holographic doctors” to visit patients in their homes, creating a virtual clinic where healthcare professionals can have access to data and doctors.
10. Robotic suit:
A battery-powered soft exoskeleton helps people walk to restore mobility and independence.