Until last week, you would have been forgiven for thinking a meme couldn’t trigger fears about international security.

But the widespread concerns over FaceApp last week have renewed questions about privacy, data ownership and transparency in the tech sector. Most of the reportage, though, hasn’t gotten to the biggest ethical risk the FaceApp case reveals.

What is FaceApp?

In case you weren’t in the know, FaceApp is a ‘neural transformation filter’.

Basically, it uses AI to take a photo of your face and make it look different. The recent controversy centred on its ability to age people, pretty realistically, from a single photo. Use of the app was widespread, creating a viral trend – there were clicks and engagement to be had, so everyone hopped on board.
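For the technically curious: FaceApp’s actual model is proprietary, but a ‘neural transformation filter’ is, at its core, an image-to-image neural network – photo in, photo out. The Python sketch below is purely illustrative; the toy network is untrained and the file names are made up, so it shows the shape of the pipeline rather than anything FaceApp actually runs.

```python
# A minimal sketch of a "neural transformation filter": an image-to-image
# convolutional network applied to a face photo. The network here is an
# untrained toy, and the file names are hypothetical - this illustrates
# the pipeline (photo -> tensor -> network -> photo), not FaceApp's model.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

class ToyFaceFilter(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder-decoder shape typical of image-to-image models.
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def apply_filter(path_in: str, path_out: str) -> None:
    image = Image.open(path_in).convert("RGB")
    x = transforms.ToTensor()(image).unsqueeze(0)  # (1, 3, H, W), values in [0, 1]
    with torch.no_grad():
        y = ToyFaceFilter()(x)
    transforms.ToPILImage()(y.squeeze(0)).save(path_out)

apply_filter("face.jpg", "face_aged.jpg")  # hypothetical file names
```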

Where does your data go?

With increasing popularity came increasing scrutiny. A number of people soon noticed that FaceApp’s terms of use seemed to give the company sweeping rights to access and use the photos it collected. There were fears the app could access all the photos in your photo stream, not just the one you chose to upload.

There were questions about how you could delete your data from the service. And worst of all for many, the maker of the app, Wireless Lab, is based in Russia. US Senate Minority Leader Chuck Schumer even asked the FBI to investigate the app.

Media commentary has been widespread, suggesting that the app sends data back to Russia, that it lacks transparency about how data will or won’t be used, and that it has no accessible data ethics principles. At least two of those claims are true. There isn’t much in FaceApp’s disclosure that would give a user any sense of confidence in the app’s security or respect for privacy.

Unsurprisingly, this hasn’t amounted to much. Giving away our data in irresponsible ways has become a bit like comfort eating. You know it’s bad, but you’re still going to do it.

The reasons are likely similar to the reasons we indulge other petty vices: the benefits are obvious and immediate; the harms are distant and abstract. And whilst we’d all like to think we’ve got more self-control than the kids in those delayed gratification psychology experiments, more often than not our desire for fun or curiosity trumps any concern we have over how our data is used.

Should you be worried?

Is this a problem? To the extent that this data – easily accessed – can be used for goals we likely don’t support, yes. It also gives rise to complex ethical questions concerning our responsibility.

Let’s say I willingly give my data to FaceApp. This data is then aggregated and on-sold in a data marketplace. A dataset comprising millions of facial photos is then used to train facial recognition AI, which is used to track down political dissidents in Russia. To what extent should I consider myself responsible for political oppression on the other side of the world?
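Part of what makes the hypothetical unsettling is how mundane the middle step is. The sketch below is the generic fine-tuning recipe for turning a labelled pile of face photos into a recogniser – not anything FaceApp does, and the folder name and identity labels are invented for illustration.

```python
# A sketch of how aggregated face photos could become facial recognition AI:
# fine-tune a stock image classifier so each output class is one identity.
# Purely illustrative - the dataset layout and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of face crops, one subfolder per person.
faces = datasets.ImageFolder("scraped_faces/", transform=transform)
loader = torch.utils.data.DataLoader(faces, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(faces.classes))  # one class per identity

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, identities in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), identities)
    loss.backward()
    optimizer.step()
```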

In climate change ethics, there is a school of thought that suggests even if our actions can’t change an outcome – for instance, by making a meaningful reduction to emissions – we still have a moral obligation not to make the problem worse.

It might be true that the dataset would still be on-sold without our input, but that alone doesn’t justify adding our information, or throwing up our hands and giving up. In this hypothetical, giving up – or not caring – means accepting my (admittedly small) role in human rights violations and political injustice.

A troubling peek into the future

In reality, it’s very unlikely that’s what FaceApp is actually doing with your data. Far more likely, according to the MIT Technology Review, your face is being used to train FaceApp to get even better at what it does.

It might use your face to help improve software that analyses faces to determine age and gender. Or it might be used – perhaps most scarily – to train AI to create deepfakes or faces of people who don’t exist. All of this is a far cry from the nightmare scenario sketched out above.
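If ‘faces of people who don’t exist’ sounds like science fiction, the underlying recipe – a generative adversarial network – is surprisingly compact. Here’s a deliberately toy sketch on stand-in data; real face generators are vastly bigger, but the training loop has the same shape.

```python
# A minimal GAN sketch - the technique behind "faces of people who don't
# exist". A generator learns to turn random noise into images the
# discriminator can't tell apart from real photos. Toy-sized, with
# stand-in data in place of a real face dataset.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in for a batch of real face photos, scaled to match the Tanh range.
real_photos = torch.rand(32, 28 * 28) * 2 - 1

for step in range(100):
    # Train the discriminator: real photos -> 1, generated fakes -> 0.
    noise = torch.randn(32, 64)
    fakes = G(noise).detach()
    d_loss = loss_fn(D(real_photos), torch.ones(32, 1)) + \
             loss_fn(D(fakes), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: make fakes the discriminator scores as real.
    g_loss = loss_fn(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```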

But even if my horror story was accurate, would it matter? It seems unlikely.

By the time tech journalists were talking about the potential data issues with FaceApp, millions had already uploaded their photos to the app. The ship had sailed, and it set off with barely a question asked of it. It’s also likely that plenty of people read about the data issues and then installed the app just to see what all the fuss was about.

Who is responsible?

I’m pulled in two directions when I wonder who we should hold responsible here. Of course, designers are clever, and they intentionally build their apps to be smooth and easy to use. They eliminate the friction points that would otherwise prompt serious thinking and reflection.

But that speed and efficiency is partly there because we want it to be there. We don’t want to actually read the terms of use agreement, and companies willingly give us a quick way to avoid doing so (whilst letting us claim, falsely, that we have).

This is a Faustian pact – we let tech companies sell us stuff that’s potentially bad for us, so long as it’s fun.

The important reflection around FaceApp isn’t that the Russians are coming for us – a view that, as Kaitlyn Tiffany noted for Vox, smacks slightly of racism and xenophobia. The reflection is how easily we give up our principled commitments to ethics, privacy and mindful use of technology as soon as someone flashes some viral content at us.

In Ethical by Design: Principles for Good Technology, Simon Longstaff and I made the point that technology isn’t just a thing we build and use. It’s a worldview. When we see the world technologically, our central values are things like efficiency, effectiveness and control. That is, we’re more interested in how we do things than in what we’re doing.

Two sides of the story

For me, that’s the FaceApp story. The question wasn’t ‘is this app safe to use?’ (probably no less so than most other photo apps), but ‘how much fun will I have?’ It’s a worldview where we’re happy to pay any price for our kicks, so long as that price is hidden from us. FaceApp might not have used this impulse for maniacal ends, but it has demonstrated a pretty clear vulnerability.

Is this how the world ends, not with a bang, but with a chuckle and a hashtag?