Sexbots

The sexbots and robo-soldiers we’re creating today take Blade Runner and Westworld out of the science fiction genre. Kym Middleton looks at what those texts reveal about how we should treat humanlike robots.

It’s certain: lifelike humanoid robots are on the way.

With guarantees of Terminator-esque soldiers by 2050, we can no longer relegate lifelike robots to science fiction. Add this to everyday artificial intelligence like Apple’s Siri, Amazon’s Alexa and Google Home and it’s easy to see an android future.

The porn industry could beat the arms trade to it. Realistic-looking sex robots are being developed with the same AI technology that remembers what pizza you like to order – although they’re years away from being indistinguishable from people, as this CNET interview with sexbot Harmony shows.

Like the replicants of Blade Runner we first met in 1982 and the robot “hosts” of HBO’s remake of the 1973 film Westworld, these androids we’re making require us to answer a big ethical question. How are we to treat walking, talking robots that are capable of reasoning and that look just like people?

Can they suffer?

If we apply the thinking of Australian philosopher Peter Singer to the question of how we treat androids, the answer lies in their capacity to suffer. In making his case for the ethical consideration of animals, Singer quotes Jeremy Bentham:

“The question is not, Can they reason? nor Can they talk? but, Can they suffer?”

An artificially intelligent, humanlike robot that walks, talks and reasons is just that – artificial. It will be designed to mimic suffering. Take away the genuine experience of physical and emotional pain and pleasure and we have an inanimate thing that only looks like a person (although ‘inanimate’ doesn’t seem an entirely appropriate word for a lifelike robot).

We’re already starting to see the first androids like this. They are, at this point, basically smartphones in the form of human beings. I don’t know about you, but I don’t anthropomorphise my phone. Putting aside wastefulness, it’s easy to make the case you should be able to smash it up if you want.

But can you (spoiler) sit comfortably and watch the human-shaped robot Dolores Abernathy be beaten, dragged away and raped by the Man in Black in Westworld without having an empathetic reaction? She screams and kicks and cries like any person in trauma would. Even if robot Dolores can’t experience distress and suffering, she certainly appears to. The robot is wired to display pain and viewers are wired to have a strong emotional reaction to such a scene. And most of us will – to an actress, playing a robot, in a fictional TV series.

Let’s move back to reality. Let’s face it, some people will want to do bad things to commercially available robots – especially sexbots. That’s the whole premise of the Westworld theme park, a now not-so-sci-fi setting where people can act out sexual, violent and psychological fantasies on android subjects without consequences. Are you okay with that becoming reality? What if the robots looked like children?

The virtue ethicist’s approach to human behaviour is to act with an ideal character, to do right because that’s what good people do. In time, doing the virtuous thing becomes habit, a natural default position, because you internalise it. The virtue ethicist is not going to be okay with the Man in Black’s treatment of Dolores. Good people don’t have dark fantasies to act out on fake humans.

The utilitarian approach to ethical decisions depends on what results in the most good for the largest number of people. Making androids available for abuse could be argued to serve community safety. If dark desires can be satiated with robots, actual assaults on people could decline. (In presenting this argument, I’m not suggesting it is scientifically proven or that it’s my view.) This logic has led to debates on whether virtual child pornography should be tolerated.

The deontologist, on the other hand, is a rule follower, so unless androids have legal protections or childlike sexbots are banned in their jurisdiction, they are unlikely to hold a person who mistreats one in low regard. If it’s your property, do whatever you’re allowed to do with it.

Consciousness

Of course (another spoiler), the robots of Westworld and Blade Runner are conscious. They think and feel, and many believe themselves to be human. They experience real anguish. Singer’s case for the ethical treatment of animals relies on this sentience and can be applied here.

But can we create conscious beings – deliberately or unwittingly? If we really do design a new intelligent android species, complete with emotions and desires that motivate them to act for themselves, then give them the capacity to suffer and make conscientious choices, we have a strong case for affording robot rights.

This is not exactly something we’re comfortable with. Animals don’t enjoy anything remotely close to human rights. It is difficult to imagine us treating man-made machines with the same level of respect we demand for ourselves.

Why even AI?

As is often the case with matters of the future, humanlike robots bring up all sorts of fascinating ethical questions. Today they’re no longer fun hypotheticals. These are important matters we need to work out.

Let’s assume for now we can’t develop the free-thinking and feeling replicants of Blade Runner and hosts of Westworld. We still have to consider how our creation and treatment of androids reflects on us. What purpose – other than sexbots and soldiers – will we make them for? What features will we design into a robot that is so lifelike it masterfully mimics a human? Can we avoid designing our own biases into these new humanoids? How will they impact our behaviour? How will they change our workplaces and societies? How do we prevent them from being exploited for terrible things?

Maybe Elon Musk is right to be cautious about AI. But if we are “summoning the demon”, it’s the one inside us that will be the cause of our unease.
