How to help your kid flex their ethical muscle

Kids can be cruel. When they are arbitrarily mean to their own friends, ethical reflection can help. Victoria Whitaker talks us through building your child’s ethical muscle in testing times.

My daughter came home to me in tears last night. She shuddered, eyes wet, and a waterfall connected her nose to her mouth as she explained to me that her best friend had decided she didn’t like her anymore and she was no longer allowed to play in their small group of friends.

“Mummy, I am so sad. Who am I going to play with? Why doesn’t she like me?”

It’s cruelty at its peak. Of course, there is no reason. It’s a power play that seems to happen far too early.

“I know”, she said. “Can I invite her over to play? Then she will like me again”.

What do I do? Maybe having her over will help. But who wants friends like that?

I want my daughter to know her worth. I wanted her to consider this dilemma through the different ethical lenses. We talked.

I asked her to think about the consequences of inviting over the girl she considered her best friend. Yes, you might reconnect. But she might also learn that she can push you around. And is this how you want your friends to treat you? Will you let all your friends do this? These questions relate to consequentialism, a mode of ethical thought that considers outcomes and consequences.

I also asked her what rights she had in this friendship. What could she expect of a friend? And what duties do we have to our friends – in all friendships, not just this one? These questions relate to deontology, an ethical theory that prioritises our promises, codes and rules over outcomes.

I asked her about the types of relationships she wants. Which relationships were most important to her, and why? These questions relate to an ethics of care, an approach that centres our relationships and our responsibilities to the people within them.

I asked her what sort of friend she would like to be. She told me she liked to have fun, to explore and play together, and valued being kind and caring. This question relates to virtue ethics, a type of thinking that values character and the type of person we aspire to be.

I asked her about the purpose of friendship and why it existed. What things were important about friendship to her? These questions relate to teleology, an ethical theory that considers the purpose of things.

And then, reflecting on all of these questions, we discussed whether her friend was actually the friend she wanted. We talked about whether this little girl had the qualities she wanted from friendship, and about her other friends and which ones did have those qualities. We also discussed what type of friend my daughter wanted to be… what sort of person she wanted to be. You don’t need a degree in ethics to have these conversations with your kids. We are all more expert in this stuff than we give ourselves credit for – our children too.

Ethics isn’t just thinking and talking. It requires action. My daughter and I discussed what steps she could take next. She was still keen to be friends, but her view of the friendship had changed. Her view of herself had changed too. And as such the friendship would change – and we discussed how that was okay.

This morning as we packed her bag and got her ready for the walk to school, the world didn’t seem as heavy as it was last night. And she seemed to carry herself just a little bit taller.


Ethics Explainer: The Turing Test

Much was made of a recent video of Duplex – Google’s talking AI – calling up a hair salon to book an appointment. The AI’s way of speaking was uncannily human, even pausing at moments to say “um”.

Some suggested Duplex had managed to pass the Turing test, a standard for machine intelligence developed by Alan Turing in the middle of the 20th century. But what exactly is the story behind this test, and why are people still using it to judge the success of cutting-edge algorithms?

Mechanical brains and emotional humans

In the late 1940s, when the first digital computers had just been built, a debate took place about whether these new “universal machines” could think. While pioneering computer scientists like Alan Turing and John von Neumann believed that their machines were “mechanical brains”, others felt that there was an essential difference between human thought and computer calculation.

Sir Geoffrey Jefferson, a prominent brain surgeon of the time, argued that while a computer could simulate intelligence, it would always be lacking:

“No mechanism could feel … pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or miserable when it cannot get what it wants.”

In a radio interview a few weeks later, Turing responded to Jefferson’s claim by arguing that as computers become more intelligent, people like him would take a “grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine.”

The following year, Turing wrote a paper called ‘Computing Machinery and Intelligence’ in which he devised a simple method by which to test whether machines can think.

The test proposed a situation in which a human judge talks to both a computer and a human through a screen. The judge cannot see either of them but can put questions to them in text. Based on the answers alone, the judge has to determine which is which. If the computer can fool 30 percent of judges into believing it is human, it is said to have passed the test.

Turing intended the test to be a conversation stopper: a way of preventing endless metaphysical speculation about the essence of our humanity by positing that intelligence is just a type of behaviour, not an internal quality. In other words, intelligence is as intelligence does, regardless of whether it is done by a machine or a human.

Does Google Duplex pass?

Well, yes and no. In Google’s video, it is obvious that the person taking the call believes they are talking to a human. So, it satisfies this criterion. But an important feature of Turing’s original test was that, to pass, the computer had to be able to speak convincingly about all topics, not just one.

In fact, in Turing’s paper he plays out an imaginary conversation between an advanced future computer and a human judge, with the judge asking questions and the computer providing answers:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.
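
(A careful reader will notice that the machine’s arithmetic is wrong: 34957 added to 70764 is 105721, not 105621. The 30-second pause and the slip are often read as Turing hinting that a machine hoping to pass as human would have to imitate human fallibility, not just human competence.)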

The point Turing is making here is that a truly smart machine has to show general intelligence across the many different areas of human interest. As it stands, Google’s Duplex is good within the limited domain of booking an appointment but would probably not be able to do much beyond this unless reprogrammed.

The boundaries around the human

While Turing intended his test to be a conversation stopper for questions of machine intelligence, it has had the opposite effect, fuelling half a century of debate about what the test means, whether it is a good measure of intelligence, and whether it should still be used as a standard.

Over time, most experts have come to agree that the Turing test is not a good way to prove machine intelligence, because the constraints of the test can easily be gamed – as was the case with the chatbot Eugene Goostman, which allegedly passed the test in 2014.

But the Turing test is nevertheless still considered a powerful philosophical tool for re-evaluating the boundaries around what we consider normal and human. In his own time, Turing used his test to demonstrate that people like Jefferson would never be willing to accept a machine as intelligent – not because it couldn’t act intelligently, but because it wasn’t “like us”.

Turing’s desire to test the boundaries of what was considered “normal” in his time perhaps sprang from his own experience as a gay man. Despite being a war hero, he was persecuted for his homosexuality and convicted in 1952 for sleeping with another man. He was punished with chemical castration and eventually took his own life.

During those final years, machine intelligence and his own sexuality became intertwined in Turing’s mind. He was concerned that the same bigotry and fear that had hounded his life would ruin future relationships between humans and intelligent computers. A year before he took his life, he wrote the following letter to a friend:

“I’m afraid that the following syllogism may be used by some in the future.

Turing believes machines think

Turing lies with men

Therefore machines do not think

– Yours in distress,

Alan”