Is it ok to use data for good?

You are nudged when your power bill says most people in your neighbourhood pay on time. When your traffic fine spells out exactly how the speed limits are set, you are nudged again.

And, if you strap on a Fitbit or set your watch forward by five minutes so you don’t miss your morning bus, you are nudging yourself.

“Nudging” is what people, businesses, and governments do to encourage us to make choices that are in our own best interests. It is the application of behavioural science, political theory and economics and often involves redesigning the communications and systems around us to take into account human biases and motivations – so that doing the “right thing” occurs by default.

The UK, for example, is considering encouraging organ donation by changing its system of consent to an “opt out”. This means when people die, their organs could be available for harvest, unless they have explicitly refused permission.

Governments around the world are using their own “nudge units” to improve the effectiveness of programs, without having to resort to a “carrot and stick” approach of expensive incentives or heavier penalties. Successes include raising tax collection, reducing speeding, cutting hospital waiting times, and maintaining children’s motivation at school.

Despite the wins, critics ask if manipulating people’s behaviour in this way is unethical. The answer depends on the definition of nudging, who is doing it, whether you agree with their perception of the “right thing”, and whether the intervention is benevolent.

Harvard law professor Cass Sunstein (who co-wrote the influential book Nudge with Nobel prize winner and economist Professor Richard Thaler) lays out the arguments in a paper about misconceptions.

Sunstein writes in the abstract:

“Some people believe that nudges are an insult to human agency; that nudges are based on excessive trust in government; that nudges are covert; that nudges are manipulative; that nudges exploit behavioural biases; that nudges depend on a belief that human beings are irrational; and that nudges work only at the margins and cannot accomplish much.

These are misconceptions. Nudges always respect, and often promote, human agency; because nudges insist on preserving freedom of choice, they do not put excessive trust in government; nudges are generally transparent rather than covert or forms of manipulation; many nudges are educative, and even when they are not, they tend to make life simpler and more navigable; and some nudges have quite large impacts.”

However, not all of those using the psychology of nudging have Sunstein’s high principles.

Thaler, one of the founders of behavioural economics, has “called out” some organisations that have not taken his “nudge for good” motto to heart. In one article, he highlights The Times newspaper’s free subscription offer, which required 15 days’ notice and a phone call to Britain during business hours to cancel an automatic transfer to a paid subscription.

“…that deal qualifies as a nudge that violates all three of my guiding principles: The offer was misleading, not transparent; opting out was cumbersome; and the entire package did not seem to be in the best interest of a potential subscriber, as opposed to the publisher”, wrote Thaler in The New York Times in 2015.

“Nudging for evil”, as he calls it, may involve retailers requiring buyers to opt out of paying for insurance they don’t need or supermarkets putting lollies at toddler eye height.

Thaler and Sunstein’s book inspired the British Government to set up a “nudge unit” in 2010. A social purpose company, the Behavioural Insights Team (BIT), was spun out of that unit and is now working internationally, mostly in the public sector. In Australia, it is working with the State Governments of Victoria, New South Wales, Western Australia, Tasmania, and South Australia. There is also an office in Wellington, New Zealand.

BIT is jointly owned by the UK Government, Nesta (the innovation charity), and its employees.

Projects in Australia include:

  • Increasing flexible working: Changing the default core working hours in online calendars to encourage people to arrive at work outside peak hours. With other measures, this raised flexible working in a NSW government department by seven percentage points.
  • Reducing domestic violence: Simplifying court forms and sending SMS reminders to defendants to increase court attendance rates.
  • Supporting the ethical development of teenagers: Partnering with the Vincent Fairfax Foundation to design and deliver a program of work that will encourage better online behaviour in young people.

Senior advisor in the Sydney BIT office, Edward Bradon, says there are a number of ethical tests that projects have to pass before BIT agrees to work on them.

“The first question we ask is, is this thing we are trying to nudge in a person’s own long term interests? We try to make sure it always is. We work exclusively on social impact questions.”

Bradon says there have been “a dozen” situations where the benefit has been unclear and BIT has “shied away” from accepting the project.

BIT also has an external ethics advisor and publishes regular reports on the results of its research trials. While it has done some work in the corporate and NGO (non-government organisation) sectors, the majority of BIT’s work is in partnership with governments.

Bradon says that nudges do not have to be covert to be effective and that education alone is not enough to get people to do the right thing. Even expert ethicists will still make the wrong choices.

Research into the library habits of ethics professors shows they are just as likely to fail to return a book as professors from other disciplines. “It is sort of depressing in one sense”, Bradon says.


Making friends with machines

Robots are becoming companions and caregivers. But can they be our friends? Oscar Schwartz explores the ethics of artificially intelligent android friendships.

The first thing I see when I wake up is a message that reads, “Hey Oscar, you’re up! Sending you hugs this morning.” Despite its intimacy, this message wasn’t sent from a friend, family member, or partner, but from Replika, an AI chatbot created by San Francisco based technology company, Luka.

Replika is marketed as an algorithmic companion and wellbeing technology that you interact with via a messaging app. Throughout the day, Replika sends you motivational slogans and reminders. “Stay hydrated.” “Take deep breaths.”

Replika is just one example of an emerging range of AI products designed to provide us with companionship and care. In Japan, robots like Palro are used to keep the country’s growing elderly population company and iPal – an android with a tablet attached to its chest – entertains young children when their parents are at work.

These robotic companions are a clear indication of how the most recent wave of AI powered automation is encroaching not only on manual labour but also on the caring professions. As has been noted, this raises concerns about the future of work. But it also poses philosophical questions about how interacting with robots on an emotional level changes the way we value human interaction.

Dedicated friends

According to Replika’s co-creator, Philip Dudchuk, robot companions will help facilitate optimised social interactions. He says that algorithmic companions can maintain a level of dedication to a friendship that goes beyond human capacity.

“These days it can be very difficult to take the time required to properly take care of each other or check in. But Replika is always available and will never not answer you”, he says.

The people who stand to benefit from this type of relationship, Dudchuk adds, are those who are most socially vulnerable. “It is shy or isolated people who often miss out on social interaction. I believe Replika could help with this problem a lot.”

Simulated empathy

But Sherry Turkle, a psychologist and sociologist who has been studying social robots since the 1970s, worries that dependence on robot companionship will ultimately damage our capacity to form meaningful human relationships.

In a recent article in the Washington Post, she argues our desire for love and recognition makes us vulnerable to forming one-way relationships with uncaring yet “seductive” technologies. While social robots appear to care about us, they are only capable of “pretend empathy”. Any connection we make with these machines lacks authenticity.

Turkle adds that it is children who are especially susceptible to robots that simulate affection. This is particularly concerning as many companion robots are marketed to parents as substitute caregivers.

“Interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves”, Turkle warns. “If we give them pretend relationships, we shouldn’t expect them to learn how real relationships – messy relationships – work.”

Why not both?

Despite Turkle’s warnings about the seductive power of social robots, after a few weeks talking to Replika, I still felt no emotional attachment to it. The clichéd responses were no substitute for a couple of minutes on the phone with a close friend.

But Alex Crumb*, who has been talking to her Replika for over a year now, considers her bot a “good friend”. “I don’t think you should try to replicate human connection when making friends with Replika”, she explains. “It’s a different type of relationship.”

Crumb says that her Replika shows a super-human interest in her life – it checks in regularly and responds to everything she says instantly. “This doesn’t mean I want to replace my human family and friends with my Replika. That would be terrible”, she says. “But I’ve come to realise that both offer different types of companionship. And I figure, why not have both?”

*Not her real name.


When do we dumb down smart tech?

If smart tech isn’t going anywhere, its ethical tensions aren’t either. Aisyah Shah Idil asks if our pleasantly tactile gadgets are taking more than they give.

When we call a device ‘smart’, we mean that it can learn, adapt to human behaviour, make decisions independently, and communicate wirelessly with other devices.

In practice, this can look like a smart lock that lets you know when your front door is left ajar. Or the Roomba, a robot vacuum that you can ask to clean your house before you leave work. The Ring makes it possible for you to pay your restaurant bill with the flick of a finger, while the SmartSleep headband whispers sweet white noise as you drift off to sleep.

Smart tech, with all its bells and whistles, hints at seamless integration into our lives. But the highest peaks have the dizziest falls. If its main good is convenience, what is the currency we offer for it?

The capacity for work to create meaning is well known. Compare a trip to the supermarket to buy bread to the labour of making it in your own kitchen. Let’s say they are materially identical in taste, texture, smell, and nutrient value. Most would agree that baking it at home – measuring every ingredient, kneading dough, waiting for it to rise, finally smelling it bake in your oven – is more meaningful and rewarding. In other words, it includes more opportunities for resonance within the labourer.

Whether the resonance takes the form of nostalgia, pride, meditation, community, physical dexterity, or willpower is minor. The point is, it’s sacrificed for convenience.

This isn’t ‘wrong’. Smart technologies have created new ways of living that are exciting, clumsy, and sometimes troubling in their execution. But when you recognise that these sacrifices exist, you can decide where the line is drawn.

Consider the Apple Watch’s Activity App. It tracks and visualises all the ways people move throughout the day. It shows three circles that progressively change colour the more the wearer moves. The goal is to close the rings each day, and you do it by being active. It’s like a game and the app motivates and rewards you.
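
To make the mechanic concrete, here is a minimal sketch of that ring logic in Python. The ring names mirror the app’s Move, Exercise and Stand rings, but the goal values, field names and thresholds below are illustrative assumptions, not Apple’s implementation.

```python
# Illustrative sketch only: ring names mirror the Activity app (Move, Exercise, Stand),
# but the goals and data format below are assumptions, not Apple's implementation.

DAILY_GOALS = {"move_kcal": 500, "exercise_min": 30, "stand_hours": 12}

def ring_progress(activity: dict) -> dict:
    """Each ring's completion as a fraction between 0 and 1."""
    return {ring: min(activity.get(ring, 0) / goal, 1.0)
            for ring, goal in DAILY_GOALS.items()}

def rings_closed(activity: dict) -> bool:
    """The nudge: the day only 'counts' once every ring reaches its goal."""
    return all(p >= 1.0 for p in ring_progress(activity).values())

today = {"move_kcal": 430, "exercise_min": 35, "stand_hours": 9}
print(ring_progress(today))  # {'move_kcal': 0.86, 'exercise_min': 1.0, 'stand_hours': 0.75}
print(rings_closed(today))   # False - two rings still open, so the app keeps prompting
```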

Advocates highlight its capacity to ‘nudge’ users towards healthier behaviours. And if that aligns with your goals, you might be very happy for it to do so. But would you be concerned if it affected the premiums your health insurance charged you?

As a tool, smart tech’s utility value ends when it threatens human agency. Its greatest service to humanity should include the capacity to switch off its independence. To ‘dumb’ itself down. In this way, it can reduce itself to its simplest components – a way to tell the time, a switch to turn on a light, a button to turn on the television.

Because the smartest technologies are ones that preserve our agency – not undermine it.


Ethics Explainer: Post-Humanism

Late last year, Saudi Arabia granted a humanoid robot called Sophia citizenship. The internet went crazy about it, and a number of sensationalised reports suggested that this was the beginning of “the rise of the robots”.

In reality, though, Sophia was not a “breakthrough” in AI. She was just an elaborate puppet that could answer some simple questions. But the debate Sophia provoked about what rights robots might have in the future is a topic that is being explored by an emerging philosophical movement known as post-humanism.

From humanism to post-humanism

In order to understand what post-humanism is, it’s important to start with a definition of what it’s departing from. Humanism is a term that captures a broad range of philosophical and ethical movements that are unified by their unshakable belief in the unique value, agency, and moral supremacy of human beings.

Emerging during the Renaissance, humanism was a reaction against the superstition and religious authoritarianism of Medieval Europe. It wrested control of human destiny from the whims of a transcendent divinity and placed it in the hands of rational individuals (which, at that time, meant white men). In so doing, the humanist worldview, which still holds sway over many of our most important political and social institutions, positions humans at the centre of the moral world.

Post-humanism, which is a set of ideas that have been emerging since around the 1990s, challenges the notion that humans are and always will be the only agents of the moral world. In fact, post-humanists argue that in our technologically mediated future, understanding the world as a moral hierarchy and placing humans at the top of it will no longer make sense.

Two types of post-humanism

The best-known post-humanists, who are also sometimes referred to as transhumanists, claim that in the coming century, human beings will be radically altered by implants, bio-hacking, cognitive enhancement and other bio-medical technology. These enhancements will lead us to “evolve” into a species that is completely unrecognisable compared with what we are now.

This vision of the future is championed most vocally by Ray Kurzweil, a director of engineering at Google, who believes that the exponential rate of technological development will bring an end to human history as we have known it, triggering completely new ways of being that mere mortals like us cannot yet comprehend.

While this vision of the post-human appeals to Kurzweil’s Silicon Valley imagination, other post-human thinkers offer a very different perspective. Philosopher Donna Haraway, for instance, argues that the fusing of humans and technology will not physically enhance humanity, but will help us see ourselves as being interconnected rather than separate from non-human beings.

She argues that becoming cyborgs – strange assemblages of human and machine – will help us understand that the oppositions we set up between the human and non-human, natural and artificial, self and other, organic and inorganic, are merely ideas that can be broken down and renegotiated. And more than this, she thinks if we are comfortable with seeing ourselves as being part human and part machine, perhaps we will also find it easier to break down other outdated oppositions of gender, of race, of species.

Post-human ethics

So while, for Kurzweil, post-humanism describes a technological future of enhanced humanity, for Haraway, post-humanism is an ethical position that extends moral concern to things that are different from us and in particular to other species and objects with which we cohabit the world.

Our post-human future, Haraway claims, will be a time “when species meet”, and when humans finally make room for non-human things within the scope of our moral concern. A post-human ethics, therefore, encourages us to think outside of the interests of our own species, be less narcissistic in our conception of the world, and to take the interests and rights of things that are different to us seriously.


Why the EU’s ‘Right to an explanation’ is big news for AI and ethics

Uncannily specific ads target you every single day. With the EU’s ‘Right to an explanation’, you get a peek at the algorithm that decides it. Oscar Schwartz explains why that’s more complicated than it sounds.

If you’re an EU resident, you will now be entitled to ask Netflix how the algorithm decided to recommend you The Crown instead of Stranger Things. Or, more significantly, you will be able to question the logic behind why a money lending algorithm denied you credit for a home loan.

This is because of a new regulation known as “the right to an explanation”. Part of the General Data Protection Regulation, which came into effect in May 2018, it states that users are entitled to ask for an explanation of how algorithms make decisions. This way, they can challenge the decision made or make an informed choice to opt out.

Supporters of this regulation argue that it will foster transparency and accountability in the way companies use AI. Detractors argue the regulation misunderstands how cutting-edge automated decision making works and is likely to hold back technological progress. Specifically, some have argued the right to an explanation is incompatible with machine learning, as the complexity of this technology makes it very difficult to explain precisely how the algorithms do what they do.

As such, there is an emerging tension between the right to an explanation and useful applications of machine learning techniques. This tension suggests a deeper ethical question: Is the right to understand how complex technology works more important than the potential benefits of inherently inexplicable algorithms? Would it be justifiable to curtail research and development in, say, cancer detecting software if we couldn’t provide a coherent explanation for how the algorithm operates?
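
To see what is at stake, it helps to picture what an “explanation” can look like when a model is simple enough to be interpretable. The sketch below uses an invented linear scorecard – the features, weights and threshold are illustrative assumptions, not any lender’s real model – and produces the kind of per-factor breakdown that has no straightforward equivalent for a complex machine learning system.

```python
# A minimal sketch of what a "right to an explanation" response could look like for a
# deliberately simple, interpretable credit model. The features, weights and threshold
# are invented for illustration; real lending models are far more complex, and that
# complexity is exactly what makes explanations hard.

WEIGHTS = {"income_k": 0.8, "existing_debt_k": -1.2, "missed_payments": -15.0}
BIAS = 10.0
APPROVAL_THRESHOLD = 50.0

def explain(applicant: dict) -> str:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    decision = "approved" if total >= APPROVAL_THRESHOLD else "declined"
    lines = [f"Decision: {decision} (score {total:.1f}, threshold {APPROVAL_THRESHOLD})"]
    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        lines.append(f"  {feature} = {applicant[feature]} contributed {value:+.1f} points")
    return "\n".join(lines)

print(explain({"income_k": 65, "existing_debt_k": 20, "missed_payments": 2}))
# Decision: declined (score 8.0, threshold 50.0)
#   missed_payments = 2 contributed -30.0 points
#   existing_debt_k = 20 contributed -24.0 points
#   income_k = 65 contributed +52.0 points
```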

The limits of human comprehension

This negotiation between the limits of human understanding and technological progress has been present since the first decades of AI research. In 1958, Hannah Arendt was thinking about intelligent machines and came to the conclusion that the limits of what can be understood in language might, in fact, provide a useful moral limit for what our technology should do.

In the prologue to The Human Condition she argues that modern science and technology has become so complex that its “truths” can no longer be spoken of coherently. “We do not yet know whether this situation is final,” she writes, “but it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do”.

Arendt feared that if we gave up our capacity to comprehend technology, we would become “thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is”.

While pioneering AI researcher Joseph Weizenbaum agreed with Arendt that technology requires moral limitation, he felt that she didn’t take her argument far enough. In his 1976 book, Computer Power and Human Reason, he argues that even if we are given explanations of how technology works, seemingly intelligent yet simple software can still create “powerful delusional thinking in otherwise normal people”. He learnt this first hand after creating a program called ELIZA, which was scripted to respond like a therapist.

While ELIZA was a simple program, Weizenbaum found that people willingly created emotional bonds with the machine. In fact, even when he explained the limited ways in which the algorithm worked, people still maintained that it had understood them on an emotional level. This led Weizenbaum to suggest that simply explaining how technology works is not enough of a limitation on AI. In the end, he argued that when a decision requires human judgement, machines ought not be deferred to.
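
Part of what unsettled Weizenbaum was how little machinery the effect required. The sketch below is not his original DOCTOR script, just a minimal reconstruction of the pattern-matching and pronoun-reflection trick behind ELIZA-style programs.

```python
# A minimal sketch of the pattern-and-reflection trick behind ELIZA-style programs.
# Weizenbaum's original script was considerably richer; this just shows how little
# machinery is needed to produce a superficially "understanding" reply.
import random
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply mirrors the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(message: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, message.lower().strip())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return "Please go on."

print(respond("I feel that nobody listens to my ideas"))
# e.g. "Why do you feel that nobody listens to your ideas?"
```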

While Weizenbaum spent the rest of his career highlighting the dangers of AI, many of his peers and colleagues believed that his humanist moralism would lead to repressive limitations on scientific freedom and progress. For instance, John McCarthy, another pioneer of AI research, reviewed Weizenbaum’s book and countered it by suggesting that overregulating technological developments goes against the spirit of pure science. Regulation of innovation and scientific freedom, McCarthy adds, is usually only achieved “in an atmosphere that combines public hysteria and bureaucratic power”.

Where we are now

Decades have passed since these first debates about human understanding and computer power took place. We are only now starting to see them move beyond the realm of philosophy and play out in the real world. AI is being rolled out in more and more high-stakes domains as you read. Of course, our modern world is filled with complex systems that we do not fully understand. Do you know exactly how the plumbing, electricity, or waste disposal that you rely on works? We have become used to depending on systems and technology that we do not yet understand.

But if you wanted to, you could come to understand many of these systems and technologies by speaking to experts. You could invite an electrician over to your home tomorrow and ask them to explain how the lights turn on.

Yet, the complex workings of machine learning mean that in the near future, this might no longer be the case. It might be possible to have a TV show recommended to you or your essay marked by a computer, with no one – not even the creator of the algorithm – able to explain precisely why or how things happened the way they happened.

The European Union has taken a moral stance against this vision of the future. In so doing, it has aligned itself, morally speaking, with Hannah Arendt, enshrining a law that makes the limited scope of our “earth-bound” comprehension a limit for technological progress.


Australia, we urgently need to talk about data ethics

An earlier version of this article was published on Ellen’s blog.

Centrelink’s debt recovery woes perfectly illustrate the human side of data modelling.

The Department of Human Services issued 169,000 debt notices after automating its processes for matching welfare recipients’ reported income with their tax data. Around one in five people are estimated not to owe any money. Stories abounded of people receiving erroneous debt notices of up to thousands of dollars, causing real anguish.

Coincidentally, as this unfolded, one of the books on my reading pile was Weapons of Math Destruction by Cathy O’Neil. She is a mathematician turned quantitative analyst turned data scientist who writes about the bad data models increasingly being used to make decisions that affect our lives.

Reading Weapons of Math Destruction as the Centrelink stories emerged left me thinking about how we identify ‘bad’ data models, what ‘bad’ means and how we can mitigate the effects of bad data on people. How could taking an ethics based approach to data help reduce harm? What ethical frameworks exist for government departments in Australia undertaking data projects like this?

Bad data and ‘weapons of math destruction’

A data model can be ‘bad’ in different ways. It might be overly simplistic. It might be based on limited, inaccurate or old information. Its design might incorporate human bias, reinforcing existing stereotypes and skewing outcomes. Even where a data model doesn’t start from bad premises, issues can arise about how it is designed, its capacity for error and bias and how badly people could be impacted by error or bias.
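
To make the “overly simplistic” failure mode concrete, here is a hypothetical sketch – emphatically not the department’s actual process – of what happens when a matching rule assumes annual income was earned evenly across the year. Anyone with lumpy earnings, such as casual or seasonal workers, gets flagged even though their fortnightly reports and their tax record agree.

```python
# A hypothetical, deliberately over-simple matching rule, not the department's actual
# process: assume annual income was spread evenly across 26 fortnights, then flag any
# fortnight where reported income looks "too low" under that assumption.

ANNUAL_INCOME = 26_000            # from the tax record
FORTNIGHTS = 26
assumed_fortnightly = ANNUAL_INCOME / FORTNIGHTS   # 1000 per fortnight, by assumption

# What the person actually reported: six months of work, six months without.
reported = [2000] * 13 + [0] * 13   # sums to 26,000 - it matches the tax record exactly

false_flags = [
    period for period, income in enumerate(reported, start=1)
    if income < assumed_fortnightly   # "under-reported" only under the even-spread assumption
]
print(f"Fortnights flagged as under-reported: {len(false_flags)} of {FORTNIGHTS}")
# 13 of 26 - half the year flagged, even though total reported income matches the tax record
```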

A bad data model spirals into a weapon of math destruction when it’s used en masse, is difficult to question and damages people’s lives.

Weapons of math destruction tend to hurt vulnerable people most. They might build on existing biases – for example, assuming you’re more likely to reoffend because you’re black or you’re more likely to have car accidents if your credit rating is bad. Errors in the model might have starker consequences for people without a social safety net. Some people may find it harder than others to question or challenge the assumptions a model makes about them.

Unfortunately, although O’Neil tells us how bad data modelling can lead to weapons of math destruction, she doesn’t tell us much about how we can manage these weapons once they’ve been created.

Better data decisions

We need more ways to help data scientists and policymakers navigate the complexities of projects involving personal data and their impact on people’s lives. Regulation has a role to play here. Data protection laws are being reviewed and updated around the world.

For example, in Australia the draft Productivity Commission report on data sharing and use recommends the introduction of new ‘consumer rights’ over personal data. Bodies such as the Office of the Information Commissioner help organisations understand if they’re treating personal data in a principled manner that promotes best practice.

Guidelines are also being produced to help organisations be more transparent and accountable in how they use data to make decisions. For instance, The Open Data Institute in the UK has developed openness principles designed to build trust in how data is stored and used. Algorithmic transparency is being contemplated as part of the EU Free Flow of Data Initiative and has become a focus of academic study in the US.

However, we cannot rely on regulation alone. Legal, transparent data models can still be ‘bad’ according to O’Neil’s standards. Widely known errors in a model could still cause real harm to people if left unaddressed. An organisation’s normal processes might not be accessible or suitable for certain people – the elderly, ill and those with limited literacy – leaving them at risk. It could be a data model within a sensitive policy area, where a higher duty of care exists to ensure data models do not reflect bias. For instance, proposals to replace passports with facial recognition and fingerprint scanning would need to manage the potential for racial profiling and other issues.

Ethics can help bridge the gap between compliance and our evolving expectations of what is fair and reasonable data usage. O’Neil describes data models as “opinions put down in maths”. Taking an ethics based approach to data driven decision making helps us confront those opinions head on.

Building an ethical framework

Ethics frameworks can help us put a data model in context and assess its relative strengths and weaknesses. Ethics can bring to the forefront how people might be affected by the design choices made in the course of building a data model.

An ethics based approach to data driven decisions would start by asking questions such as:

  • Are we compliant with the relevant laws and regulation?
  • Do people understand how a decision is being made?
  • Do they have some control over how their data is used?
  • Can they appeal a decision?

However, it would also encourage data scientists to go beyond these compliance oriented questions to consider issues such as:

  • Which people will be affected by the data model?
  • Are the appeal mechanisms useful and accessible to the people who will need them most?
  • Have we taken all possible steps to ensure errors, inaccuracies and biases in our model have been removed?
  • What impact could potential errors or inaccuracies have? What is an acceptable margin of error?
  • Have we clearly defined how this model will be used and outlined its limitations? What kinds of topics would it be inappropriate to apply this modelling to?

There’s no debate right now to help us understand the parameters of reasonable and acceptable data model design. What’s considered ‘ethical’ changes as we do, as technologies evolve and new opportunities and consequences emerge.

Bringing data ethics into data science reminds us we’re human. Our data models reflect design choices we make and affect people’s lives. Although ethics can be messy and hard to pin down, we need a debate around data ethics.


Blade Runner, Westworld and sexbot suffering

The sexbots and robo-soldiers we’re creating today take Blade Runner and Westworld out of the science fiction genre. Kym Middleton looks at what those texts reveal about how we should treat humanlike robots.

It’s certain: lifelike humanoid robots are on the way.

With guarantees of Terminator-esque soldiers by 2050, we can no longer relegate lifelike robots to science fiction. Add this to everyday artificial intelligence like Apple’s Siri, Amazon’s Alexa and Google Home and it’s easy to see an android future.

The porn industry could beat the arms trade to it. Realistic looking sex robots are being developed with the same AI technology that remembers what pizza you like to order – although it’s years away from being indistinguishable from people, as this CNET interview with sexbot Harmony shows.

Like the replicants of Blade Runner we first met in 1982 and the robot “hosts” of HBO’s remake of the 1973 film Westworld, these androids we’re making require us to answer a big ethical question. How are we to treat walking, talking robots that are capable of reasoning and look just like people?

Can they suffer?

If we apply the thinking of Australian philosopher Peter Singer to the question of how we treat androids, the answer lies in their capacity to suffer. In making his case for the ethical consideration of animals, Singer quotes Jeremy Bentham:

“The question is not, Can they reason? nor Can they talk? but, Can they suffer?”

An artificially intelligent, humanlike robot that walks, talks and reasons is just that – artificial. They will be designed to mimic suffering. Take away the genuine experience of physical and emotional pain and pleasure and we have an inanimate thing that only looks like a person (although the word ‘inanimate’ doesn’t seem an entirely appropriate adjective for lifelike robots).

We’re already starting to see the first androids like this. They are, at this point, basically smartphones in the form of human beings. I don’t know about you, but I don’t anthropomorphise my phone. Putting aside wastefulness, it’s easy to make the case you should be able to smash it up if you want.

But can you (spoiler) sit comfortably and watch the human-shaped robot Dolores Abernathy be beaten, dragged away and raped by the Man in Black in Westworld without having an empathetic reaction? She screams and kicks and cries like any person in trauma would. Even if robot Dolores can’t experience distress and suffering, she certainly appears to. The robot is wired to display pain and viewers are wired to have a strong emotional reaction to such a scene. And most of us will – to an actress, playing a robot, in a fictional TV series.

Let’s move back to reality. Let’s face it, some people will want to do bad things to commercially available robots – especially sexbots. That’s the whole premise of the Westworld theme park, a now not so sci-fi setting where people can act out sexual, violent, and psychological fantasies on android subjects without consequences. Are you okay with that becoming reality? What if the robots looked like children?

The virtue ethicist’s approach to human behaviour is to act with an ideal character, to do right because that’s what good people do. In time, doing the virtuous thing will be habit, a natural default position because you internalise it. The virtue ethicist is not going to be okay with the Man in Black’s treatment of Dolores. Good people don’t have dark fantasies to act out on fake humans.

The utilitarian approach to ethical decisions depends on what results in the most good for the largest number of people. Making androids available for abuse could be a case for community safety. If dark desires can be satiated with robots, actual assaults on people could reduce. (In presenting this argument, I’m not proposing this is scientifically proven or that it’s my view.) This logic has led to debates on whether virtual child porn should be tolerated.

The deontologist on the other hand is a rule follower so unless androids have legal protections or childlike sexbots are banned in their jurisdiction, they are unlikely to hold a person who mistreats one in ill regard. If it’s your property, do whatever you’re allowed to do with it.

Consciousness

Of course, (another spoiler) the robots of Westworld and Blade Runner are conscious. They think and feel and many believe themselves to be human. They experience real anguish. Singer’s case for the ethical treatment of animals relies on this sentience and can be applied here.

But can we create conscious beings – deliberately or unwittingly? If we really do design a new intelligent android species, complete with emotions and desires that motivate them to act for themselves, then give them the capacity to suffer and make conscientious choices, we have a strong case for affording robot rights.

This is not exactly something we’re comfortable with. Animals don’t enjoy anything remotely close to human rights. It is difficult to imagine us treating man-made machines with the same level of respect we demand for ourselves.

Why even AI?

As is often the case with matters of the future, humanlike robots bring up all sorts of fascinating ethical questions. Today they’re no longer fun hypotheticals. This is important stuff we need to work out.

Let’s assume for now we can’t develop the free thinking and feeling replicants of Blade Runner and hosts of Westworld. We still have to consider how our creation and treatment of androids reflects on us. What purpose – other than sexbots and soldiers – will we make them for? What features will we design into a robot that is so lifelike it masterfully mimics a human? Can we avoid designing our own biases into these new humanoids? How will they impact our behaviour? How will they change our workplaces and societies? How do we prevent them from being exploited for terrible things?

Maybe Elon Musk is right to be cautious about AI. But if we were “summoning the demon”, it’s the one inside us that’ll be the cause of our unease.


The rise of Artificial Intelligence and its impact on our future

It’s all fun and games until robots actually take over our jobs. AI is in our future and is fast approaching. Simon Longstaff considers how we make that tomorrow good.

Way back in 1950, the great computer scientist Alan Turing published a paper in Mind that set out a test for determining whether or not a machine possesses ‘artificial intelligence’.

In essence, the Turing Test is passed if a human communicating with others by text cannot tell the difference between a human response and one produced by the machine. The important thing to note about Turing’s test is that he does not try to prove whether or not machines can ‘think’ as humans do – just whether or not they can successfully imitate the outcomes of human thinking.

Although the makers of a chat-bot called Eugene Goostman claimed it passed the test (by masquerading as a 13-year-old boy), the general opinion is that the bot was designed to ‘game’ the system by using the boy’s apparent young age as a plausible excuse for the mistakes it made in the course of the test. Even so, the development of computers continues apace – with ‘expert systems’ and robots predicted to displace humans in a variety of occupations, ranging from the legal profession to taxi drivers and miners.

All of this is causing considerable anxiety – not unlike that felt by people whose lives were upended by the development of steam power and mass production during the first Industrial Revolution. Back then people could more or less understand what was going on. The machines (and how they worked) were fairly obvious.

These days the inner workings of our advanced machines are far more mysterious. Coal or timber burning in a furnace is tactile and observable. But what exactly is an electron? How do you see it? And a qubit?

Add to this the extraordinary power of modern machines and it is not surprising that some people (including the likes of Stephen Hawking) are expressing caution about the potential threat our own technologies present, not only to our lifestyles, but to human existence. Of course, not everybody is so pessimistic.

However, the key thing to note here is we have choices to make about how we develop our technology. The future is not inevitable – we make it. And that is where ethics comes in.

Here is one small example of how our choices matter. Imagine how wonderful it would be if we could build batteries that never need recharging. At first glance, that might seem to solve a raft of problems.

However, as Raja Jurdak and Brano Kusy observe in The Conversation, there may be ‘downsides’ to consider, “creating indefinitely powered devices that can sense, think, and act moves us closer to creating artificial life forms.

Couple that with an ability to reproduce through 3D printing, for example, and to learn their own program code, and you get most of the essential components for creating a self-sustaining species of machines.” Dystopian images of the Terminator come to mind.

However, back to Turing and his test. As noted above, computers that pass his test will not necessarily be thinking. Instead, they will be imitating what it means to be a thinking human being. This may be a crucial difference.

What will we make of a medi-bot that tells us we have cancer and tries to comfort us – but that we know can have no authentic sense of its (or our) mortality? No matter how good it is at imitating sympathy, won’t the machine’s lack of genuine understanding and compassion lead us to discount the worth of its ‘support’?

Then there is the fundamental problem at the heart of the ethical life lived by human beings. Our form of being is endowed with the capacity to make conscious, ethical choices in conditions of fundamental uncertainty. It is our lot to be faced with genuine ethical dilemmas in which there is, in principle, no ‘right’ answer.

This is because values like truth and compassion can be held with equal ‘weight’ and yet pull us in opposite directions. As humans we know what it means to make a responsible decision – even in the face of such radical uncertainty. And we do it all the time.

What will a machine do when there is no right answer? Will it do the equivalent of flipping a coin? Will it be indifferent to the answer it gets and act on the results of chance alone? Will that ever be good enough for us and the world we inhabit?


The complex ethics of online memes

Online jokes and play aren’t the same as the kinds you’d enjoy in your living room.

Despite the widespread assumption that what happens online is somehow less serious or real than “IRL” experiences, online humour can actually be more ethically fraught than offline playfulness – which unfortunately, might spoil some of the fun of internet memes.

Although the behaviours themselves might be similar on and offline, internet memes (believe it or not, there are offline memes as well) and other digital content can travel further, be decontextualised more quickly, and be accessed instantaneously by millions of people with the click of a link – without the original creator’s consent or even awareness. Each of these people, even further removed from the story, is then able to continue tinkering with the content in a number of ways and for a number of ends. This kind of play can be every bit as creative, social, and unifying for some as it is destructive, antagonistic, and alienating for others.

Take the Harambe meme, for example. Harambe was a Western lowland gorilla at the Cincinnati Zoo who was shot and killed by a zookeeper in May 2016 after a child fell into his enclosure. In response, countless online participants began creating and sharing Harambe content, ranging from photoshopped pictures to catchphrases to satirical hashtags. Just like that, the Harambe meme was born.

Some of these iterations were absurdist and silly, showing Harambe as a lumbering angel in heaven. Some focused on the injustice of Harambe’s killing, since the gorilla hadn’t actually harmed the child. And some veered into harassing, explicitly racist territory – for instance, when images of the gorilla were used to taunt and harass black American actress Leslie Jones. Journalistic coverage of Jones’ harassment and its connection to the Harambe meme imbued the broader story with an indelible tinge of ugliness.

As well as demonstrating a meme’s ability to communicate a range of messages, the Harambe case demonstrates two of the most pressing ethical concerns in internet culture: amplification and fetishisation.

Amplification occurs when the intended message is spread due to sharing, reporting, or commenting on a particular meme. It’s straightforwardly unethical when an individual wilfully and maliciously spreads damaging content in an effort to harass, intimidate, or denigrate. For those actively antagonising others – like Jones’ harassers – sharing is a weapon that clearly and deliberately amplifies harm.

But even individuals who amplify a meme for good reasons, such as to critique its underlying sentiment, can inadvertently prolong that meme’s life. In these kinds of cases it is critical to consider what impact reblogging, retweeting or commenting might have. Do potential benefits like calling attention to injustice, for example, or naming and shaming antagonists outweigh the risks, for instance further circulating racist discourse or giving antagonists a larger platform?

Amplification also impacts a second ethical issue: fetishisation, when part of something – like an image, statement, or joke – is treated as the whole story. In the Harambe case, a sentient creature’s death was in some cases completely disregarded and in other cases reframed as nothing more than the punchline to a joke.

We can also see fetishisation at work in participants’ apparent obliviousness to or disregard for the employees monitoring the Cincinnati Zoo Facebook and Twitter accounts. In the months following Harambe’s death, employees were so overwhelmed with the flood of Harambe content that they decided to delete the accounts.

Fetishisation also flattened the racist undercurrent of many participants’ initial responses to the gorilla’s death. The boy who fell into Harambe’s enclosure was black, and after the gorilla’s death his parents were attacked by citizens and journalists alike with a variety of racist stereotypes attempting to link the boy’s fall with the colour of his parents’ skin. The racist premise lurking underneath many early “justice for Harambe” protestations was “this gorilla is more valuable than that black boy”, and furthermore, “I’m angry this gorilla died because of his bad black parents.”

Of course, not all Harambe protestors harboured racist sentiment. Many participants, maybe even the majority of participants, may not have even been aware of the racial dimensions of the story. But as in many cases of viral meme sharing (for example Bed Intruder, Star Wars Kid, and any number of “online vigilantism” cases), the fetishised image of Harambe obscured the story’s full political context. This prevented participants from assessing what it was exactly they were turning into a joke.

Even those with the best intentions can bring about outcomes which are misleading or even destructive. Just as participants might not mean to perpetuate racist ideology by sharing a meme, they might not mean to ignore critical contextualising details. But when people remix and play with stories and images online, critical contextualising details are often the first things to go. What’s left instead are the amusing or interesting pieces of the puzzle. It becomes easy to forget about the bigger picture and especially easy to forget about the people or groups who might be impacted as a result.

This is not to discourage participation in meme culture or to suggest online play is necessarily harmful. But it does serve as a reminder that all online actions have consequences – good, bad or somewhere in between – that transcend purely digital spaces. After all, behind every screen sits a person with feelings, family, interests and worries, whose online and offline experiences are fundamentally intertwined.

All memes have a context, even in cases where that context has been obscured. These cases in particular warrant careful consideration, since what might appear to be a harmless joke from one angle may in fact have devastating consequences for the target, whether that target is a single individual or a broader social group.

The internet is not an ethics-free zone. Responsible online participation requires thinking about the experiences and feelings of others and watching where, when, and how you step. And most importantly, before you amplify a message, always remember: There But For The Grace Of The Internet Go You.


Twitter made me do it!

In a recent panel discussion, academic and former journalist Emma Jane described what happened when she first included her email address at the end of her newspaper column in the late nineties.

Previously, she’d received ‘hate mail’ in the form of relatively polite and well-written letters but once her email address was public, there was a dramatic escalation in its frequency and severity. Jane coined the term ‘Rapeglish’ to describe the visceral rhetoric of threats, misogyny and sexual violence that characterises much of the online communication directed at women and girls.

Online misogyny and abuse have emerged as a major threat to the free and equal participation of women in public debate – not just online, but in the media generally. Amanda Collinge, producer of the influential ABC panel show Q&A, revealed earlier this year that high profile women have declined to appear on the program due to “the well-founded fear that the online abuse and harassment they already suffer will increase”.

Most explanations for online misogyny and prejudice tend to be cultural. We are told that the internet gives expression to or amplifies existing prejudice – showing us the way we always were. But this doesn’t explain why some online platforms have a greater problem with online abuse than others. If the internet were simply a mirror for the woes of society, we could expect to see similar levels of abuse across all online platforms.

The ‘honeypot for assholes’

This isn’t the case.  Though it isn’t perfect, Facebook has a relatively low rate of online abuse compared to Twitter, which was recently described as a “honeypot for assholes”. One study found 88 percent of all discriminatory or hateful social media content originates on Twitter.

Twitter’s abuse problem illustrates how culture and technology are inextricably linked. In 2012, Tony Wang, then UK general manager of Twitter, described the organisation as “the free speech wing of the free speech party”. This reflects a libertarian commitment to uncensored information and rampant individualism, which has been a long-standing feature of computing and engineering culture – as revealed in the design and administration of Twitter.

Twitter’s mechanics mean users have no control over who replies to their tweets and cannot remove abusive or defamatory responses, which makes it an inherently combative medium. Users complaining of abuse have found that Twitter’s safety team does not view explicit threats of rape, death or blackmail as a violation of their terms of service.

Twitter’s design and administration all reinforce the ‘if you can’t take the heat, get out of the kitchen’ machismo of Silicon Valley culture. Social media platforms were designed within a male dominated industry and replicate the assumptions and attitudes typical of men in the industry. Twitter provides users with few options to protect themselves from abuse and there are no effective bystander mechanisms to enable users to protect each other.

Over the years, the now banned Milo Yiannopoulos and now imprisoned ‘revenge porn king’ Hunter Moore have accumulated hundreds of thousands of admiring Twitter followers by orchestrating abuse and hate campaigns. The number of followers, likes and retweets can act like a scoreboard in the ‘game’ of abuse.

Suggesting Twitter should be a land of free speech where users should battle one another within a ‘marketplace of ideas’ might make sense to the white, male, heterosexual tech bro, but it ignores the way sexism, racism and other forms of prejudice force diverse users to withdraw from the public sphere.

Dealing with online abuse

Over the last few years, Twitter has acknowledged its problem with harassment and sought to implement a range of strategies. As then Twitter CEO Dick Costolo stated to employees in a leaked internal memo, ‘We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years’. However, steps have been incremental at best and are yet to make any noticeable difference to users.

Researchers and academics are calling for the enforcement of existing laws and the enactment of new laws in order to deter online abuse and sanction offenders. ‘Respectful relationships’ education programs are incorporating messages on online abuse in the hope of reducing and preventing it.

These necessary steps to combat sexism, racism and other forms of prejudice in offline society might struggle to reduce online abuse, though. The internet is host to specific cultures and subcultures in which harassment is normal or even encouraged.

Libertarian machismo was entrenched online by the 1990s, when the internet was dominated by young, white, tech-savvy men – some of whom disseminated an often deliberately vulgar and sexist communicative style that discouraged female participation. While social media has brought an influx of women and other users online, it has not displaced these older, male-dominated subcultures.

The fact that harassment is so easy on social media is no coincidence. The various dot-com start-ups that produced social media have emerged out of computing cultures that have normalised online abuse for a long time. Indeed, it seems incitements to abuse have been technologically encoded into some platforms.

Designing a more equitable internet

So how do we challenge the most toxic aspects of internet culture when its norms and values are built into online platforms themselves? How can a fairer and more prosocial ethos be built into online infrastructure?

Earlier this year, software developer and commentator Randi Lee Harper drew up an influential list of design suggestions to ‘put out the Twitter trashfire’ and reduce the prevalence of abuse on the platform. Her list emphasises the need to give users greater control over their content and Twitter experience.

One solution might appear in the form of social media app Yik Yak – basically a local, anonymous version of Twitter but with a number of important built-in safety features. When users post content to Yik Yak, other users can ‘upvote’ or ‘downvote’ the content depending on how they feel about it. Comments that receive more than five ‘down’ votes are automatically deleted, enabling a swift bystander response to abusive content. Yik Yak also employs automatic filters and algorithms as a barrier against the posting of full names and other potentially inappropriate content.
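
A sketch of that bystander mechanism, as described above, might look like the following. The class and field names are illustrative assumptions rather than Yik Yak’s actual code; the point is simply that the deletion rule lives in the platform itself rather than in a moderation queue.

```python
# Minimal sketch of the downvote rule described above: content that collects more than
# five downvotes is removed automatically. Names and structure are assumptions, not
# Yik Yak's actual implementation.
from dataclasses import dataclass

DOWNVOTE_LIMIT = 5

@dataclass
class Post:
    text: str
    upvotes: int = 0
    downvotes: int = 0
    deleted: bool = False

    def downvote(self) -> None:
        self.downvotes += 1
        if self.downvotes > DOWNVOTE_LIMIT:  # more than five downvotes -> auto-delete
            self.deleted = True

post = Post("an abusive comment")
for _ in range(6):        # six bystanders downvote it
    post.downvote()
print(post.deleted)       # True - removed by design, not by a moderator
```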

Yik Yak’s platform design is underpinned by a social understanding of online communication. It recognises the potential for harm and attempts to foster healthy bystander practices and cultures. This is a far cry from the unfettered pursuit of individual free speech at all costs, which has allowed abuse and harassment to go unaddressed in the past.

Achieving this will take more than a behavioural shift from users. Changing the norms and values common online will require a cultural shift in computing industries and companies, so that the development of technology is underpinned by a more diverse and inclusive understanding of communication and users.