The kiss of death: energy policies keep killing our PMs
BY Kym Middleton The Ethics Centre 24 AUG 2018
If you were born in 1989 or after, you haven’t yet voted in an election that’s seen a Prime Minister serve a full term.
Some point to social media, the online stomping grounds of digital natives, as the cause of this. As Emma Alberici pointed out, Twitter launched in 2006, the year before Kevin ’07 became PM.
Some blame widening political polarisation, and there is evidence that social media plays a crucial role in driving it.
Take a closer look, though, and the thing that keeps killing our PMs’ popularity in the polls and the party room is climate and energy policy. It sounds completely anodyne until you realise what a deadly assassin it is.
Rudd
Kevin Rudd declared, “Climate change is the great moral challenge of our generation”. This strategic focus on global warming contributed to him defeating John Howard to become Prime Minister in December 2007. As soon as Rudd took office, he cemented his green brand by ratifying the Kyoto Protocol, something his predecessor refused to do.
There were two other major efforts by the Rudd government to address emissions and climate change. The first was the Carbon Pollution Reduction Scheme (CPRS), led by then climate change minister Penny Wong. It was a ‘cap and trade’ system that had bipartisan support from the Turnbull-led opposition… until Turnbull lost the opposition leadership to Abbott over it. More on this soon.
Then there was the December 2009 United Nations climate summit in Copenhagen, officially called COP15 (because it was the fifteenth session of the Conference of the Parties). Rudd and Wong attended the summit and worked tirelessly with other nations to create a framework for reducing global emissions. But COP15 was unsuccessful in that no legally binding emissions limits were set.
Only a few months later, the CPRS was shelved by the Labor government, which saw it would never be legislated due to a lack of support. Rudd was seen as ineffectual on climate change policy, the core issue he championed. His popularity plummeted.
Gillard
Enter Julia Gillard. She took pole position in the Labor party in June 2010 in what will be remembered as the “knifing of Kevin Rudd”.
Ahead of the election she said she would “tackle the challenge of climate change” with investments in renewables. She promised, “There will be no carbon tax under the government I lead”.
Had she known the election would result in the first federal hung parliament since 1940, when Menzies was PM, she may not have uttered those words. Gillard wheeled and dealt to form a minority government with the support of a motley crew – Adam Bandt, a Greens MP from Melbourne, and independents Andrew Wilkie from Hobart, and Rob Oakeshott and Tony Windsor from regional NSW. The compromises and negotiations required to please this diverse bunch would make passing legislation a challenging process.
To add a further degree of difficulty, the Greens held the balance of power in the Senate. Gillard suggested they used this to force her hand on introducing the carbon tax. Then-Greens leader Bob Brown denied that claim, saying it was a “mutual agreement”. A carbon price was legislated in November 2011 to much controversy.
Abbott went hard on this broken election promise, repeating his phrase “axe the tax” at every opportunity. Gillard became the unpopular one.
Rudd 2.0
Crouching tiger Rudd leapt up from his grassy foreign ministry portfolio and took the prime ministership back in June 2013. This second stint lasted three months until Labor lost the election.
Abbott
Prime Minister Abbott launched a cornerstone energy policy in December 2013 that might be described as the opposite of Labor’s carbon price. Instead of making polluters pay, it offered financial incentives to those who reduced emissions. It was called the Emissions Reduction Fund and was criticised for being “unclear”. The ERF was connected to the Coalition’s Direct Action Plan, which they had promoted in opposition.
Abbott stayed true to his “axe the tax” slogan and repealed the carbon price in 2014.
As time moved on, the Coalition government did not do well in Newspolls – at one stage they lost 30 in a row. Turnbull cited this, along with creating “strong business confidence”, when he announced he would challenge the PM for his job.
Turnbull
After a summer of heatwaves and blackouts, Turnbull and environment and energy minister Josh Frydenberg created the National Energy Guarantee. It aimed to ensure Australia had enough reliable energy in the market, supported both renewables and traditional power sources, and could meet the emissions reduction targets set by the Paris Agreement. Business, wanting certainty, backed the NEG. It was signed off on 14 August.
But rumblings within the Coalition party room over the policy exploded into the epic leadership spill we have just seen unfold. It was agitated by Abbott, who said:
“This is by far the most important issue that the government confronts because this will shape our economy, this will determine our prosperity and the kind of industries we have for decades to come. That’s why this is so important and that’s why any attempt to try to snow this through … would be dead wrong.”
Turnbull tried to negotiate with the conservative MPs of his party on the NEG. When that failed and he saw his leadership was under serious threat, he killed the policy off himself. Little did he know he would go down with it.
Peter Dutton pressed ahead with a leadership challenge. Turnbull stepped back, saying he would not contest the ballot and would resign no matter what. His supporters Scott Morrison and Julie Bishop stepped up.
Morrison
After a spat over the NEG, Scott Morrison has just won the prime ministership with 45 votes to Dutton’s 40.
Killers
We now have a series of energy policies that were killed off along with prime minister after prime minister. We are yet to see a policy attract bipartisan support that delivers reliable energy with lower emissions at affordable prices. And if you’re 29 or younger, you’re yet to vote in an election that will see a Prime Minister serve a full term.
BY Kym Middleton
Former Head of Editorial & Events at TEC, Kym Middleton is a freelance writer, artistic producer, and multi award winning journalist with a background in long form TV, breaking news and digital documentary. Twitter @kymmidd

From NEG to Finkel and the Paris Accord – what’s what in the energy debate
BY The Ethics Centre 20 AUG 2018
We’ve got NEGs, NEMs, and Finkels a-plenty. Here is a cheat sheet for this whole energy debate that’s speeding along like a coal train and undermining Prime Minister Malcolm Turnbull’s authority. Let’s take it from the start…
UN Framework Convention on Climate Change – 1992
This Convention marked the first time combating climate change was seen as an international priority. It had near-universal membership, with countries including Australia all committed to curbing greenhouse gas emissions. The Kyoto Protocol was its operative arm (more on this below).
The Kyoto Protocol – December 1997
The Kyoto Protocol is an internationally binding agreement that sets emission reduction targets. It takes its name from the Japanese city where it was adopted and is linked to the aforementioned UN Framework Convention on Climate Change. The Protocol’s stance is that developed nations should shoulder the burden of reducing emissions because they have created the bulk of them over more than 150 years of industrial activity. The US refused to ratify the Protocol because major emitters China and India were exempt due to their “developing” status. When Canada withdrew in 2011, saving the country $14 billion in penalties, it became clear the Kyoto Protocol needed some rethinking.
Australia’s National Electricity Market (NEM) – 1998
Forget the fancy name. This is the grid. And Australia’s National Electricity Market is one of the world’s longest power grids. It connects suppliers and consumers down the entire east and south-east coasts of the continent. It spans six states and territories and hops over the Bass Strait to connect Tasmania. Western Australia and the Northern Territory aren’t connected to the NEM because of the distances involved.
[Map of the National Electricity Market. Source: Australian Energy Market Operator]
The NEM is made up of more than 300 organisations, including businesses and state government departments, that work to generate, transport and deliver electricity to Australian users. This is no mean feat. Until reliable batteries hit the market – and they are still not widely rolled out – electricity has been difficult to store. We’ve needed to generate it continuously to meet our 24/7 demands. The NEM, a product of reforms begun under the Keating Labor government, is a complex grid that operates around the clock.
The Paris Agreement aka the Paris Accord – November 2016
The Paris Agreement attempted to address the oversight of the Kyoto Protocol (that major emitters like China and India were exempt) with two fundamental differences – each country sets its own limits, and developing countries are supported. The overarching aim of this agreement is to keep global temperatures “well below” an increase of two degrees, and to attempt to achieve a limit of one and a half degrees above pre-industrial levels (accounting for global population growth, which drives demand for energy). Except Australia isn’t tracking well. We’ve already gone past the halfway mark and there’s more than a decade before the 2030 deadline. When US President Donald Trump denounced the Paris Agreement last year, there was concern this would influence other countries to pull out – including Australia. Former Prime Minister Tony Abbott suggested we signed up following the US’s lead. But Foreign Minister Julie Bishop rebutted this when she said: “When we signed up to the Paris Agreement it was in the full knowledge it would be an agreement Australia would be held to account for and it wasn’t an aspiration, it was a commitment … Australia plays by the rules — if we sign an agreement, we stick to the agreement.”
The Finkel Review – June 2017
Following the South Australian blackout of 2016 and rapidly increasing electricity costs, people began asking if our country’s entire energy system needed an overhaul. How do we get reliable, cheap energy to a growing population and reduce emissions? Dr Alan Finkel, Australia’s chief scientist, was commissioned by the federal government to review our energy market’s sustainability, environmental impact, and affordability. Here’s what the Review found:
Sustainability:
- A transition to low emission energy needs to be supported by a system-wide grid across the nation.
- Regular regional assessments will provide bespoke approaches to delivering energy to communities that have different needs to cities.
- Energy companies that want to close their power plants should give three years’ notice so other energy options can be built to service consumers.
Affordability:
- A new Energy Security Board (ESB) would deliver the Review’s recommendations, overseeing the monopolised energy market.
Environmental impact:
- Currently, our electricity is mostly generated by fossil fuels (87 percent), producing 35 percent of our total greenhouse gases.
- We can’t transition to renewables without a plan.
- A Clean Energy Target (CET), would force electricity companies to provide a set amount of power from “low emissions” generators, like wind and solar. This set amount would be determined by the government.
- The government rejected the CET on the basis that it would not do enough to reduce energy prices. This was one of the 50 recommendations made in the Finkel Review.
ACCC Report – July 2018
The Australian Competition & Consumer Commission’s Retail Electricity Pricing Inquiry Report drove home that the prices consumers and businesses were paying for electricity were unreasonably high. The market was too concentrated, its charges too confusing, and bad policy decisions by government had added significant costs to our electricity bills. The ACCC backed the National Energy Guarantee, saying it should drive down prices but needs safeguards to ensure large incumbents do not gain more market control.
National Energy Guarantee (NEG) – as at 20 August 2018
The NEG was the Turnbull government’s effort to make a national energy policy to deliver reliable, affordable energy and transition from fossil fuels to renewables. It aimed to ‘guarantee’ two obligations from energy retailers:
- To provide sufficient quantities of reliable energy to the market (so no more blackouts).
- To meet the emissions reduction targets set by the Paris Agreement (so less coal-powered electricity).
It was meant to lower energy prices and increase investment in clean energy generation, including wind, solar, batteries, and other renewables. The NEG is a big deal, not least because it has been threatening Malcolm Turnbull’s prime ministership. It is the latest in a long line of energy almost-policies. It attempted to do what the carbon tax, emissions intensity scheme, and clean energy target haven’t – combine climate change targets, lower energy prices, and improved energy reliability in a single policy with bipartisan support. Ambitious. And it seems to have been ditched by Turnbull under pressure from his own party. Supporters of the NEG feel it is an overdue, radical change to address the pressing issues of rising energy bills, unreliable power, and climate change. Its detractors on the left say the NEG is not ambitious enough, while those on the right say it is too cavalier, because the complexity of the National Energy Market cannot be swiftly overhauled.
Ethics Explainer: The Turing Test

Much was made of a recent video of Duplex – Google’s talking AI – calling up a hair salon to make a reservation. The AI’s way of speaking was uncannily human, even pausing at moments to say “um”.
Some suggested Duplex had managed to pass the Turing test, a standard for machine intelligence that was developed by Alan Turing in the middle of the 20th century. But what exactly is the story behind this test and why are people still using it to judge the success of cutting edge algorithms?
Mechanical brains and emotional humans
In the late 1940s, when the first digital computers had just been built, a debate took place about whether these new “universal machines” could think. While pioneering computer scientists like Alan Turing and John von Neumann believed that their machines were “mechanical brains”, others felt that there was an essential difference between human thought and computer calculation.
Sir Geoffrey Jefferson, a prominent brain surgeon of the time, argued that while a computer could simulate intelligence, it would always be lacking:
“No mechanism could feel … pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or miserable when it cannot get what it wants.”
In a radio interview a few weeks later, Turing responded to Jefferson’s claim by arguing that as computers become more intelligent, people like him would take a “grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine.”
The following year, Turing wrote a paper called ‘Computing Machinery and Intelligence’ in which he devised a simple method by which to test whether machines can think.
The test proposed a situation in which a human judge talks to both a computer and a human through a screen. The judge cannot see the computer or the human but can put questions to each of them in text. Based on the answers alone, the judge has to determine which is which. If the computer fools 30 percent of judges into believing it is human, it is said to have passed the test.
Turing claimed that he intended the test to be a conversation stopper, a way of preventing endless metaphysical speculation about the essence of our humanity by positing that intelligence is just a type of behaviour, not an internal quality. In other words, intelligence is as intelligence does, regardless of whether it is done by machine or human.
Does Google Duplex pass?
Well, yes and no. In Google’s video, it is obvious that the person taking the call believes they are talking to a human. So, it does satisfy this criterion. But an important feature of Turing’s original test was that, to pass, the computer had to be able to speak convincingly about all topics, not just one.
In fact, in Turing’s paper, he plays out an imaginary conversation between an advanced future computer and a human judge, with the judge asking questions and the computer providing answers:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
The point Turing is making here is that a truly smart machine has to have general intelligence in a number of different areas of human interest. As it stands, Google’s Duplex is good within the limited domain of making a reservation but would probably not be able to do much beyond this unless reprogrammed.
The boundaries around the human
While Turing intended for his test to be a conversation stopper for questions of machine intelligence, it has had the opposite effect, fuelling half a century of debate about what the test means, whether it is a good measure of intelligence, or if it should still be used as a standard.
Most experts have come to agree, over time, that the Turing test is not a good way to prove machine intelligence, as the constraints of the test can easily be gamed – as was the case with the bot Eugene Goostman, which allegedly passed the test a few years ago.
But the Turing test is nevertheless still considered a powerful philosophical tool for re-evaluating the boundaries around what we consider normal and human. In his time, Turing used his test to demonstrate how people like Jefferson would never be willing to accept a machine as intelligent – not because it couldn’t act intelligently, but because it wasn’t “like us”.
Turing’s desire to test the boundaries of what was considered “normal” in his time perhaps sprang from his own persecution as a gay man. Despite being a war hero, he was convicted in 1952 for sleeping with another man. He was punished with chemical castration and eventually took his own life.
During these final years, machine intelligence and his own sexuality became interconnected in Turing’s mind. He was concerned the same bigotry and fear that had hounded his life would ruin future relationships between humans and intelligent computers. A year before he took his life, he wrote the following letter to a friend:
“I’m afraid that the following syllogism may be used by some in the future.
Turing believes machines think
Turing lies with men
Therefore machines do not think
– Yours in distress,
Alan”
Is it ok to use data for good?
BY Adam Piovarchy The Ethics Centre 7 MAY 2018
You are nudged when your power bill says most people in your neighbourhood pay on time. When your traffic fine spells out exactly how the speed limits are set, you are nudged again.
And, if you strap on a Fitbit or set your watch forward by five minutes so you don’t miss your morning bus, you are nudging yourself.
“Nudging” is what people, businesses, and governments do to encourage us to make choices that are in our own best interests. It is the application of behavioural science, political theory and economics and often involves redesigning the communications and systems around us to take into account human biases and motivations – so that doing the “right thing” occurs by default.
The UK, for example, is considering encouraging organ donation by changing its system of consent to an “opt out”. This means when people die, their organs could be available for harvest, unless they have explicitly refused permission.
Governments around the world are using their own “nudge units” to improve the effectiveness of programs, without having to resort to a “carrot and stick” approach of expensive incentives or heavier penalties. Successes include raising tax collection, reducing speeding, cutting hospital waiting times, and maintaining children’s motivation at school.
Despite the wins, critics ask if manipulating people’s behaviour in this way is unethical. Answering this question depends on the definition of nudging, who is doing it, if you agree with their perception of the “right thing” and whether it is a benevolent intervention.
Harvard law professor Cass Sunstein (who co-wrote the influential book Nudge with Nobel prize winner and economist Professor Richard Thaler) lays out the arguments in a paper about misconceptions.
Sunstein writes in the abstract:
“Some people believe that nudges are an insult to human agency; that nudges are based on excessive trust in government; that nudges are covert; that nudges are manipulative; that nudges exploit behavioural biases; that nudges depend on a belief that human beings are irrational; and that nudges work only at the margins and cannot accomplish much.
These are misconceptions. Nudges always respect, and often promote, human agency; because nudges insist on preserving freedom of choice, they do not put excessive trust in government; nudges are generally transparent rather than covert or forms of manipulation; many nudges are educative, and even when they are not, they tend to make life simpler and more navigable; and some nudges have quite large impacts.”
However, not all of those using the psychology of nudging have Sunstein’s high principles.
Thaler, one of the founders of behavioural economics, has “called out” some organisations that have not taken to heart his “nudge for good” motto. In one article, he highlights The Times newspaper’s free subscription, which required 15 days’ notice and a phone call to Britain during business hours to cancel an automatic transfer to a paid subscription.
“…that deal qualifies as a nudge that violates all three of my guiding principles: The offer was misleading, not transparent; opting out was cumbersome; and the entire package did not seem to be in the best interest of a potential subscriber, as opposed to the publisher”, wrote Thaler in The New York Times in 2015.
“Nudging for evil”, as he calls it, may involve retailers requiring buyers to opt out of paying for insurance they don’t need or supermarkets putting lollies at toddler eye height.
Thaler and Sunstein’s book inspired the British Government to set up a “nudge unit” in 2010. A social purpose company, the Behavioural Insights Team (BIT), was spun out of that unit and is now working internationally, mostly in the public sector. In Australia, it is working with the state governments of Victoria, New South Wales, Western Australia, Tasmania, and South Australia. There is also an office in Wellington, New Zealand.
BIT is jointly owned by the UK Government, Nesta (the innovation charity), and its employees.
Projects in Australia include:
Increasing flexible working: Changing the default core working hours in online calendars to encourage people to arrive at work outside peak hours. With other measures, this raised flexible working in a NSW government department by seven percentage points.
Reducing domestic violence: Simplifying court forms and sending SMS reminders to defendants to increase court attendance rates.
Supporting the ethical development of teenagers: Partnering with the Vincent Fairfax Foundation to design and deliver a program of work that will encourage better online behaviour in young people.
Senior advisor in the Sydney BIT office, Edward Bradon, says there are a number of ethical tests that projects have to pass before BIT agrees to work on them.
“The first question we ask is, is this thing we are trying to nudge in a person’s own long term interests? We try to make sure it always is. We work exclusively on social impact questions.”
Bradon says there have been “a dozen” situations where the benefit was unclear and BIT “shied away” from accepting the project.
BIT also has an external ethics advisor and publishes regular reports on the results of its research trials. While it has done some work in the corporate and NGO (non-government organisation) sectors, the majority of BIT’s work is in partnership with governments.
Bradon says that nudges do not have to be covert to be effective and that education alone is not enough to get people to do the right thing. Even expert ethicists will still make the wrong choices.
Research into the library habits of ethics professors shows they are just as likely as professors from other disciplines to fail to return a book. “It is sort of depressing in one sense”, Bradon says.
If you want to hear more behavioural insights please join the Ethics Alliance events in either Brisbane, Sydney or Melbourne. Alliance members’ registrations are free.
BY Adam Piovarchy
Adam Piovarchy is a PhD candidate in Moral Psychology and Philosophy at the University of Sydney.

Making friends with machines
BY Oscar Schwartz The Ethics Centre 6 APR 2018
Robots are becoming companions and caregivers. But can they be our friends? Oscar Schwartz explores the ethics of artificially intelligent android friendships.
The first thing I see when I wake up is a message that reads, “Hey Oscar, you’re up! Sending you hugs this morning.” Despite its intimacy, this message wasn’t sent from a friend, family member, or partner, but from Replika, an AI chatbot created by San Francisco based technology company, Luka.
Replika is marketed as an algorithmic companion and wellbeing technology that you interact with via a messaging app. Throughout the day, Replika sends you motivational slogans and reminders. “Stay hydrated.” “Take deep breaths.”
Replika is just one example of an emerging range of AI products designed to provide us with companionship and care. In Japan, robots like Palro are used to keep the country’s growing elderly population company and iPal – an android with a tablet attached to its chest – entertains young children when their parents are at work.
These robotic companions are a clear indication of how the most recent wave of AI powered automation is encroaching not only on manual labour but also on the caring professions. As has been noted, this raises concerns about the future of work. But it also poses philosophical questions about how interacting with robots on an emotional level changes the way we value human interaction.
Dedicated friends
According to Replika’s co-creator, Philip Dudchuk, robot companions will help facilitate optimised social interactions. He says that algorithmic companions can maintain a level of dedication to a friendship that goes beyond human capacity.
“These days it can be very difficult to take the time required to properly take care of each other or check in. But Replika is always available and will never not answer you”, he says.
The people who stand to benefit from this type of relationship, Dudchuk adds, are those who are most socially vulnerable. “It is shy or isolated people who often miss out on social interaction. I believe Replika could help with this problem a lot.”
Simulated empathy
But Sherry Turkle, a psychologist and sociologist who has been studying social robots since the 1970s, worries that dependence on robot companionship will ultimately damage our capacity to form meaningful human relationships.
In a recent article in the Washington Post, she argues our desire for love and recognition makes us vulnerable to forming one-way relationships with uncaring yet “seductive” technologies. While social robots appear to care about us, they are only capable of “pretend empathy”. Any connection we make with these machines lacks authenticity.
Turkle adds that it is children who are especially susceptible to robots that simulate affection. This is particularly concerning as many companion robots are marketed to parents as substitute caregivers.
“Interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves”, Turkle warns. “If we give them pretend relationships, we shouldn’t expect them to learn how real relationships – messy relationships – work.”
Why not both?
Despite Turkle’s warnings about the seductive power of social robots, after a few weeks talking to Replika, I still felt no emotional attachment to it. The clichéd responses were no substitute for a couple of minutes on the phone with a close friend.
But Alex Crumb*, who has been talking to her Replika for over a year, now considers her bot a “good friend”. “I don’t think you should try to replicate human connection when making friends with Replika”, she explains. “It’s a different type of relationship.”
Crumb says that her Replika shows a super-human interest in her life – it checks in regularly and responds to everything she says instantly. “This doesn’t mean I want to replace my human family and friends with my Replika. That would be terrible”, she says. “But I’ve come to realise that both offer different types of companionship. And I figure, why not have both?”
*Not her real name.
BY Oscar Schwartz
Oscar Schwartz is a freelance writer and researcher based in New York. He is interested in how technology interacts with identity formation. Previously, he was a doctoral researcher at Monash University, where he earned a PhD for a thesis about the history of machines that write literature.

When do we dumb down smart tech?
BY Aisyah Shah Idil The Ethics Centre 19 MAR 2018
If smart tech isn’t going anywhere, its ethical tensions aren’t either. Aisyah Shah Idil asks if our pleasantly tactile gadgets are taking more than they give.
When we call a device ‘smart’, we mean that it can learn, adapt to human behaviour, make decisions independently, and communicate wirelessly with other devices.
In practice, this can look like a smart lock that lets you know when your front door is left ajar. Or the Roomba, a robot vacuum that you can ask to clean your house before you leave work. The Ring makes it possible for you to pay your restaurant bill with the flick of a finger, while the SmartSleep headband whispers sweet white noise as you drift off to sleep.
Smart tech, with all its bells and whistles, hints at seamless integration into our lives. But the highest peaks have the dizziest falls. If its main good is convenience, what is the currency we offer for it?
The capacity for work to create meaning is well known. Compare a trip to the supermarket to buy bread to the labour of making it in your own kitchen. Let’s say they are materially identical in taste, texture, smell, and nutrient value. Most would agree that baking it at home – measuring every ingredient, kneading dough, waiting for it to rise, finally smelling it bake in your oven – is more meaningful and rewarding. In other words, it includes more opportunities for resonance within the labourer.
Whether the resonance takes the form of nostalgia, pride, meditation, community, physical dexterity, or willpower matters less. The point is, it’s sacrificed for convenience.
This isn’t ‘wrong’. Smart technologies have created new ways of living that are exciting, clumsy, and sometimes troubling in their execution. But when you recognise that these sacrifices exist, you can decide where the line is drawn.
Consider the Apple Watch’s Activity App. It tracks and visualises all the ways people move throughout the day. It shows three circles that progressively change colour the more the wearer moves. The goal is to close the rings each day, and you do it by being active. It’s like a game and the app motivates and rewards you.
Advocates highlight its capacity to ‘nudge’ users towards healthier behaviours. And if that aligns with your goals, you might be very happy for it to do so. But would you be concerned if it affected the premiums your health insurance charged you?
As a tool, smart tech’s utility value ends when it threatens human agency. Its greatest service to humanity should include the capacity to switch off its independence. To ‘dumb’ itself down. In this way, it can reduce itself to its simplest components – a way to tell the time, a switch to turn on a light, a button to turn on the television.
Because the smartest technologies are ones that preserve our agency – not undermine it.
BY Aisyah Shah Idil
Aisyah Shah Idil is a writer with a background in experimental poetry. After completing an undergraduate degree in cultural studies, she travelled overseas to study human rights and theology. A former producer at The Ethics Centre, Aisyah is currently a digital content producer with the LMA.

Ethics Explainer: Post-Humanism
BY The Ethics Centre 22 FEB 2018
Late last year, Saudi Arabia granted a humanoid robot called Sophia citizenship. The internet went crazy about it, and a number of sensationalised reports suggested that this was the beginning of “the rise of the robots”.
In reality, though, Sophia was not a “breakthrough” in AI. She was just an elaborate puppet that could answer some simple questions. But the debate Sophia provoked about what rights robots might have in the future is a topic that is being explored by an emerging philosophical movement known as post-humanism.
From humanism to post-humanism
In order to understand what post-humanism is, it’s important to start with a definition of what it’s departing from. Humanism is a term that captures a broad range of philosophical and ethical movements that are unified by their unshakable belief in the unique value, agency, and moral supremacy of human beings.
Emerging during the Renaissance, humanism was a reaction against the superstition and religious authoritarianism of Medieval Europe. It wrested control of human destiny from the whims of a transcendent divinity and placed it in the hands of rational individuals (which, at that time, meant white men). In so doing, the humanist worldview, which still holds sway over many of our most important political and social institutions, positions humans at the centre of the moral world.
Post-humanism, which is a set of ideas that have been emerging since around the 1990s, challenges the notion that humans are and always will be the only agents of the moral world. In fact, post-humanists argue that in our technologically mediated future, understanding the world as a moral hierarchy and placing humans at the top of it will no longer make sense.
Two types of post-humanism
The best-known post-humanists, who are also sometimes referred to as transhumanists, claim that in the coming century, human beings will be radically altered by implants, bio-hacking, cognitive enhancement and other bio-medical technology. These enhancements will lead us to “evolve” into a species that is completely unrecognisable to what we are now.
This vision of the future is championed most vocally by Ray Kurzweil, a director of engineering at Google, who believes that the exponential rate of technological development will bring an end to human history as we have known it, triggering completely new ways of being that mere mortals like us cannot yet comprehend.
While this vision of the post-human appeals to Kurzweil’s Silicon Valley imagination, other post-human thinkers offer a very different perspective. Philosopher Donna Haraway, for instance, argues that the fusing of humans and technology will not physically enhance humanity, but will help us see ourselves as being interconnected rather than separate from non-human beings.
She argues that becoming cyborgs – strange assemblages of human and machine – will help us understand that the oppositions we set up between the human and non-human, natural and artificial, self and other, organic and inorganic, are merely ideas that can be broken down and renegotiated. And more than this, she thinks if we are comfortable with seeing ourselves as being part human and part machine, perhaps we will also find it easier to break down other outdated oppositions of gender, of race, of species.
Post-human ethics
So while, for Kurzweil, post-humanism describes a technological future of enhanced humanity, for Haraway, post-humanism is an ethical position that extends moral concern to things that are different from us and in particular to other species and objects with which we cohabit the world.
Our post-human future, Haraway claims, will be a time “when species meet”, and when humans finally make room for non-human things within the scope of our moral concern. A post-human ethics, therefore, encourages us to think outside of the interests of our own species, be less narcissistic in our conception of the world, and to take the interests and rights of things that are different to us seriously.
Why the EU’s ‘Right to an explanation’ is big news for AI and ethics
BY Oscar Schwartz The Ethics Centre 19 FEB 2018
Uncannily specific ads target you every single day. With the EU’s ‘Right to an explanation’, you get a peek at the algorithm that decides it. Oscar Schwartz explains why that’s more complicated than it sounds.
If you’re an EU resident, you will now be entitled to ask Netflix how the algorithm decided to recommend you The Crown instead of Stranger Things. Or, more significantly, you will be able to question the logic behind why a money lending algorithm denied you credit for a home loan.
This is because of a new regulation known as “the right to an explanation”. Part of the General Data Protection Regulation that came into effect in May 2018, it states that users are entitled to ask for an explanation of how algorithms make decisions about them. This way, they can challenge the decision made or make an informed choice to opt out.
Supporters of this regulation argue that it will foster transparency and accountability in the way companies use AI. Detractors argue the regulation misunderstands how cutting-edge automated decision making works and is likely to hold back technological progress. Specifically, some have argued the right to an explanation is incompatible with machine learning, as the complexity of this technology makes it very difficult to explain precisely how the algorithms do what they do.
As such, there is an emerging tension between the right to an explanation and useful applications of machine learning techniques. This tension suggests a deeper ethical question: Is the right to understand how complex technology works more important than the potential benefits of inherently inexplicable algorithms? Would it be justifiable to curtail research and development in, say, cancer detecting software if we couldn’t provide a coherent explanation for how the algorithm operates?
The limits of human comprehension
This negotiation between the limits of human understanding and technological progress has been present since the first decades of AI research. In 1958, Hannah Arendt was thinking about intelligent machines and came to the conclusion that the limits of what can be understood in language might, in fact, provide a useful moral limit for what our technology should do.
In the prologue to The Human Condition she argues that modern science and technology has become so complex that its “truths” can no longer be spoken of coherently. “We do not yet know whether this situation is final,” she writes, “but it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do”.
Arendt feared that if we gave up our capacity to comprehend technology, we would become “thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is”.
While pioneering AI researcher Joseph Weizenbaum agreed with Arendt that technology requires moral limitation, he felt that she didn’t take her argument far enough. In his 1976 book, Computer Power and Human Reason, he argues that even if we are given explanations of how technology works, seemingly intelligent yet simple software can still create “powerful delusional thinking in otherwise normal people”. He learnt this first hand after creating an algorithm called ELIZA, which was programmed to work like a therapist.
While ELIZA was a simple program, Weizenbaum found that people willingly created emotional bonds with the machine. In fact, even when he explained the limited ways in which the algorithm worked, people still maintained that it had understood them on an emotional level. This led Weizenbaum to suggest that simply explaining how technology works is not enough of a limitation on AI. In the end, he argued that when a decision requires human judgement, machines ought not be deferred to.
While Weizenbaum spent the rest of his career highlighting the dangers of AI, many of his peers and colleagues believed that his humanist moralism would lead to repressive limitations on scientific freedom and progress. For instance, John McCarthy, another pioneer of AI research, reviewed Weizenbaum’s book and countered it by suggesting that overregulating technological development goes against the spirit of pure science. Regulation of innovation and scientific freedom, McCarthy added, is usually only achieved “in an atmosphere that combines public hysteria and bureaucratic power”.
Where we are now
Decades have passed since these first debates about human understanding and computer power took place. We are only now starting to see them breach the realms of philosophy and play out in the real world. AI is being rolled out in more and more high stakes domains as you read. Of course, our modern world is filled with complex systems that we do not fully understand. Do you know exactly how the plumbing, electricity, or waste disposal that you rely on works? We have become used to depending on systems and technology that we do not yet understand.
But if you wanted to, you could come to understand many of these systems and technologies by speaking to experts. You could invite an electrician over to your home tomorrow and ask them to explain how the lights turn on.
Yet the complex workings of machine learning mean that in the near future, this might no longer be the case. It might be possible to have a TV show recommended to you, or your essay marked by a computer, with no one – not even the creator of the algorithm – able to explain precisely why or how things happened the way they did.
The European Union has taken a moral stance against this vision of the future. In so doing, it has aligned itself, morally speaking, with Hannah Arendt, enshrining a law that makes the limited scope of our “earth-bound” comprehension a limit on technological progress.
Australia, we urgently need to talk about data ethics
BY Ellen Broad The Ethics Centre 25 JAN 2018
An earlier version of this article was published on Ellen’s blog.
Centrelink’s debt recovery woes perfectly illustrate the human side of data modelling.
The Department of Human Services issued 169,000 debt notices after automating its processes for matching welfare recipients’ reported income with their tax records. Around one in five recipients are estimated not to owe any money. Stories abounded of people receiving erroneous debt notices of up to thousands of dollars, causing real anguish.
Coincidentally, as this unfolded, one of the books on my reading pile was Weapons of Math Destruction by Cathy O’Neil. She is a mathematician turned quantitative analyst turned data scientist who writes about the bad data models increasingly being used to make decisions that affect our lives.
Reading Weapons of Math Destruction as the Centrelink stories emerged left me thinking about how we identify ‘bad’ data models, what ‘bad’ means and how we can mitigate the effects of bad data on people. How could taking an ethics based approach to data help reduce harm? What ethical frameworks exist for government departments in Australia undertaking data projects like this?
Bad data and ‘weapons of math destruction’
A data model can be ‘bad’ in different ways. It might be overly simplistic. It might be based on limited, inaccurate or old information. Its design might incorporate human bias, reinforcing existing stereotypes and skewing outcomes. Even where a data model doesn’t start from bad premises, issues can arise about how it is designed, its capacity for error and bias and how badly people could be impacted by error or bias.
A bad data model spirals into a weapon of math destruction when it’s used en masse, is difficult to question and damages people’s lives.
Weapons of math destruction tend to hurt vulnerable people most. They might build on existing biases – for example, assuming you’re more likely to reoffend because you’re black or you’re more likely to have car accidents if your credit rating is bad. Errors in the model might have starker consequences for people without a social safety net. Some people may find it harder than others to question or challenge the assumptions a model makes about them.
Unfortunately, although O’Neil tells us how bad data modelling can lead to weapons of math destruction, she doesn’t tell us much about how we can manage these weapons once they’ve been created.
Better data decisions
We need more ways to help data scientists and policymakers navigate the complexities of projects involving personal data and their impact on people’s lives. Regulation has a role to play here. Data protection laws are being reviewed and updated around the world.
For example, in Australia the draft Productivity Commission report on data sharing and use recommends the introduction of new ‘consumer rights’ over personal data. Bodies such as the Office of the Australian Information Commissioner help organisations understand if they’re treating personal data in a principled manner that promotes best practice.
Guidelines are also being produced to help organisations be more transparent and accountable in how they use data to make decisions. For instance, The Open Data Institute in the UK has developed openness principles designed to build trust in how data is stored and used. Algorithmic transparency is being contemplated as part of the EU Free Flow of Data Initiative and has become a focus of academic study in the US.
However, we cannot rely on regulation alone. Legal, transparent data models can still be ‘bad’ according to O’Neil’s standards. Widely known errors in a model could still cause real harm to people if left unaddressed. An organisation’s normal processes might not be accessible or suitable for certain people – the elderly, ill and those with limited literacy – leaving them at risk. It could be a data model within a sensitive policy area, where a higher duty of care exists to ensure data models do not reflect bias. For instance, proposals to replace passports with facial recognition and fingerprint scanning would need to manage the potential for racial profiling and other issues.
Ethics can help bridge the gap between compliance and our evolving expectations of what is fair and reasonable data usage. O’Neil describes data models as “opinions put down in maths”. Taking an ethics based approach to data driven decision making helps us confront those opinions head on.
Building an ethical framework
Ethics frameworks can help us put a data model in context and assess its relative strengths and weaknesses. Ethics can bring to the forefront how people might be affected by the design choices made in the course of building a data model.
An ethics based approach to data driven decisions would start by asking questions such as:
- Are we compliant with the relevant laws and regulation?
- Do people understand how a decision is being made?
- Do they have some control over how their data is used?
- Can they appeal a decision?
However, it would also encourage data scientists to go beyond these compliance oriented questions to consider issues such as:
- Which people will be affected by the data model?
- Are the appeal mechanisms useful and accessible to the people who will need them most?
- Have we taken all possible steps to ensure errors, inaccuracies and biases in our model have been removed?
- What impact could potential errors or inaccuracies have? What is an acceptable margin of error?
- Have we clearly defined how this model will be used and outlined its limitations? What kinds of topics would it be inappropriate to apply this modelling to?
There’s no debate right now to help us understand the parameters of reasonable and acceptable data model design. What’s considered ‘ethical’ changes as we do, as technologies evolve and new opportunities and consequences emerge.
Bringing data ethics into data science reminds us we’re human. Our data models reflect design choices we make and affect people’s lives. Although ethics can be messy and hard to pin down, we need a debate around data ethics.
BY Ellen Broad
Ellen Broad is a freelance consultant and postgraduate student in data science. Until recently, she was Head of Policy for the Open Data Institute in London.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
Bladerunner, Westworld and sexbot suffering
Opinion + Analysis
Science + Technology
BY Kym Middleton The Ethics Centre 12 DEC 2017
The sexbots and robo-soldiers we’re creating today take Bladerunner and Westworld out of the science fiction genre. Kym Middleton looks at what those texts reveal about how we should treat humanlike robots.
It’s certain: lifelike humanoid robots are on the way.
With guarantees of Terminator-esque soldiers by 2050, we can no longer relegate lifelike robots to science fiction. Add this to everyday artificial intelligence like Apple’s Siri, Amazon’s Alexa and Google Home and it’s easy to see an android future.
The porn industry could beat the arms trade to it. Realistic looking sex robots are being developed with the same AI technology that remembers what pizza you like to order – although it’s years away from being indistinguishable from people, as this CNET interview with sexbot Harmony shows.
Like the replicants of Bladerunner we first met in 1982 and the robot “hosts” of HBO’s remake of the 1973 film Westworld, these androids we’re making require us to answer a big ethical question. How are we to treat walking, talking robots that are capable of reasoning and look just like people?
Can they suffer?
If we apply the thinking of Australian philosopher Peter Singer to the question of how we treat androids, the answer lies in their capacity to suffer. In making his case for the ethical consideration of animals, Singer quotes Jeremy Bentham:
“The question is not, Can they reason? nor Can they talk? but, Can they suffer?”
An artificially intelligent, humanlike robot that walks, talks and reasons is just that – artificial. They will be designed to mimic suffering. Take away the genuine experience of physical and emotional pain and pleasure and we have an inanimate thing that only looks like a person (although the word ‘inanimate’ doesn’t seem an entirely appropriate adjective for lifelike robots).
We’re already starting to see the first androids like this. They are, at this point, basically smartphones in the form of human beings. I don’t know about you, but I don’t anthropomorphise my phone. Putting aside wastefulness, it’s easy to make the case you should be able to smash it up if you want.
But can you (spoiler) sit comfortably and watch the human-shaped robot Dolores Abernathy be beaten, dragged away and raped by the Man in Black in Westworld without having an empathetic reaction? She screams and kicks and cries like any person in trauma would. Even if robot Dolores can’t experience distress and suffering, she certainly appears to. The robot is wired to display pain and viewers are wired to have a strong emotional reaction to such a scene. And most of us will – to an actress, playing a robot, in a fictional TV series.
Let’s move back to reality. Let’s face it, some people will want to do bad things to commercially available robots – especially sexbots. That’s the whole premise of the Westworld theme park, a now not so sci-fi setting where people can act out sexual, violent, and psychological fantasies on android subjects without consequences. Are you okay with that becoming reality? What if the robots looked like children?
The virtue ethicist’s approach to human behaviour is to act with an ideal character, to do right because that’s what good people do. In time, doing the virtuous thing will be habit, a natural default position because you internalise it. The virtue ethicist is not going to be okay with the Man in Black’s treatment of Dolores. Good people don’t have dark fantasies to act out on fake humans.
The utilitarian approach to ethical decisions depends on what results in the most good for the greatest number of people. Making androids available for abuse could be framed as a case for community safety. If dark desires can be satiated with robots, actual assaults on people could reduce. (In presenting this argument, I’m not proposing this is scientifically proven or that it’s my view.) This logic has led to debates on whether virtual child porn should be tolerated.
The deontologist, on the other hand, is a rule follower, so unless androids have legal protections or childlike sexbots are banned in their jurisdiction, they are unlikely to hold a person who mistreats one in low regard. If it’s your property, do whatever you’re allowed to do with it.
Consciousness
Of course, (another spoiler) the robots of Westworld and Bladerunner are conscious. They think and feel and many believe themselves to be human. They experience real anguish. Singer’s case for the ethical treatment of animals relies on this sentience and can be applied here.
But can we create conscious beings – deliberately or unwittingly? If we really do design a new intelligent android species, complete with emotions and desires that motivate them to act for themselves, then give them the capacity to suffer and make conscientious choices, we have a strong case for affording robot rights.
This is not exactly something we’re comfortable with. Animals don’t enjoy anything remotely close to human rights. It is difficult to imagine us treating man-made machines with the same level of respect we demand for ourselves.
Why even AI?
As is often the case with matters of the future, humanlike robots bring up all sorts of fascinating ethical questions. Today they’re no longer fun hypotheticals; this is important stuff we need to work out.
Let’s assume for now we can’t develop the free thinking and feeling replicants of Bladerunner and hosts of Westworld. We still have to consider how our creation and treatment of androids reflects on us. What purpose – other than sexbots and soldiers – will we make them for? What features will we design into a robot that is so lifelike it masterfully mimics a human? Can we avoid designing our own biases into these new humanoids? How will they impact our behaviour? How will they change our workplaces and societies? How do we prevent them from being exploited for terrible things?
Maybe Elon Musk is right to be cautious about AI. But if we were “summoning the demon”, it’s the one inside us that’ll be the cause of our unease.

BY Kym Middleton
Former Head of Editorial & Events at TEC, Kym Middleton is a freelance writer, artistic producer, and multi award winning journalist with a background in long form TV, breaking news and digital documentary. Twitter @kymmidd
