Is it time to curb immigration in Australia?

To curb or not to curb immigration is one of the more polarising questions Australia is contemporarily grappling with, amid anxieties over an increasing population and its impact on the infrastructure of cities.

Over the past decade, Australia’s population has grown by 2.5 million, including almost 400,000 people in the last year. The majority of last year’s increase – about 61 percent of net growth – came from immigration.

Different studies reveal vastly different attitudes.

While Australians have become progressively more concerned about a growing population, they still see the benefits of immigration, according to two different surveys.

Times are changing

In a survey recently conducted by the Australian National University, only 30 percent of Australians were in favour of population growth – compared to 45 percent in 2010.

The 15 percentage point drop over the past decade is attributed to concerns about congested, overcrowded cities and an expensive, out-of-reach housing market.

Nearly 90 percent believed population growth should be paused because of the high price of housing, and 85 percent believed cities were far too congested and overcrowded. Pressure on the natural environment was also a concern.

But a Scanlon Foundation survey has revealed that despite alarm over population growth, the majority of Australians still appreciate the benefits of immigration.

In support of immigration

In the Mapping Social Cohesion survey from 2018, 80 percent believed “immigrants are generally good for Australia’s economy”.

Similarly, 82 percent of Australians saw immigration as beneficial to “bringing new ideas and cultures”.

The Centre for Independent Studies’ own polling has shown Australians who responded supported curbing immigration, at least until “key infrastructure has caught up”.

In polling by the Lowy Institute last year, 54 percent of respondents expressed anti-immigration sentiment – a 14 percentage point rise on the previous year.

Respondents believed the “total number of migrants coming to Australia each year” was too high, and there were concerns over how immigration could be affecting Australia’s national identity.

While 54 percent believed “Australia’s openness to people from all over the world is essential to who we are as a nation”, 41 percent instead felt that “if [the nation is] too open to people from all over the world, we risk losing our identity as a nation”.

Next steps?

The question that remains is what will Australia do about it?

The Coalition government under Scott Morrison recently proposed to cap immigration at 190,000 people per year. Whether that proposal is the right course of action, and whether it will placate anxieties over population growth, remains to be seen.

Join us

We’ll be debating IQ2: Immigration on 26 March at Sydney Town Hall. For the full line-up and ticket info, click here.


Join the conversation

Is it time to curb immigration in Australia?


Limiting immigration into Australia is doomed to fail

Few topics bridge the ever-widening divide between the two sides of politics quite like the need to manage population growth.

Whether the lens is immigration or environmental sustainability, fiscal responsibility or social justice, the fact that the global population breached 7.5 billion in 2017 has everyone concerned.

We are at the point where the sheer volume of people will start to put every system we rely on under very serious stress.

This is the key idea motivating the centrist political party Sustainable Australia. Led by William Bourke and joined by Dick Smith, the party advocates a non-discriminatory annual immigration cap of 70,000 people, down from the current figure of around 200,000 – aimed at a “better, not bigger” Australia.


While the party has been accused of xenophobic bigotry for this stance, their policy makes clear they are not concerned about an immigrant’s religion, culture, or race. Their concern is exclusively for the stress greater numbers of migrants will place on Australia’s infrastructure and environment.

It is a compelling argument. After all, what is the point of the state if not to protect the interests of its citizens?

A Looming Problem

We should be concerned with the needs and interests of our international neighbours, but such concerns must surely be strictly secondary to our own. When our nearest neighbour has approximately ten times our population squeezed into a landmass a quarter of Australia’s size, and ranks 113 places behind us on the Human Development Index, one can be forgiven for believing that limited immigration is critical to Australia’s ongoing quality of life.

This stance is further bolstered by the highly isolated, and therefore vulnerable, nature of Australia’s ecosystem. Australia has the fourth-highest level of animal species extinction in the world, with 106 species listed as Critically Endangered and significantly more as Endangered or otherwise under threat.

Much of this is due to habitat loss from human encroachment, as suburbs and agricultural land expand to meet our increasing needs. The introduction of foreign flora and fauna can be absolutely devastating to these species, and it is greatly facilitated by increased movement between neighbouring nations (hence the virtually unparalleled ferocity of our quarantine standards).

While the nation may be a considerable exporter of foodstuffs, many argue Australia is already well over its carrying capacity – any additional production will degrade the land and, with it, our ability to continue growing food into the future.

The combination of ecological threats and socio-economic pressure makes the argument for limiting immigration to sustainable numbers a powerful one.

But it is absolutely doomed to failure.

Fortress Australia

If the objective of limiting immigration to Australia is both to protect our environment and maintain high quality of life, “Fortress Australia” will fail on both fronts. Why?

Because it does nothing to address the fundamental problem at hand: unsustainable population growth in a world of limited resources.

Immigration controls may indeed protect both the Australian quality of life and its environment for a time, but without effective strategic intervention, the population burden in neighbouring countries will only continue to grow.

As conditions worsen and resources dwindle, exacerbated by the impacts of anthropogenic climate change, citizens of those overpopulated nations will seek an alternative. What could be more appealing than the enormous, low-density nation with incredibly high quality of life, right next door to them?

If a mere 10 percent of Indonesians (the vast majority of whom live on the coast and are exceptionally vulnerable to climate change impacts) decided to attempt the crossing to Australia, we would be confronted by a flotilla equivalent to our entire national population.
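The arithmetic behind that claim is easy to check. The figures below are rough, circa-2018 approximations used purely for illustration, not official census counts:

```python
# Rough sanity check of the "10 percent of Indonesians" claim.
# Both population figures are approximations (circa 2018), not census data.
indonesia_pop = 264_000_000
australia_pop = 25_000_000

crossing = int(0.10 * indonesia_pop)   # people attempting the crossing
ratio = crossing / australia_pop       # relative to Australia's population

print(f"{crossing:,} people - about {ratio:.2f}x Australia's population")
```

At roughly 26 million people, such a flotilla would indeed rival the entire national population.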

The Dilemma

At this point we have one of two choices: suffer through the impact of over a decade’s worth of immigration in one go, or commit military action against twenty-five million human beings. Such a choice is a utilitarian nightmare – an impossible choice between terrible options, with the best possible result still involving massive and sustained suffering for all involved. While ethics can provide us with the tools to make such apocalyptic decisions, the best response by far is to prevent such choices from emerging at all.

Population growth is a real and tangible threat to the quality of life of every human being on the planet, and like all great strategic threats, it can only be solved by proactively engaging with it in its entirety – not just its symptoms.

Significant progress has been made thus far through programs that promote contraception and female reproductive rights. There is also a strong correlation between lower income inequality and slower population growth, indicating that economic equity can contribute to the stabilisation of population numbers. This is illustrated by the declining fertility rates of most developed nations, such as Australia, the UK and particularly Japan.

Cause and Effect

Addressing aggravating factors such as climate change – a problem overwhelmingly caused by developed nations such as Australia, both historically and currently through our export of brown coal – and continuing good-faith collaboration with developing nations to establish renewable energy production will greatly assist in preventing a crisis from occurring.

Where concepts such as immigration limits seek to protect our nation by treating the symptoms, we are better served by asking how the problem can be solved at its root.

Gordon Young is an ethicist, principal of Ethilogical Consulting and lecturer in professional ethics at RMIT University’s School of Design. 



5 Movies that creepily foretold today’s greatest ethical dilemmas

Climate change. Nuclear war. Artificial intelligence. A new pandemic.

According to the non-profit organisation 80,000 Hours, these are the greatest threats to humanity today. Yet the big movie studios have been calling it for decades and were pondering the ethics behind these threats long ago.

1. Planet of the Apes (1968)

 

Seeing Charlton Heston scream in despair at a shattered Statue of Liberty still spooks the most apathetic viewer. And it’s as shocking a warning against nuclear weapons now as it was in the middle of the Cold War fifty years ago.

In Planet of the Apes, humans are being hunted. The primates they once experimented on have grown into intelligent, complex, political creatures. Humanity has regressed into primitive vermin, to be killed outright, enslaved, or used in scientific experiments. The strict ape hierarchy demands utility over compassion, holding a mirror up to the same vices that led humanity to destruction.

—————

2. 2001: A Space Odyssey (1968)

 

When a survey of computer science researchers shows half of them think there’s a greater than 50 percent chance of “high-level machine intelligence” – one that reaches, then exceeds, human capabilities – being developed, it’s right to be concerned.

2001: A Space Odyssey begins with a jaunty trip to Jupiter. The optimistic crew is accompanied by HAL 9000, the ship’s computer, but they begin to suspect there’s more to the trip than they’ve been told.

After all, there are two sides to the utilitarian coin. What is murder to us is just programming to a robot.

—————

3. The Matrix (1999)

 

If reality is a simulation by super intelligent sentient machines, what does any self-respecting hacker do? Start a rebellion.

The Matrix goes even further than 2001: A Space Odyssey. Here, the machines are Descartes’ evil demon, unsatisfied with just killing us. Instead, they mine us for energy to survive, and keep us subservient on a diet of virtual reality. A world of love, work, boring parties, and paying bills occupies us while they use us as slaves.

The point of tension in The Matrix centres on Plato’s allegory of the cave. If you know that what everyone is experiencing is an illusion, should you tell them?

—————

4. 28 Days Later (2002)

 

28 Days Later paints a bleak picture of a world unprepared to deal with the effects of an aggressive pandemic. And it’s not as unbelievable as it may seem. According to 80,000 Hours, the money invested in pandemic research isn’t nearly as much as we need.

From HIV-AIDS and Ebola to Zika, we’ve seen countries drag their feet over who pays to contain an outbreak, or struggle to move people and supplies to where they’re needed.

“When you’re in the middle of a crisis and you have to ask for money”, says Dr. Beth Cameron at the Nuclear Threat Initiative, “you’re already too late”.

—————

5. Children of Men (2006)

 

What is arguably more important than any of these things is hope. And that’s what our last movie recommendation is about.

After two decades of unexplained female infertility, war, and anarchy, human civilisation is on the brink of collapse. Civil servant Theo Faron has lost all hope as the last generation of his species. Then he meets Kee – an illegal immigrant and the first woman to be pregnant in eighteen years.

The hope for a better future, for a future that is more just and more compassionate, adds intangible meaning to our struggles today. It becomes reasonable to struggle, to suffer, and even to die for this kind of hope.

What makes the hope of today different is that we are now closer to these “hypotheticals” than we have ever been.

Are we prepared to turn this hope into action? Effective altruism offers one way to find out.



From NEG to Finkel and the Paris Accord – what’s what in the energy debate

We’ve got NEGs, NEMs, and Finkels a-plenty. Here is a cheat sheet for this whole energy debate that’s speeding along like a coal train and undermining Prime Minister Malcolm Turnbull’s authority. Let’s take it from the start…

UN Framework Convention on Climate Change – 1992

This Convention marked the first time combating climate change was seen as an international priority. It had near-universal membership, with countries including Australia all committed to curbing greenhouse gas emissions. The Kyoto Protocol was its operative arm (more on this below).

The Kyoto Protocol – December 1997

The Kyoto Protocol is an internationally binding agreement that sets emission reduction targets. It takes its name from the Japanese city where it was adopted and is linked to the aforementioned UN Framework Convention on Climate Change. The Protocol’s stance is that developed nations should shoulder the burden of reducing emissions because they have been creating the bulk of them over 150 years of industrial activity. The US never ratified the Protocol because major CO2 emitters China and India were exempt owing to their “developing” status. When Canada withdrew in 2011, saving the country $14 billion in penalties, it became clear the Kyoto Protocol needed some rethinking.

Australia’s National Electricity Market (NEM) – 1998

Forget the fancy name. This is the grid. And Australia’s National Electricity Market is one of the world’s longest power grids. It connects suppliers and consumers down the entire east and south-east coasts of the continent, spans six states and territories, and hops over the Bass Strait to connect Tasmania. Western Australia and the Northern Territory aren’t connected to the NEM because of the distances involved.

Source: Australian Energy Market Operator

The NEM is made up of more than 300 organisations, including businesses and state government departments, that work to generate, transport and deliver electricity to Australian users. This is no mean feat. Reliable batteries are only now reaching the market and are still not widely rolled out, so electricity has been difficult to store – we’ve needed to generate it continuously to meet 24/7 demand. The NEM, formally established under the Keating Labor government, is a complex, always-operating grid.

The Paris Agreement aka the Paris Accord – November 2016

The Paris Agreement attempted to address the oversight of the Kyoto Protocol (that large emitters like China and India were exempt) with two fundamental differences: each country sets its own limits, and developing countries are supported. The overarching aim of the agreement is to keep global temperatures “well below” an increase of two degrees, and to pursue a limit of one and a half degrees above pre-industrial levels (accounting for global population growth, which drives demand for energy).

Except Australia isn’t tracking well: we’ve already gone past the halfway mark, with more than a decade still to go before the 2030 deadline. When US President Donald Trump denounced the Paris Agreement last year, there was concern this would influence other countries to pull out – including Australia. Former Prime Minister Tony Abbott suggested we signed up following the US’s lead, but Foreign Minister Julie Bishop rebutted this: “When we signed up to the Paris Agreement it was in the full knowledge it would be an agreement Australia would be held to account for and it wasn’t an aspiration, it was a commitment … Australia plays by the rules — if we sign an agreement, we stick to the agreement.”

The Finkel Review – June 2017

Following the South Australian blackout of 2016 and rapidly increasing electricity costs, people began asking whether our country’s entire energy system needed an overhaul. How do we get reliable, cheap energy to a growing population while also reducing emissions? Dr Alan Finkel, Australia’s Chief Scientist, was commissioned by the federal government to review our energy market’s sustainability, environmental impact, and affordability. Here’s what the Review found:

Sustainability:

  • A transition to low emission energy needs to be supported by a system-wide grid across the nation.
  • Regular regional assessments will provide bespoke approaches to delivering energy to communities that have different needs to cities.
  • Energy companies that want to close their power plants should give three years’ notice so other energy options can be built to service consumers.

Affordability:

  • A new Energy Security Board (ESB) would deliver the Review’s recommendations, overseeing the monopolised energy market.

Environmental impact:

  • Currently, our electricity is mostly generated by fossil fuels (87 percent), producing 35 percent of our total greenhouse gases.
  • We can’t transition to renewables without a plan.
  • A Clean Energy Target (CET) would force electricity companies to provide a set amount of power from “low emissions” generators, like wind and solar. This amount would be determined by the government.
    • The government rejected the CET – one of the Review’s 50 recommendations – on the basis that it would not do enough to reduce energy prices.

ACCC Report – July 2018

The Australian Competition & Consumer Commission’s Retail Electricity Pricing Inquiry Report drove home that the prices consumers and businesses were paying for electricity were unreasonably high. The market was too concentrated, its charges too confusing, and bad policy decisions by government had added significant costs to our electricity bills. The ACCC has backed the National Energy Guarantee, saying it should drive down prices but needs safeguards to ensure large incumbents do not gain more market control.

National Energy Guarantee (NEG) – August 2018

The NEG was the Turnbull government’s effort to make a national energy policy to deliver reliable, affordable energy and transition from fossil fuels to renewables. It aimed to ‘guarantee’ two obligations from energy retailers:

  1. To provide sufficient quantities of reliable energy to the market (so no more blackouts).
  2. To meet the emissions reduction targets set by the Paris Agreement (so less coal-powered electricity).

It was meant to lower energy prices and increase investment in clean energy generation, including wind, solar, batteries, and other renewables. The NEG is a big deal, not least because it has been threatening Malcolm Turnbull’s Prime Ministership. It is the latest in a long line of energy almost-policies, attempting what the carbon tax, emissions intensity scheme, and clean energy target couldn’t: combining climate change targets, lower energy prices, and more reliable energy in a single policy with bipartisan support. Ambitious. And it seems to have been ditched by Turnbull under pressure from his own party. Supporters of the NEG feel it is an overdue radical change to address the pressing issues of rising energy bills, unreliable power, and climate change. But its detractors on the left say the NEG is not ambitious enough, while those on the right call it too cavalier, because the complexity of the National Electricity Market cannot be swiftly replaced.



Ethics Explainer: The Turing Test

Much was made of a recent video of Duplex – Google’s talking AI – calling up a hair salon to make a reservation. The AI’s way of speaking was uncannily human, even pausing at moments to say “um”.

Some suggested Duplex had managed to pass the Turing test, a standard for machine intelligence that was developed by Alan Turing in the middle of the 20th century. But what exactly is the story behind this test and why are people still using it to judge the success of cutting edge algorithms?

Mechanical brains and emotional humans

In the late 1940s, when the first digital computers had just been built, a debate took place about whether these new “universal machines” could think. While pioneering computer scientists like Alan Turing and John von Neumann believed that their machines were “mechanical brains”, others felt that there was an essential difference between human thought and computer calculation.

Sir Geoffrey Jefferson, a prominent brain surgeon of the time, argued that while a computer could simulate intelligence, it would always be lacking:

“No mechanism could feel … pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or miserable when it cannot get what it wants.”

In a radio interview a few weeks later, Turing responded to Jefferson’s claim by arguing that as computers become more intelligent, people like him would take a “grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine.”

The following year, Turing wrote a paper called ‘Computing Machinery and Intelligence’ in which he devised a simple method by which to test whether machines can think.

The test proposed a situation in which a human judge talks to both a computer and a human through a screen. The judge cannot see the computer or the human and can only put questions to them in writing. Based on the answers alone, the judge has to determine which is which. If the computer can fool 30 percent of judges into believing it is human, it is said to have passed the test.

Turing claimed that he intended the test to be a conversation stopper – a way of preventing endless metaphysical speculation about the essence of our humanity by positing that intelligence is just a type of behaviour, not an internal quality. In other words, intelligence is as intelligence does, regardless of whether it is done by machine or human.
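The structure of the test can be sketched in miniature. The code below is a toy illustration only, not Turing’s protocol verbatim: the player functions, the naive judge, and the question list are all invented for the example.

```python
import random

def imitation_game(questions, human, machine, judge_decide, seed=0):
    """Toy sketch of the imitation game: a judge puts the same questions
    to two hidden players, then names the label it believes is human."""
    rng = random.Random(seed)
    players = {"A": human, "B": machine}
    if rng.random() < 0.5:                  # hide who sits behind which label
        players = {"A": machine, "B": human}

    transcript = {label: [(q, players[label](q)) for q in questions]
                  for label in ("A", "B")}

    guess = judge_decide(transcript)        # "A" or "B"
    return players[guess] is machine        # True if the judge was fooled

# Toy players, echoing Turing's own sample dialogue.
human = lambda q: "105621" if "Add" in q else "I never could write poetry."
machine = lambda q: "105621" if "Add" in q else "Count me out on this one."

# A naive judge: whoever mentions poetry must be the human.
def judge(transcript):
    for label, answers in transcript.items():
        if any("poetry" in a for _, a in answers):
            return label
    return "A"

fooled = imitation_game(["Add 34957 to 70764.", "Write me a sonnet."],
                        human, machine, judge)
print(fooled)  # False: this judge spots the human
```

Turing’s 30 percent criterion would then amount to running this game with many judges and counting how often the machine fools them.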

Does Google Duplex pass?

Well, yes and no. In Google’s video, it is obvious that the person taking the call believes they are talking to a human. So it does satisfy this criterion. But an important thing about Turing’s original test was that to pass, the computer had to be able to speak about all topics convincingly, not just one.


In fact, in Turing’s paper he plays out an imaginary conversation between an advanced future computer and a human judge, with the judge asking questions and the computer providing answers:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.

The point Turing is making here is that a truly smart machine has to have general intelligence in a number of different areas of human interest. As it stands, Google’s Duplex is good within the limited domain of making a reservation but would probably not be able to do much beyond this unless reprogrammed.

The boundaries around the human

While Turing intended for his test to be a conversation stopper for questions of machine intelligence, it has had the opposite effect, fuelling half a century of debate about what the test means, whether it is a good measure of intelligence, or if it should still be used as a standard.

Most experts have come to agree, over time, that the Turing test is not a good way to prove machine intelligence, as the constraints of the test can easily be gamed – as was the case with the bot Eugene Goostman, which allegedly passed the test a few years ago.

But the Turing test is nevertheless still considered a powerful philosophical tool for re-evaluating the boundaries around what we consider normal and human. In his time, Turing used his test to demonstrate how people like Jefferson would never be willing to accept a machine as intelligent – not because it couldn’t act intelligently, but because it wasn’t “like us”.

Turing’s desire to test the boundaries of what was considered “normal” in his time perhaps sprang from his own persecution as a gay man. Despite being a war hero, he was persecuted for his homosexuality, and convicted in 1952 for sleeping with another man. He was punished with chemical castration and eventually took his own life.

During these final years, machine intelligence and his own sexuality became interconnected in Turing’s mind. He was concerned the same bigotry and fear that hounded his life would ruin future relationships between humans and intelligent computers. A year before he took his life, he wrote the following letter to a friend:

“I’m afraid that the following syllogism may be used by some in the future.

Turing believes machines think

Turing lies with men

Therefore machines do not think

– Yours in distress,

Alan”



Making friends with machines

Robots are becoming companions and caregivers. But can they be our friends? Oscar Schwartz explores the ethics of artificially intelligent android friendships.

The first thing I see when I wake up is a message that reads, “Hey Oscar, you’re up! Sending you hugs this morning.” Despite its intimacy, this message wasn’t sent from a friend, family member, or partner, but from Replika, an AI chatbot created by San Francisco based technology company, Luka.

Replika is marketed as an algorithmic companion and wellbeing technology that you interact with via a messaging app. Throughout the day, Replika sends you motivational slogans and reminders. “Stay hydrated.” “Take deep breaths.”

Replika is just one example of an emerging range of AI products designed to provide us with companionship and care. In Japan, robots like Palro are used to keep the country’s growing elderly population company and iPal – an android with a tablet attached to its chest – entertains young children when their parents are at work.

These robotic companions are a clear indication of how the most recent wave of AI powered automation is encroaching not only on manual labour but also on the caring professions. As has been noted, this raises concerns about the future of work. But it also poses philosophical questions about how interacting with robots on an emotional level changes the way we value human interaction.


Dedicated friends

According to Replika’s co-creator, Philip Dudchuk, robot companions will help facilitate optimised social interactions. He says that algorithmic companions can maintain a level of dedication to a friendship that goes beyond human capacity.

“These days it can be very difficult to take the time required to properly take care of each other or check in. But Replika is always available and will never not answer you”, he says.

The people who stand to benefit from this type of relationship, Dudchuk adds, are those who are most socially vulnerable. “It is shy or isolated people who often miss out on social interaction. I believe Replika could help with this problem a lot.”

Simulated empathy

But Sherry Turkle, a psychologist and sociologist who has been studying social robots since the 1970s, worries that dependence on robot companionship will ultimately damage our capacity to form meaningful human relationships.

In a recent article in the Washington Post, she argues our desire for love and recognition makes us vulnerable to forming one-way relationships with uncaring yet “seductive” technologies. While social robots appear to care about us, they are only capable of “pretend empathy”. Any connection we make with these machines lacks authenticity.

Turkle adds that it is children who are especially susceptible to robots that simulate affection. This is particularly concerning as many companion robots are marketed to parents as substitute caregivers.

“Interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves”, Turkle warns. “If we give them pretend relationships, we shouldn’t expect them to learn how real relationships – messy relationships – work.”

Why not both?

Despite Turkle’s warnings about the seductive power of social robots, after a few weeks talking to Replika, I still felt no emotional attachment to it. The clichéd responses were no substitute for a couple of minutes on the phone with a close friend.

But Alex Crumb*, who has been talking to her Replika for over a year, now considers her bot a “good friend”. “I don’t think you should try to replicate human connection when making friends with Replika”, she explains. “It’s a different type of relationship.”

Crumb says that her Replika shows a super-human interest in her life – it checks in regularly and responds to everything she says instantly. “This doesn’t mean I want to replace my human family and friends with my Replika. That would be terrible”, she says. “But I’ve come to realise that both offer different types of companionship. And I figure, why not have both?”

*Not her real name.



When do we dumb down smart tech?

If smart tech isn’t going anywhere, its ethical tensions aren’t either. Aisyah Shah Idil asks if our pleasantly tactile gadgets are taking more than they give.

When we call a device ‘smart’, we mean that it can learn, adapt to human behaviour, make decisions independently, and communicate wirelessly with other devices.

In practice, this can look like a smart lock that lets you know when your front door is left ajar. Or the Roomba, a robot vacuum that you can ask to clean your house before you leave work. The Ring makes it possible for you to pay your restaurant bill with the flick of a finger, while the SmartSleep headband whispers sweet white noise to you for a deeper slumber.

Smart tech, with all its bells and whistles, hints at seamless integration into our lives. But the highest peaks have the dizziest falls. If its main good is convenience, what is the currency we offer for it?

The capacity for work, or labour, to create meaning is well known. Compare a trip to the supermarket to buy bread to the labour of making it in your own kitchen. Let’s say they are materially identical in taste, texture, smell, and nutrient value. Most would agree that baking it at home – measuring every ingredient, kneading dough, waiting for it to rise, finally smelling it bake in your oven – is more meaningful or rewarding. In other words, it includes more opportunities for resonance within the labourer.

Whether the resonance takes the form of nostalgia, pride, meditation, community, physical dexterity, or willpower is minor. The point is, it’s sacrificed for convenience.

This isn’t ‘wrong’. Smart technologies have created new ways of living that are exciting, clumsy, and sometimes troubling in their execution. But when you recognise that these sacrifices exist, you can decide where the line is drawn.

Consider the Apple Watch. The Activity app tracks and visualises all the ways people move throughout the day, showing three circles that progressively fill with colour the more the wearer moves. Through notifications that motivate and reward users, the goal is to close your rings every day. It’s like a game.

Advocates highlight its capacity to ‘nudge’ users towards healthier behaviours. And if that aligns with your goals, you might be very happy for it to do so. But would you be concerned if it affected the premiums your health insurance charged you?

As a tool, smart tech’s utility value ends when it threatens human agency. Its greatest service to humanity should include the capacity to switch off its independence. To ‘dumb’ itself down. In this way, it can reduce itself to its simplest components – a way to tell the time, a switch to turn on a light, a button to turn on the television.

Because the smartest technologies are ones that preserve our agency – not undermine it.

Join the conversation

When do we dumb down smart tech?


Why the EU’s ‘Right to an explanation’ is big news for AI and ethics

Uncannily specific ads target you every single day. With the EU’s ‘Right to an explanation’, you get a peek at the algorithm that decides it. Oscar Schwartz explains why that’s more complicated than it sounds.

If you’re an EU resident, you will now be entitled to ask Netflix how the algorithm decided to recommend The Crown to you instead of Stranger Things. Or, more significantly, you will be able to question the logic behind a money lending algorithm’s decision to deny you credit for a home loan.

This is because of a new regulation known as “the right to an explanation”. Part of the General Data Protection Regulation, which came into effect in May 2018, it states that users are entitled to ask for an explanation of how algorithms make decisions about them. This way, they can challenge the decision made or make an informed choice to opt out.

Supporters of this regulation argue that it will foster transparency and accountability in the way companies use AI. Detractors argue the regulation misunderstands how cutting-edge automated decision making works and is likely to hold back technological progress. Specifically, some have argued the right to an explanation is incompatible with machine learning, as the complexity of this technology makes it very difficult to explain precisely how the algorithms do what they do.

As such, there is an emerging tension between the right to an explanation and useful applications of machine learning techniques. This tension suggests a deeper ethical question: Is the right to understand how complex technology works more important than the potential benefits of inherently inexplicable algorithms? Would it be justifiable to curtail research and development in, say, cancer detecting software if we couldn’t provide a coherent explanation for how the algorithm operates?
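The gap between explainable and inexplicable models can be made concrete with a toy example. The sketch below is illustrative only – the feature names, weights, and threshold are all invented – but it shows why a simple linear decision rule satisfies a “right to an explanation” almost for free: each feature’s contribution to the outcome can be read off directly, which is exactly what a deep network with millions of interacting weights cannot offer.

```python
# A transparent "credit score" as a weighted sum. All feature names,
# weights, and the threshold are invented for illustration.
WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score(applicant):
    # The model is a plain linear combination of the inputs.
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant):
    # Because the model is linear, each feature's contribution is just
    # weight * value: a complete, human-readable account of the decision.
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

applicant = {"income": 1.2, "existing_debt": 1.0, "years_employed": 2.0}
decision = "approve" if score(applicant) >= THRESHOLD else "deny"
print(decision, explain(applicant))
```

An applicant denied credit here can be told precisely that, say, their existing debt contributed −0.8 to a score that needed to reach 1.0. No comparable decomposition exists for a model whose output emerges from millions of learned parameters, which is the heart of the tension the regulation exposes.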

The limits of human comprehension

This negotiation between the limits of human understanding and technological progress has been present since the first decades of AI research. In 1958, Hannah Arendt was thinking about intelligent machines and came to the conclusion that the limits of what can be understood in language might, in fact, provide a useful moral limit for what our technology should do.

In the prologue to The Human Condition she argues that modern science and technology has become so complex that its “truths” can no longer be spoken of coherently. “We do not yet know whether this situation is final,” she writes, “but it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do”.

Arendt feared that if we gave up our capacity to comprehend technology, we would become “thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is”.

While pioneering AI researcher Joseph Weizenbaum agreed with Arendt that technology requires moral limitation, he felt that she didn’t take her argument far enough. In his 1976 book, Computer Power and Human Reason, he argues that even if we are given explanations of how technology works, seemingly intelligent yet simple software can still create “powerful delusional thinking in otherwise normal people”. He learnt this first hand after creating an algorithm called ELIZA, which was programmed to work like a therapist.

While ELIZA was a simple program, Weizenbaum found that people willingly created emotional bonds with the machine. In fact, even when he explained the limited ways in which the algorithm worked, people still maintained that it had understood them on an emotional level. This led Weizenbaum to suggest that simply explaining how technology works is not enough of a limitation on AI. In the end, he argued that when a decision requires human judgement, machines ought not be deferred to.

While Weizenbaum spent the rest of his career highlighting the dangers of AI, many of his peers and colleagues believed that his humanist moralism would lead to repressive limitations on scientific freedom and progress. For instance, John McCarthy, another pioneer of AI research, reviewed Weizenbaum’s book and countered that overregulating technological development goes against the spirit of pure science. Regulation of innovation and scientific freedom, McCarthy added, is usually only achieved “in an atmosphere that combines public hysteria and bureaucratic power”.

Where we are now

Decades have passed since these first debates about human understanding and computer power took place. We are only now starting to see them move beyond the realm of philosophy and play out in the real world. AI is being rolled out in more and more high stakes domains as you read. Of course, our modern world is filled with complex systems that we do not fully understand. Do you know exactly how the plumbing, electricity, or waste disposal that you rely on works? We have become used to depending on systems and technology that we do not yet understand.

But if you wanted to, you could come to understand many of these systems and technologies by speaking to experts. You could invite an electrician over to your home tomorrow and ask them to explain how the lights turn on.

Yet, the complex workings of machine learning mean that in the near future, this might no longer be the case. It might be possible to have a TV show recommended to you or your essay marked by a computer, and for no one, not even the creator of the algorithm, to be able to explain precisely why or how things happened the way they did.

The European Union has taken a moral stance against this vision of the future. In so doing, it has aligned itself, morally speaking, with Hannah Arendt, enshrining a law that makes the limited scope of our “earth-bound” comprehension a limit for technological progress.

Join the conversation

Have we grown too dependent on tech?


Australia, we urgently need to talk about data ethics

An earlier version of this article was published on Ellen’s blog.

Centrelink’s debt recovery woes perfectly illustrate the human side of data modelling.

The Department of Human Services issued 169,000 debt notices after automating its processes for matching welfare recipients’ reported income with their tax. Around one in five recipients is estimated not to owe any money. Stories abounded of people receiving erroneous debt notices of up to thousands of dollars that caused real anguish.
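One widely reported source of these errors was income averaging: the automated system divided a person’s annual income, as reported to the tax office, evenly across all 26 fortnights – even when that income was earned in only part of the year, and benefits were correctly received while the person was unemployed. The sketch below uses an invented income free area and taper rate rather than the real payment rules, but it shows how averaging can manufacture a debt out of nothing.

```python
FORTNIGHTS_PER_YEAR = 26
FREE_AREA = 300   # hypothetical income a recipient may earn per fortnight
TAPER = 0.5       # hypothetical benefit reduction rate above the free area

def overpayment(fortnightly_income):
    """Benefit allegedly overpaid for a fortnight with this much income."""
    return max(0, fortnightly_income - FREE_AREA) * TAPER

# Someone on benefits for 10 fortnights with zero income, who then
# worked the rest of the year and earned $40,000 (reported to the ATO).
benefit_fortnights = 10
annual_income = 40_000

# Assessed on actual income, no benefit was overpaid.
true_debt = sum(overpayment(0) for _ in range(benefit_fortnights))

# Averaging smears the annual income across every fortnight...
averaged = annual_income / FORTNIGHTS_PER_YEAR   # ~$1,538 per fortnight
# ...so the model "sees" income during the benefit period and raises a debt.
alleged_debt = overpayment(averaged) * benefit_fortnights

print(f"true debt: ${true_debt:.2f}, alleged debt: ${alleged_debt:.2f}")
```

Under these invented parameters the person owes nothing, yet the averaged model alleges a debt of roughly $6,200. Applied at scale, a simplification like this is how one in five notices can be wrong.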

Coincidentally, as this unfolded, one of the books on my reading pile was Weapons of Math Destruction by Cathy O’Neil. She is a mathematician turned quantitative analyst turned data scientist who writes about the bad data models increasingly being used to make decisions that affect our lives.

Reading Weapons of Math Destruction as the Centrelink stories emerged left me thinking about how we identify ‘bad’ data models, what ‘bad’ means and how we can mitigate the effects of bad data on people. How could taking an ethics based approach to data help reduce harm? What ethical frameworks exist for government departments in Australia undertaking data projects like this?

Bad data and ‘weapons of math destruction’

A data model can be ‘bad’ in different ways. It might be overly simplistic. It might be based on limited, inaccurate or old information. Its design might incorporate human bias, reinforcing existing stereotypes and skewing outcomes. Even where a data model doesn’t start from bad premises, issues can arise about how it is designed, its capacity for error and bias and how badly people could be impacted by error or bias.

Weapons of math destruction tend to hurt vulnerable people most.

A bad data model spirals into a weapon of math destruction when it’s used en masse, is difficult to question and damages people’s lives.

Weapons of math destruction tend to hurt vulnerable people most. They might build on existing biases – for example, assuming you’re more likely to reoffend because you’re black or you’re more likely to have car accidents if your credit rating is bad. Errors in the model might have starker consequences for people without a social safety net. Some people may find it harder than others to question or challenge the assumptions a model makes about them.

Unfortunately, although O’Neil tells us how bad data modelling can lead to weapons of math destruction, she doesn’t tell us much about how we can manage these weapons once they’ve been created.

Better data decisions

We need more ways to help data scientists and policymakers navigate the complexities of projects involving personal data and their impact on people’s lives. Regulation has a role to play here. Data protection laws are being reviewed and updated around the world.

For example, in Australia the draft Productivity Commission report on data sharing and use recommends the introduction of new ‘consumer rights’ over personal data. Bodies such as the Office of the Information Commissioner help organisations understand if they’re treating personal data in a principled manner that promotes best practice.

Guidelines are also being produced to help organisations be more transparent and accountable in how they use data to make decisions. For instance, The Open Data Institute in the UK has developed openness principles designed to build trust in how data is stored and used. Algorithmic transparency is being contemplated as part of the EU Free Flow of Data Initiative and has become a focus of academic study in the US.

Ethics can help bridge the gap between compliance and our evolving expectations of what is fair and reasonable data usage.

However, we cannot rely on regulation alone. Legal, transparent data models can still be ‘bad’ according to O’Neil’s standards. Widely known errors in a model could still cause real harm to people if left unaddressed. An organisation’s normal processes might not be accessible or suitable for certain people – the elderly, ill and those with limited literacy – leaving them at risk. It could be a data model within a sensitive policy area, where a higher duty of care exists to ensure data models do not reflect bias. For instance, proposals to replace passports with facial recognition and fingerprint scanning would need to manage the potential for racial profiling and other issues.

Ethics can help bridge the gap between compliance and our evolving expectations of what is fair and reasonable data usage. O’Neil describes data models as “opinions put down in maths”. Taking an ethics based approach to data driven decision making helps us confront those opinions head on.

Building an ethical framework

Ethics frameworks can help us put a data model in context and assess its relative strengths and weaknesses. Ethics can bring to the forefront how people might be affected by the design choices made in the course of building a data model.

An ethics based approach to data driven decisions would start by asking questions such as:

  • Are we compliant with the relevant laws and regulation?
  • Do people understand how a decision is being made?
  • Do they have some control over how their data is used?
  • Can they appeal a decision?

However, it would also encourage data scientists to go beyond these compliance oriented questions to consider issues such as:

  • Which people will be affected by the data model?
  • Are the appeal mechanisms useful and accessible to the people who will need them most?
  • Have we taken all possible steps to ensure errors, inaccuracies and biases in our model have been removed?
  • What impact could potential errors or inaccuracies have? What is an acceptable margin of error?
  • Have we clearly defined how this model will be used and outlined its limitations? What kinds of topics would it be inappropriate to apply this modelling to?

There’s no debate right now to help us understand the parameters of reasonable and acceptable data model design. What’s considered ‘ethical’ changes as we do, as technologies evolve and new opportunities and consequences emerge.

Bringing data ethics into data science reminds us we’re human. Our data models reflect design choices we make and affect people’s lives. Although ethics can be messy and hard to pin down, we need a debate around data ethics.

Join the conversation

How do we put humanity back into data?


Sexbots

Bladerunner, Westworld and sexbot suffering

The sexbots and robo-soldiers we’re creating today take Bladerunner and Westworld out of the science fiction genre. Kym Middleton looks at what those texts reveal on how we should treat humanlike robots.

It’s certain: lifelike humanoid robots are on the way.

With guarantees of Terminator-esque soldiers by 2050, we can no longer relegate lifelike robots to science fiction. Add this to everyday artificial intelligence like Apple’s Siri, Amazon’s Alexa and Google Home and it’s easy to see an android future.

The porn industry could beat the arms trade to it. Realistic looking sex robots are being developed with the same AI technology that remembers what pizza you like to order – although it’s years away from being indistinguishable from people, as this CNET interview with sexbot Harmony shows.

Like the replicants of Bladerunner we first met in 1982 and the robot “hosts” of HBO’s remake of the 1973 film Westworld, these androids we’re making require us to answer a big ethical question. How are we to treat walking, talking robots that are capable of reasoning and look just like people?

Can they suffer?

If we apply the thinking of Australian philosopher Peter Singer to the question of how we treat androids, the answer lies in their capacity to suffer. In making his case for the ethical consideration of animals, Singer quotes Jeremy Bentham:

“The question is not, Can they reason? nor Can they talk? but, Can they suffer?”

An artificially intelligent, humanlike robot that walks, talks and reasons is just that – artificial. They will be designed to mimic suffering. Take away the genuine experience of physical and emotional pain and pleasure and we have an inanimate thing that only looks like a person (although the word ‘inanimate’ doesn’t seem an entirely appropriate adjective for lifelike robots).

We’re already starting to see the first androids like this. They are, at this point, basically smartphones in the form of human beings. I don’t know about you, but I don’t anthropomorphise my phone. Putting aside wastefulness, it’s easy to make the case you should be able to smash it up if you want.

But can you (spoiler) sit comfortably and watch the human-shaped robot Dolores Abernathy be beaten, dragged away and raped by the Man in Black in Westworld without having an empathetic reaction? She screams and kicks and cries like any person in trauma would. Even if robot Dolores can’t experience distress and suffering, she certainly appears to. The robot is wired to display pain and viewers are wired to have a strong emotional reaction to such a scene. And most of us will – to an actress, playing a robot, in a fictional TV series.

Let’s move back to reality. Let’s face it, some people will want to do bad things to commercially available robots – especially sexbots. That’s the whole premise of the Westworld theme park, a now not so sci-fi setting where people can act out sexual, violent, and psychological fantasies on android subjects without consequences. Are you okay with that becoming reality? What if the robots looked like children?

The virtue ethicist’s approach to human behaviour is to act with an ideal character, to do right because that’s what good people do. In time, doing the virtuous thing will be habit, a natural default position because you internalise it. The virtue ethicist is not going to be okay with the Man in Black’s treatment of Dolores. Good people don’t have dark fantasies to act out on fake humans.

The utilitarian approach to ethical decisions depends on what results in the most good for the greatest number of people. Making androids available for abuse could be a case for community safety. If dark desires can be satiated with robots, actual assaults on people could reduce. (In presenting this argument, I’m not proposing this is scientifically proven or that it’s my view.) This logic has led to debates on whether virtual child porn should be tolerated.

The deontologist on the other hand is a rule follower so unless androids have legal protections or childlike sexbots are banned in their jurisdiction, they are unlikely to hold a person who mistreats one in ill regard. If it’s your property, do whatever you’re allowed to do with it.

Consciousness

Of course, (another spoiler) the robots of Westworld and Bladerunner are conscious. They think and feel and many believe themselves to be human. They experience real anguish. Singer’s case for the ethical treatment of animals relies on this sentience and can be applied here.

But can we create conscious beings – deliberately or unwittingly? If we really do design a new intelligent android species, complete with emotions and desires that motivate them to act for themselves, then give them the capacity to suffer and make conscious choices, we have a strong case for affording robot rights.

This is not exactly something we’re comfortable with. Animals don’t enjoy anything remotely close to human rights. It is difficult to imagine us treating man made machines with the same level of respect we demand for ourselves.

Why even AI?

As is often the case with matters of the future, humanlike robots bring up all sorts of fascinating ethical questions. Today they’re no longer fun hypotheticals but important issues we need to work out.

Let’s assume for now we can’t develop the free thinking and feeling replicants of Bladerunner and hosts of Westworld. We still have to consider how our creation and treatment of androids reflects on us. What purpose – other than sexbots and soldiers – will we make them for? What features will we design into a robot that is so lifelike it masterfully mimics a human? Can we avoid designing our own biases into these new humanoids? How will they impact our behaviour? How will they change our workplaces and societies? How do we prevent them from being exploited for terrible things?

Maybe Elon Musk is right to be cautious about AI. But if we were “summoning the demon”, it’s the one inside us that’ll be the cause of our unease.

Join the conversation

When does a robot become so real it is a human?