Blockchain: Some ethical considerations

The development and application of blockchain technologies give rise to two major ethical issues:

  • Meeting expectations – in terms of security, privacy, efficiency and the integrity of the system, and
  • Avoiding the inadvertent facilitation of unconscionable conduct – crime and oppressive conduct that would otherwise be constrained by a mediating institution

Neither issue is unique to blockchain. Neither is likely to be fatal to its application. However, both involve considerable risks if not anticipated and proactively addressed.

At the core of blockchain technology lies the operation of a distributed ledger in which multiple nodes independently record and verify changes to the chain. Those changes can signify anything – a change in ownership, an advance in understanding or consensus, an exchange of information. That is, the coding of the blockchain is independent of, and ‘symbolic’ of, a change in a separate and distinct real-world artefact (a physical object, a social fact – such as an agreement, a state of affairs, etc.).
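
To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python – not the code of any real blockchain – of a ledger in which each block records a symbolic ‘change’ and is chained to its predecessor by a hash, so that any node holding a copy can independently verify that the recorded history has not been altered:

    import hashlib
    import json

    def block_hash(block):
        # Hash a block's contents in a stable (sorted-key) JSON form.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_change(ledger, change):
        # Append a block recording an arbitrary real-world 'change'
        # (a transfer of ownership, an agreement, a piece of information).
        previous = ledger[-1]["hash"] if ledger else "genesis"
        block = {"index": len(ledger), "change": change, "previous_hash": previous}
        block["hash"] = block_hash(block)
        ledger.append(block)

    def verify(ledger):
        # Any node can re-run this check on its own copy of the ledger:
        # every block's hash must match its contents, and every block
        # must point at the hash of the block before it.
        for i, block in enumerate(ledger):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != block_hash(body):
                return False
            expected_previous = ledger[i - 1]["hash"] if i > 0 else "genesis"
            if block["previous_hash"] != expected_previous:
                return False
        return True

    ledger = []
    add_change(ledger, {"asset": "lot 42", "owner": "Alice"})
    add_change(ledger, {"asset": "lot 42", "owner": "Bob"})
    print(verify(ledger))                      # True
    ledger[0]["change"]["owner"] = "Mallory"   # tamper with history
    print(verify(ledger))                      # False – tampering is detectable

Real blockchains add consensus rules and cryptographic signatures on top of this basic chaining, but the idea of many independent copies that anyone can verify is the same.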

The potential power of blockchain technology lies in a form of distribution associated with a technically valid equivalent of ‘intersubjective agreement’. Just as in language the meaning of a word remains stable because of the agreement of multiple users of that word, so blockchain ‘democratises’ agreement that a certain state of affairs exists. Prior to the evolution of blockchain, the process of verification was undertaken by one (or a few) sources of authority – exchanges and the like. They were the equivalent of the old mainframe computers that dominated the computing landscape until challenged by PCs enabled by the internet and world wide web.

Blockchain promises greater efficiency (perhaps), security, privacy and integrity by removing the risk (and friction) that arises out of dependence on just one or a few nodes of authority. Indeed, at least some of the appeal of blockchain is its essentially ‘anti-authoritarian’ character.

However, the first ethical risk to be managed by blockchain advocates is the temptation to over-hype the technology’s potential and then over-promise what it can deliver. The risk of doing either can be seen at work in an analogous field – that of medical research. Scientists and technologists often feel compelled to announce ‘breakthroughs’ that, on closer inspection, barely merit that description. Money, ego, peer group pressure – these and other factors contribute to the tendency for the ‘new’ to claim more than can be delivered.

It’s not just that this can lead to disappointment – very real harm can befall the gullible. One can foresee an indeterminate period during which claims made for blockchain are out of step with what is technically possible. It all depends on the scope of blockchain’s ambitions – and on the ability of the distributed architecture to maintain the communications and processing power needed to manage an explosion in blockchain-related information.

Yet, this is the lesser of blockchain’s two major ethical challenges. The greater problem arises in conditions of asymmetry of power (bargaining power, information, kinetic force, etc.) – where blockchain might enable ‘transactions’ that are the product of force, fear and fraud. All three ‘evils’ destroy the efficiency of free markets – and from an ethical point of view, that is the least of the problems.

One advantage of mediating institutions is that they can provide a measure of supervision intended to identify and constrain the misuse of markets. They can limit exploitation or the use of systems for criminal or anti-social activity. The ‘dark web’ shows what can happen when there is no mediation. Libertarians applaud the degree of freedom it accords. However, others are justifiably concerned by the facilitation of conduct that violates the fundamental norms on which any functional society must be based. It is instructive that crypto-currencies (based on blockchain) are the media of exchange in the rankest regions of the dark web.

So, how do the designers and developers of blockchain avoid becoming complicit in evil? Can they do better than existing mediating institutions? May they ‘wash their hands’ even when their tools are used in the worst of human deeds?

Dr Simon Longstaff presented at The ADC Global Blockchain Summit in Adelaide on Monday 18 March on the issue of trust and the preservation of ethics in the transition to a digital world.


Not too late: regaining control of your data

IT entrepreneur Joanne Cooper wants consumers to be able to decide who holds – and uses – their data. This is why Alexa and Siri are not welcome in her home.

Joanne won’t go to bed with her mobile phone on the bedside table. It is not that she is worried about sleep disturbances – she is more concerned about the potential of hackers to use it as a listening device.

“Because I would be horrified if people heard how loud I snore,” she says.

She is only half-joking. As an entrepreneur in the field of data privacy, she has heard enough horror stories about the hijacking of devices to make her wary of things that most of us now take for granted.

“If my device, just because it happened to be plugged in my room, became a listening device, or a filming device, would that put me in a compromising position? Could I have a ransomware attack?”

(It can happen and has happened. Spyware and Stalkerware are openly advertised for sale.)

Taking back control

Cooper is the founder of ID Exchange – an Australian start-up aiming to let users control if, when and with whom they share their data. The idea is to simplify the process so that people can visit one platform to control access.

This is important because, at present, it is impossible to keep track of who has your data, how much access you have agreed to, and whether you have allowed it to be used by third parties. If you decide to revoke that access, the process is difficult and time-consuming.

Big data is big business

The data that belongs to you is liquid gold for businesses wanting to improve their offerings and pinpoint potential customers. It is also vital information for government agencies and a cash pot for hackers.

Apart from basic details such as your name, address and age, that data can reveal the people to whom you are connected, your finances, health, personality and preferences, and where you are located at any point in time.

That information is harvested from everyday interactions with social media, service providers and retailers. For instance, every time you answer a free quiz on Facebook, you are providing someone with data.

With digital identity and personal data-related services expected to be worth $1.58 trillion in the EU alone by 2020, Cooper asks whether we have consciously given permission for that data to be shared and used.

A lack of understanding

Do we realise what we have done when we tick a permission box among screens of densely-worded legalese? When we sign up to a loyalty program?

A study by the Consumer Policy Research Centre found that 94 per cent of those surveyed did not read privacy policies. Of those that did, two-thirds said they still signed up despite feeling uncomfortable and, of those, 73 per cent said they would not otherwise have been able to access the service.

And what are we getting in return for that data? Do we really want advertisers to know our weak points, such as when we are in a low mood and susceptible to “retail therapy”? Do we want them to conclude we are expecting a new baby before we have had a chance to announce it to our own families?

Even without criminal intent, limited control over the use of our data can have life-altering consequences when it is used against us in deciding whether we may qualify for insurance, a loan, or a job.

“It is not my intention to create fear or doubt or uncertainty about the future,” explains Cooper. “My passion is to drive education about how we have to become ‘self-accountable’ about the access to our data that will drive a trillion-dollar market.”

“Privacy is a Human Right.”

Cooper was schooled in technology and entrepreneurialism by her father, Tom Cooper, who was one of the Australian IT industry’s pioneers. In the 1980s, he introduced the first IBM-compatible, DOS-based computers into this country.

She started working in her father’s company at the age of 15 and has spent the past three decades in a variety of IT sectors, including the PC market, consulting with The Yankee Group, cloud services with Optus Australia, and financial services with Allianz Australia.

Starting ID Exchange in 2015, Cooper partnered with UK-based platform Digi.me, which aims to round up all the information that companies have collected on individuals, then hand it over to those individuals for safekeeping on a cloud storage service of their choosing. Cooper plans to add her own business on top, providing the technology that lets people opt in and out of sharing their data easily.

Cooper says she became passionate about the issue of data privacy in 2015, after watching a 60 Minutes television segment about hackers using mobile phones to bug, track and hack people through a “security hole” in the SS7 signaling system.

This “hole” was most recently used to drain bank accounts at Metro Bank in the UK, it was revealed in February.

Lawmakers aim to strengthen data protection

The new European General Data Protection Regulation is a step forward in regaining control of the use of data. Any Australian business that collects data on a person in the EU or has a presence in Europe must comply with the legislation that ensures customers can refuse to give away non-essential information.

If that company then refuses service, it can be fined up to 4 per cent of its global revenue. Companies are required to get clear consent to collect personal data, and individuals must be able to access the data stored about them, fix it if it is wrong, and have it deleted if they want.

The advance of the “internet of things” means that everyday objects are being computerised and are capable of collecting and transmitting data about us and how we use them. A robotic vacuum cleaner can, for instance, record the dimensions of your home. Smart lighting can take note of when you are home. Your car knows exactly where you have gone.

For this reason, Cooper says she will not have voice-activated assistants – such as Google’s Home, Amazon Echo’s Alexa or Facebook’s Portal – in her home. “It has crossed over the creepy line,” she says.

“All that data can be used in machine learning. They know what time you are in the house, what room you are in, how many people are in the conversation, keywords.”

Your data can be compromised

Speculation that Alexa is spying on us by storing our private conversations has been dismissed by fact-checking website Politifact, although researchers have found the device can be hacked.

The devices are “always-on” to listen for an activating keyword, but the ambient noise is recorded one second at a time, with each second dumped and replaced until it hears a keyword like “Alexa”.
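
Purely as an illustration – a simplified sketch, not Amazon’s or Google’s actual implementation, with a hypothetical wake word and pre-transcribed audio standing in for real signal processing – that ‘record a second, discard, repeat’ behaviour can be modelled as a short rolling buffer that is continuously overwritten until the keyword appears:

    from collections import deque

    WAKE_WORD = "alexa"      # hypothetical wake word for this sketch
    BUFFER_SECONDS = 1       # only the last second of audio is ever held

    def listen(audio_stream):
        # audio_stream yields one-second chunks of (already transcribed) audio.
        # Each chunk overwrites the previous one until the wake word appears;
        # nothing older than the buffer is kept or sent anywhere.
        buffer = deque(maxlen=BUFFER_SECONDS)
        for chunk in audio_stream:
            buffer.append(chunk)
            if any(WAKE_WORD in c.lower() for c in buffer):
                return record_command(audio_stream)
        return []

    def record_command(audio_stream, seconds=5):
        # Only once the wake word is heard does recording begin; in a real
        # device this is the audio that is sent on to the manufacturer's servers.
        return [next(audio_stream, "") for _ in range(seconds)]

    # Ambient chatter is overwritten and discarded; only what follows "alexa"
    # is captured.
    stream = iter(["the weather is awful", "pass the salt",
                   "alexa", "what time is it", "", "", "", ""])
    print(listen(stream))    # ['what time is it', '', '', '', '']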

However, direct commands to those two assistants are recorded and stored on company servers. That data, which can be reviewed and deleted by users, is used differently by each manufacturer.

Google uses the data to build out your profile, which helps advertisers target you. Amazon keeps the data to itself but may use that to sell you products and services through its own businesses. For instance, the company has been granted a patent to recommend cough sweets and soup to those who cough or sniff while speaking to their Echo.

In discussions about the rising concern over the use and misuse of our data, Cooper says she is frustrated by those who tell her that “privacy is dead” or “the horse has bolted”. She says it is not too late to regain control of our data.

“It is hard to fix, it is complex, it is a u-turn in some areas, but that doesn’t mean that you don’t do it.”

It was not that long ago that publicly disagreeing with your employer’s business strategy, or staging a protest without the protection of a union, would have been a sackable offence.

But not today – if you are among the business “elite”.

Last year, 4,000 Google employees signed a letter of protest about an artificial intelligence project with the Department of Defense. Google agreed not to renew the contract. No-one was fired.

Also at Google, employees won concessions after 20,000 of them walked out protesting the company’s handling of sexual harassment cases. Everyone kept their jobs.

Consulting firms Deloitte and McKinsey & Company, as well as Microsoft, have come under pressure from employees to end their work with US Immigration and Customs Enforcement (ICE) because of concerns about the separation of children from their illegal immigrant parents.

Amazon workers demanded the company stop selling its Rekognition facial recognition software to law enforcement.

Examples like these show that collective action at work can still take place, despite the decline of unionism, if the employees are considered valuable enough and the employer cares about its social standing.

The power shift

Charles Wookey, CEO of not-for-profit organisation A Blueprint for Better Business, says workers in these kinds of protests have “significant agency”.

“Coders and other technology specialists can demand high pay and have some power, as they hold skills in which the demand far outstrips the supply,” he told CEO Magazine.

Individual protesters and whistle-blowers, however, do not enjoy the same freedom to protest. Without a mass of colleagues behind them, they can face legal sanction or be fired for violating the company’s code of conduct – as was Google engineer James Damore when he wrote a memo criticising the company’s affirmative action policies in 2017.

Head of Society and Innovation at the World Economic Forum, Nicholas Davis, says technology has enabled employees to organise via message boards and email.

“These factors have empowered employee activism, organisation and, indeed, massive walkouts – not just around tech, by the way, but around gender and about rights and values in other areas,” he said at a forum for The Ethics Alliance in March.

Change coming from within

Davis, a former lawyer from Sydney, now based in Geneva, says even companies with stellar reputations in human rights, such as Salesforce, can face protests from within – in this case, also due to its work with ICE.

“There were protesters at [Salesforce annual conference] Dreamforce saying: ‘Guys, you’re providing your technology to customs and border control to separate kids from their parents?’,” he said.

Staff engagement and transparency

Salesforce responded by creating Silicon Valley’s first-ever Office of Ethical and Humane Use of Technology as a vehicle to engage employees and stakeholders.

“I think the most important thing is to treat it as an opportunity for employee engagement,” says Davis, adding that listening to employee concerns is a large part of dealing with these clashes.

“Ninety per cent of the problem was not [what they were doing] so much as the lack of response to employee concerns,” he says. Employers should talk about why the company is doing the work in question and respond promptly.

“After 72 hours, people think you are not taking this seriously and they say ‘I can get another job, you know’, start tweeting, contact someone in the ABC, the story is out and then suddenly there is a different crisis conversation.”

Davis says it is difficult to have a conversation about corporate social activism in Australia, where business leaders say they are getting resistance from shareholders.

“There’s a lot more space to talk about, debate, and being politically engaged as a management and leadership team on these issues. And there is a wider variety of ability to invest and partner on these topics than I perceive in Australia,” says Davis, who is also an adjunct professor with Swinburne University’s Institute for Social Innovation.

“It’s not an issue of courage. I think it’s an issue with openness and demand and shifting culture in those markets. This is a hard conversation to have in Australia. It seems more structurally difficult,” he says.

“From where I stand, Australia has far greater fractures in terms of the distance between the public, private and civil society sectors than any other country I work in regularly. The levels of distrust here in this country are far higher than average globally, which makes for huge challenges if we are to have productive conversations across sectors.”


If humans bully robots there will be dire consequences

HitchBOT was a cute hitchhiking robot made up of odds and ends as an experiment to see how humans respond to new technology. Two weeks into its journey across the United States, it was beheaded in an act of vandalism.

For most of its year-long “world tour” in 2015, the Wellington boot-wearing robot was met with kindness, appearing in “selfies” with the people who picked it up by the side of the road and took it to football games and art galleries.

However, the destruction of HitchBOT points to a darker side of human psychology – where some people will act out their more violent and anti-social instincts on a piece of human-like technology.

 

A target for violence

Manufacturers of robots are well aware that their products can become a target, with plenty of reports of wilful damage. Here’s a brief list of the kinds of bullying humans have inflicted on our robotic counterparts in recent years.

  • The makers of a wheeled robot that delivers takeaway food in business parks reported that people kick or flip over the machines for no apparent reason.
  • Homeless people in the US threw a tarpaulin over a patrolling security robot in a carpark and smeared barbeque sauce over its lenses.
  • Google’s self-driving cars have been attacked. Children in Japan have reportedly attacked robots in shopping malls, leading their designers to write programs to help them avoid small people.
  • Less than 24 hours after its launch, Microsoft’s chatbot “Tay” had been corrupted into a racist by social media users who encouraged its antisocial pronouncements.

Researchers speculated to the Boston Globe that the motives for these attacks could be boredom or annoyance at how the technology was being used. When you look at those examples together, is it fair to say we are becoming brutes?

Programming for human behaviour

While manufacturers want us to be kind to their robots, researchers are examining the ways human behaviour is changing in response to the use of technology.

Take the style of discourse on social media, for example. You don’t have to spend long on a Facebook or Twitter discussion before you are confronted with an example of written aggression.

“I think people’s communications skills have deteriorated enormously because of the digital age,” says Tania de Jong, founder and executive producer of the Creative Innovation summit, which will be held in Melbourne in April.

“It is like people slapping each other – slap, slap slap. It is like common courtesies that we took for granted as human beings are being bypassed in some way.”

Clinical psychologist Louise Remond says words typed online are easily misinterpreted. “The verbal component is only 7 per cent of the whole message and the other components are the tone and the body language and those things you get from interacting with a person.”

The dark power of anonymity

“The disinhibition of anonymity, where people will say things they would never utter if they knew they were being identified and observed, is another factor in poor online behaviour. But, even when people are identifiable, they sometimes lose sight of how many people can see their messages,” says Remond, who works at the Kidman Centre in Sydney.

Text messaging is abbreviated communication, says Dr Robyn Johns, Senior Lecturer in Human Resource Management at the University of Technology, Sydney. “So you lose that tone and the intention around it and it can come across as being quite coarse,” she says.

Is civility at risk?

If we are rude to machines, will we be rude to each other?

If you drop your usual polite attitude when dealing with a taxi-ordering chatbot, are you more likely to treat a human the same way? Possibly, says de Jong. The experience of call centre workers could be a bad omen: “A lot of people are rude to those workers, but polite to the people who work with them.”

“Perhaps there is a case to be made that we all need to be a lot more respectful,” says de Jong, who founded the non-profit Creativity Australia, which aims to unlock the creativity of employees.

“[As] a general rule, if we are going to act with integrity as whole human beings, we are not going to have different ways of talking to different things.”

 

The COO of “empathetic AI” company Sensum, Ben Bland, recently wrote that his company’s rule-of-thumb is to apply the principle of “don’t be a dick” to its interactions with AI.

“ … we should consider if being mean to machines will encourage us to become meaner people in general. But whether or not treating [digital personal assistant] Alexa like a disobedient slave will cause us to become bad neighbours, there’s a stickier aspect to this problem. What happens when AI is blended with ourselves?” he asks in a column published on Medium.com.

“With the adoption of tools such as intelligent prosthetics, the line between human and machine is increasingly blurry. We may have to consider the social consequences of every interaction, between both natural and artificial entities, because it might soon be difficult or unethical to tell the difference.”

Research Specialist at the MIT Media Lab, Dr Kate Darling, told CBC news in 2016 that research shows a relationship between people’s tendencies for empathy and the way they treat a robot.

“You know how it’s a red flag if your date is nice to you, but rude to the waiter? Maybe if your date is mean to Siri, you should not go on another date with that person.”

Research fellow at MIT Sloan School’s Center for Digital Business, Michael Schrage, has forecast that “ … being bad to bots will become professionally and socially taboo in tomorrow’s workplace”.

“When ‘deep learning’ devices emotionally resonate with their users, mistreating them feels less like breaking one’s mobile phone than kicking a kitten. The former earns a reprimand; the latter gets you fired,” he writes in the Harvard Business Review.

Need to practise human-to-human skills

Johns says we are starting to reach a “tipping point” where that online style of behaviour is bleeding into face-to-face interactions.

“There seems to be a lot more discussion around people not being able to communicate face-to-face,” she says.

When she was consulting to a large fast food provider recently, managers told her they had trouble getting young workers to interact with older customers who wanted help with the automated ordering system.

“They [the workers] hate that. They don’t want to talk to anyone. They run and hide behind the counter,” says Johns, a doctor of Philosophy with a background in human resources.

The young workers vie for positions “behind the scenes” whereas, previously, the serving positions were the most sought-after.

Johns says she expects to see etiquette classes making a comeback as employers and universities take responsibility for training people to communicate clearly, confidently and politely.

“I see it with graduating students, those who are able to communicate and present well are the first to get the jobs,” she says.

We watch and learn

Remond specialises in dealing with young people – immersed in cyber worlds since a very young age – and says there is a human instinct to connect with others, but the skills have to be practised.

“There is an element of hardwiring in all of us to be empathetic and respond to social cues,” she says.

Young people can practise social skills in a variety of real-life environments, rather than merely absorbing the poor role models they find on reality television shows.

“There are a lot of other influences. We learn so much from the social modelling of other people. You can walk into a work environment and watch how other people interact with each other at lunchtime.”

Remond says employers should ensure people who work remotely have opportunities to reconnect face-to-face. “If you are part of a team, you are going to work at your best when you feel a genuine connection with these people and you feel like you trust them and you feel like you can engage with them.”

 

This article was originally written for The Ethics Alliance.


How will we teach the robots to behave themselves?

The era of artificial intelligence (AI) is upon us. On one hand it is heralded as the technology that will reshape society, making many of our occupations redundant.

On the other, it’s talked about as the solution that will unlock an unfathomable level of processing efficiency, giving rise to widespread societal benefits and enhanced intellectual opportunity for our workforce.

Either way, one thing is clear – AI has the ability to deliver insights and knowledge at a velocity that would be impossible for humans to match, and it is altering the fabric of our societies.

 

The impact that comes with this wave of change is remarkable. For example, IBM Watson has been used for early detection of melanoma, something very close to home considering Australians and New Zealanders have the highest rates of skin cancer in the world. Watson’s diagnostic capacity exceeds that of most (if not all) human doctors.

Technologists in the AI space around the world are breaking new ground weekly – that is an exciting testament to humankind’s ability. In addition to advancements in healthcare, 2018 included milestones in AI being used for autonomous vehicles, with the Australian government announcing the creation of a new national office for future transport technologies in October.

However, the power to innovate creates proportionately equal risk and opportunity – technology with the power to do good can, in almost every case, be applied for bad. And in 2019 we must move this from an interesting dinner-party conversation to a central debate in businesses, government and society.

AI is a major area of ethical risk. It is being driven by technological design processes that are mostly void of robust ethical consideration – a concern that should be the top of the agenda for all of us. When technical mastery of any kind is divorced from ethical restraint the result is tyranny.

The knowledge that’s generated by AI will only ever be the cold logic of the machine. It lacks the nuanced judgment that humans have. Unless AI’s great processing power is met and matched with an equal degree of ethical restraint, the good it creates is not only lost but potentially damaging. The lesson we need to learn is this: just because we can do something doesn’t mean that we should.

Ethical knowledge

As citizens, our priority must be to ensure that AI works in the interests of the many rather than the few.

Currently, we’re naively assuming that the AI coders and developers have the ethical knowledge, understanding and skills to navigate the challenges that their technological innovations create.

In these circumstances, sound ethical judgment is just as important a skill as the ability to code effectively. It is a skill that must be learned, practised and deployed. Yet, very little ethical progress or development has been made in the curriculum to inform the design and development of AI.

This is a “human challenge” not a “technology challenge”. The role of people is only becoming more important in the era of AI. We must invest in teaching ethics as applied to technological innovation.

Building a platform of trust

In Australia, trust is at an all-time low because the ethical infrastructure of our society is largely damaged – from politics to sport to religious institutions to business. Trust is created when values and principles are explicitly integrated into the foundations of what is being designed and built. Whatever AI solution is developed and deployed, ethics must be at the core – consciously built into the solutions themselves, not added as an afterthought.

Creating an ethical technologically advanced culture requires proactive and intentional collaboration from those who participate in society: academia, businesses and governments. Although we’re seeing some positive early signs, such as the forums that IBM is creating to bring stakeholders from these communities together to debate and collaborate on this issue, we need much more of the same – all driven by an increased sense of urgency.

To ensure responsible stewardship is at the centre of the AI era, we need to deploy a framework that encourages creativity and supports innovation, while holding people accountable.

This story first appeared on Australian Financial Review – republished with permission.


Ethical by Design: Principles for Good Technology

TYPE: THOUGHT LEADERSHIP

CATEGORY: TECHNOLOGY & DESIGN

PUBLISHED: SEP 2018

Learn the principles you need to consider when designing ethical technology, how to balance the intentions of design and use, and the rules of thumb to prevent ethical missteps. Understand how to break down some of the biggest challenges and explore a new way of thinking for creating purpose-based design.

You’re responsible for what you design – make sure you build something good. Whether you are editing a genome, building a driverless car or writing a social media algorithm, this report offers the knowledge and tools to do so ethically. From Facebook to a brand new start-up, the responsibility begins with you. In this guide we offer key principles to help guide ethical technology creation and management.

"Technology seems to be at the heart of more and more ethical crises. So many of the ethical scandals we’re seeing in the technology sector are happening because people aren’t well-equipped to take a holistic view of the ethical landscape."

DR MATTHEW BEARD

WHAT’S INSIDE?

  • What is ethics + ethical theories
  • Techno-ethical myths
  • The value of ethical frameworks
  • Rules of thumb to embed ethics in design
  • Case studies + ethical breakdowns
  • Core ethical design principles
  • Design challenges + solutions
  • The future of ethical technology

AUTHORS

Dr Matt Beard is a moral philosopher with an academic background in applied and military ethics. He has taught philosophy and ethics at university for several years, during which time he has been published widely in academic journals and book chapters and has spoken at national and international conferences. Matt has advised the Australian Army on military ethics, including technology design. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement, recognising his “prolific contribution to public philosophy”. He regularly appears on television, radio, online and in print.

Dr Simon Longstaff has been Executive Director of The Ethics Centre for over 25 years, working across business, government and society. He has a PhD in philosophy from Cambridge University, is a Fellow of CPA Australia and of the Royal Society of NSW, and in June 2016 was appointed an Honorary Professor at ANU – based at the National Centre for Indigenous Studies. Simon co-founded the Festival of Dangerous Ideas and played a pivotal role in establishing both the industry-led Banking and Finance Oath and ethics classes in primary schools. He was made an Officer of the Order of Australia (AO) in 2013.


Is it right to edit the genes of an unborn child?

It’s been called dangerous, unethical and a game of human Russian roulette.

International outrage greeted Chinese scientist He Jiankui’s announcement of the birth of twin girls whose DNA he claims to have altered using the gene editing technique CRISPR. He says the edit will protect the twins, named Lulu and Nana, from HIV for life.

“I understand my work will be controversial”, Jiankui said in a video he posted online.

“But I believe families need this technology and I’m ready to take the criticism for them.”

The Center for Genetics and Society has called this “a grave abuse of human rights”, China’s Vice Minister of Science and Technology has ordered an investigation into Jiankui’s claims, while a UNESCO panel of scientists, philosophers, lawyers and government ministers has called for a temporary ban on genetic editing of the human genome.

Condemnation of his actions has only swelled after Jiankui said he is “proud” of his achievement and that “another potential pregnancy” of a gene-edited embryo is in its early stages.

While not completely verified, the news has been a cold shock to the fields of science and medical ethics internationally.

“People have naive ideas as to the line between science and application”, said Professor Rob Sparrow from the Department of Philosophy at Monash University. “If you believe research and technology can be separated then it’s easy to say, let the scientist research it. But I think both those claims are wrong. The scientific research is the application here.”


The ethical approval process of Jiankui’s work is unusual or at least unclear, with reports he received a green light after the procedure. Even so, Sparrow rejects the idea that countries with stricter ethical oversight have some responsibility to relax their regulations in order to stop controversial research going rogue.

“Spousal homicide is bound to happen. That doesn’t mean we don’t make it legal or regulate it. Nowadays people struggle to believe that anything is inherently wrong.

“Our moral framework has been reduced to considerations of risks and benefits. The idea that things might be inherently wrong is prior to the risk/benefit conversation.”

But Jiankui has said, “If we can help this family protect their children, it’s inhumane for us not to”.

Professor Leslie Cannold, ethicist, writer and medical board director, agrees – to a point.

“The aim of this technology has always been to assist parents who wish to avoid the passing on of a heritable disease or condition.

“However, we need to ensure that this can be done effectively, offered to everyone equally without regard to social status or financial ability to pay, and that it will not have unintended side effects. To ensure the latter we need to proceed slowly, carefully and with strong measurements and controls.

“We need to act as ‘team human’ because the changes that will be made will be heritable and thereby impact on the entire human race.”

If Jiankui’s claims are true, the edited genes of the twin girls will pass to any children they have in the future.

“No one knows what the long term impacts on these children will be”, said Sparrow.

“This is radically experimental. [But] I do think it’s striking how for many years people drew a bright line at germline gene editing but they drew this line when gene editing wasn’t really possible. Now it’s possible and it’s very clear that line is being blurred.”


With great power comes great responsibility – but will tech companies accept it?

Technology needs to be designed to a set of basic ethical principles. Designers need to show how. Matt Beard, co-author of a new report from The Ethics Centre, demands more from the technology we use every day.

In The Amazing Spider-Man, a young Peter Parker is coming to grips with his newly-acquired powers. Spider-Man in nature but not in name, he suddenly finds himself with increased reflexes, strength and extremely sticky hands.

Unfortunately, the subway isn’t the ideal controlled environment in which to awaken as a sudden superhuman. His hands stick to a woman’s shoulders and he’s unable to move them. His powers are doing exactly what they were designed to do, but with creepy, unsettling effects.

Spider-Man’s powers aren’t amazing yet; they’re poorly understood, disturbing and dangerous. As other commuters move to the woman’s defence, shoving Parker away from the woman, his sticky hands inadvertently rip the woman’s top clean off. Now his powers are invading people’s privacy.

A fully-fledged assault follows, but Parker’s Spider-Man reflexes kick in. He beats his assailants off, sending them careening into subway seats and knocking them unconscious, apologising the whole time.

Parker’s unintended creepiness, apologetic harmfulness and head-spinning bewilderment at his own power is a useful metaphor to think about another set of influential nerds: the technological geniuses building the ‘Fourth Industrial Revolution’.

Sudden power, the inability to exercise it responsibly, collateral damage and a need for restraint – it all sounds pretty familiar when we think about ‘killer robots’, mass data collection tools and genetic engineering.

This is troubling, because we need tech designers to, like Spider-Man, recognise (borrowing from Voltaire) that “with great power comes great responsibility”. And unfortunately, it’s going to take more than a comic book training sequence for us to figure this out.

For one thing, Peter Parker didn’t seek and profit from his powers before realising he needed to use them responsibly. For another, it’s going to take something more specific and meaningful than a general acceptance of responsibility for us to see the kind of ethical technology we desperately need.

Many companies do accept responsibility – they recognise the power and influence they have.

Just look at Mark Zuckerberg’s testimony before the US Congress:

It’s not enough to connect people, we need to make sure those connections are positive. It’s not enough to just give people a voice, we need to make sure people aren’t using it to harm other people or spread misinformation. It’s not enough to give people control over their information, we need to make sure the developers they share it with protect their information too. Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good.

Compare that to an earlier internal memo – which was intended to be a provocation more than a manifesto – in which a Facebook executive is far more flippant about their responsibility.

We connect people. That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide. So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people.

We can expect more from tech companies. But to do that, we need to understand exactly what technology is. This starts by deconstructing one of the most pervasive ideas going around: “technological instrumentalism”, the idea that tech is just a “value-neutral” tool.

Instrumentalists think there’s nothing inherently good or bad about tech because it’s about the people who use it. It’s the ‘guns don’t kill people, people kill people’ school of thought – but it’s starting to run out of steam.

What instrumentalists miss are the values, instructions and suggestions technologies offer to us. People kill people with guns, and when someone picks up a gun, they have the ability to engage with other people in a different way – as a shooter. A gun carries a set of ethical claims within it – claims like ‘it’s sometimes good to shoot people’. That may indeed be true, but that’s not the point – the point is, there are values and ethical beliefs built into technology. One major danger is that we’re often not aware of them.

Encouraging tech companies to be transparent about the values, ethical motivations, justifications and choices that have informed their design is critical to ensure design honestly describes what a product is doing.

Likewise, knowing who built the technology, who owns it and what part of the world they come from helps us understand whether there might be political motivations, security risks or other challenges we need to be aware of.

Alongside this general need for transparency, we need to get more specific. We need to know how the technology is going to do what it says it’ll do and provide the evidence to back it up. In Ethical by Design, we argue that technology designers need to commit to a set of basic ethical principles – lines they will never cross – in their work.

For instance, technology should do more good than harm. This seems straightforward, but it only works if we know when a product is harming someone. This suggests tech companies should track and measure both the good and bad effects their technology has. You can’t know if you’re doing your job unless you’re measuring it.

Once we do that, we need to remember that we – as a society and as a species – remain in control of the way technology develops. We cultivate a false sense of powerlessness when we tell each other how the future will be, when artificial intelligence will surpass human intelligence and how long it will be until we’ve all lost our jobs.

Technology is something we design – we shape it as much as it shapes us. Forgetting that is the ultimate irresponsibility.

As the Canadian philosopher and futurist Marshall McLuhan wrote, “There is absolutely no inevitability, so long as there is a willingness to contemplate what is happening.”

Ethical by Design: Principles for Good Technology is written by Dr Matt Beard and Dr Simon Longstaff AO.


The kiss of death: energy policies keep killing our PMs

If you were born in 1989 or after, you haven’t yet voted in an election that’s seen a Prime Minister serve a full term.

Some point to social media, the online stomping grounds of digital natives, as the cause of this. As Emma Alberici pointed out, Twitter launched in 2006, the year before Kevin ’07 became PM.

Some blame widening political polarisation, in which there is evidence social media plays a crucial role.

If we take a look, though, the thing that keeps killing our PMs’ popularity in the polls and the party room is climate and energy policy. It sounds completely anodyne until you realise what a deadly assassin it is.

Rudd

Kevin Rudd declared, “Climate change is the great moral challenge of our generation”. This strategic focus on global warming contributed to him defeating John Howard to become Prime Minister in December 2007. As soon as Rudd took office, he cemented his green brand by ratifying the Kyoto Protocol, something his predecessor refused to do.

There were two other major efforts by the Rudd government to address emissions and climate change. The first was the Carbon Pollution Reduction Scheme (CPRS), led by then climate change minister Penny Wong. It was a ‘cap and trade’ system that had bipartisan support from the Turnbull-led opposition… until Turnbull lost the opposition leadership to Abbott over it. More on this soon.

Then there was the December 2009 United Nations climate summit in Copenhagen, officially called COP15 (because it was the fifteenth session of the Conference of the Parties). Rudd and Wong attended the summit and worked tirelessly with other nations to create a framework for reducing global emissions. But COP15 was unsuccessful in that no legally binding emissions limits were set.

Only a few months later, the CPRS was ditched by the Labor government, which saw it would never be legislated due to a lack of support. Rudd was seen as ineffectual on climate change policy, the core issue he had championed. His popularity plummeted.

Gillard

Enter Julia Gillard. She took the Labor leadership in June 2010 in what will be remembered as the “knifing of Kevin Rudd”.

Ahead of the election she said she would “tackle the challenge of climate change” with investments in renewables. She promised, “There will be no carbon tax under the government I lead”.

Had she known the election would result in the first federal hung parliament since 1940, when Menzies was PM, she might not have uttered those words. Gillard wheeled and dealt to form a minority government with the support of a motley crew – Adam Bandt, a Greens MP from Melbourne, and independents Andrew Wilkie from Hobart, and Rob Oakeshott and Tony Windsor from regional NSW. The compromises and negotiations required to please this diverse bunch would make passing legislation a challenging process.

Adding a further degree of difficulty, the Greens held the balance of power in the Senate. Gillard suggested they used this to force her hand to introduce the carbon tax. Then-Greens leader Bob Brown denied that claim, saying it was a “mutual agreement”. A carbon price was legislated in November 2011 to much controversy.

Abbott went hard on this broken election promise, repeating his phrase “axe the tax” at every opportunity. Gillard became the unpopular one.

Rudd 2.0

Crouching tiger Rudd leapt up from his grassy foreign ministry portfolio and took the prime ministership back in June 2013. This second stint lasted three months until Labor lost the election.

Abbott

Prime Minister Abbott launched a cornerstone energy policy in December 2013 that might be described as the opposite of Labor’s carbon price. Instead of making polluters pay, it offered financial incentives to those that reduced emissions. It was called the Emissions Reduction Fund and was criticised for being “unclear”. The ERF was connected to the Coalition’s Direct Action Plan, which they had promoted in opposition.

Abbott stayed true to his “axe the tax” slogan and repealed the carbon price in 2014.

As time moved on, the Coalition government did not do well in Newspoll – losing 30 in a row at one stage. Turnbull cited this, and the need for “strong business confidence”, when he announced he would challenge the PM for his job.

Turnbull

After a summer of heatwaves and blackouts, Turnbull and environment and energy minister Josh Frydenberg created the National Energy Guarantee. It aimed to ensure Australia had enough reliable energy in the market, supported both renewables and traditional power sources, and could meet the emissions reduction targets set by the Paris Agreement. Business, wanting certainty, backed the NEG. It was signed off on 14 August.

But rumblings within the Coalition party room over the policy exploded into the epic leadership spill we just saw unfold. It was agitated by Abbott, who said:

“This is by far the most important issue that the government confronts because this will shape our economy, this will determine our prosperity and the kind of industries we have for decades to come. That’s why this is so important and that’s why any attempt to try to snow this through … would be dead wrong.”

Turnbull tried to negotiate with the conservative MPs of his party on the NEG. When that failed and he saw his leadership was under serious threat, he killed it off himself. Little did he know he would go down with it.

Peter Dutton continued with a leadership challenge. Turnbull stepped back, saying he would not contest the ballot and would resign regardless of the outcome. His supporters Scott Morrison and Julie Bishop stepped up.

Morrison

After the spill over the NEG, Scott Morrison has just won the prime ministership with 45 votes to Dutton’s 40.

Killers

We now have a series of energy policies that were killed off along with prime minister after prime minister. We are yet to see a policy with bipartisan support that delivers reliable energy at lower emissions and affordable prices. And if you’re 29 or younger, you’re yet to vote in an election that’s seen a Prime Minister serve a full term.


From NEG to Finkel and the Paris Accord – what’s what in the energy debate

We’ve got NEGs, NEMs, and Finkels a-plenty. Here is a cheat sheet for this whole energy debate that’s speeding along like a coal train and undermining Prime Minister Malcolm Turnbull’s authority. Let’s take it from the start…

UN Framework Convention on Climate Change – 1992

This Convention marked the first time combating climate change was seen as an international priority. It had near-universal membership, with countries including Australia all committed to curbing greenhouse gas emissions. The Kyoto Protocol was its operative arm (more on this below).

The Kyoto Protocol – December 1997

The Kyoto Protocol is an internationally binding agreement that sets emission reduction targets. It gets its name from the Japanese city in which it was adopted and is linked to the aforementioned UN Framework Convention on Climate Change. The Protocol’s stance is that developed nations should shoulder the burden of reducing emissions because they have been creating the bulk of them over more than 150 years of industrial activity. The US refused to ratify the Protocol because major CO2 emitters China and India were exempt due to their “developing” status. When Canada withdrew in 2011, saving the country $14 billion in penalties, it became clear the Kyoto Protocol needed some rethinking.

Australia’s National Electricity Market (NEM) – 1998

Forget the fancy name. This is the grid. And Australia’s National Electricity Market is one of the world’s longest power grids. It connects suppliers and consumers down the entire east and south-east coasts of the continent. It spans five states and the Australian Capital Territory, hopping over Bass Strait to connect Tasmania. Western Australia and the Northern Territory aren’t connected to the NEM because of distance.

[Map of the National Electricity Market. Source: Australian Energy Market Operator]

The NEM is made up of more than 300 organisations, including businesses and state government departments, that work to generate, transport and deliver electricity to Australian users. This is no mean feat. Reliable batteries are only now hitting the market and are still not widely rolled out, so electricity has been difficult to store – we’ve needed to continuously generate it to meet our 24/7 demands. The NEM, formally established under the Keating Labor government, is a complex, always-operating grid.

The Paris Agreement aka the Paris Accord – November 2016

The Paris Agreement attempted to address the oversight of the Kyoto Protocol (that large emitters like China and India were exempt) with two fundamental differences – each country sets its own limits, and developing countries are supported. The overarching aim of the agreement is to keep global temperatures “well below” an increase of two degrees, and to attempt to limit the increase to one and a half degrees above pre-industrial levels (accounting for global population growth, which drives demand for energy). Except Australia isn’t tracking well. We’ve already gone past the halfway mark and there’s more than a decade before the 2030 deadline. When US President Donald Trump denounced the Paris Agreement last year, there was concern this would influence other countries to pull out – including Australia. Former Prime Minister Tony Abbott suggested we signed up following the US’s lead. But Foreign Minister Julie Bishop rebutted this when she said: “When we signed up to the Paris Agreement it was in the full knowledge it would be an agreement Australia would be held to account for and it wasn’t an aspiration, it was a commitment … Australia plays by the rules — if we sign an agreement, we stick to the agreement.”

The Finkel Review – June 2017

Following the South Australian blackout of 2016 and rapidly increasing electricity costs, people began asking whether our country’s entire energy system needed an overhaul. How do we get reliable, cheap energy to a growing population and reduce emissions? Dr Alan Finkel, Australia’s Chief Scientist, was commissioned by the federal government to review our energy market’s sustainability, environmental impact and affordability. Here’s what the Review found:

Sustainability:

  • A transition to low emission energy needs to be supported by a system-wide grid across the nation.
  • Regular regional assessments will provide bespoke approaches to delivering energy to communities that have different needs to cities.
  • Energy companies that want to close their power plants should give three years’ notice so other energy options can be built to service consumers.

Affordability:

  • A new Energy Security Board (ESB) would deliver the Review’s recommendations, overseeing the monopolised energy market.

Environmental impact:

  • Currently, our electricity is mostly generated by fossil fuels (87 percent), producing 35 percent of our total greenhouse gases.
  • We can’t transition to renewables without a plan.
  • A Clean Energy Target (CET), would force electricity companies to provide a set amount of power from “low emissions” generators, like wind and solar. This set amount would be determined by the government.
    • The government rejected the CET – one of 50 recommendations in the Finkel Review – on the basis that it would not do enough to reduce energy prices.

ACCC Report – July 2018

The Australian Competition & Consumer Commission’s Retail Electricity Pricing Inquiry Report drove home that the prices consumers and businesses were paying for electricity were unreasonably high. The market was too concentrated, its charges too confusing, and bad policy decisions by government had added significant costs to our electricity bills. The ACCC has backed the National Energy Guarantee, saying it should drive down prices but needs safeguards to ensure large incumbents do not gain more market control.

National Energy Guarantee (NEG) – as at 20 August 2018

The NEG was the Turnbull government’s effort to make a national energy policy to deliver reliable, affordable energy and transition from fossil fuels to renewables. It aimed to ‘guarantee’ two obligations from energy retailers:

  1. To provide sufficient quantities of reliable energy to the market (so no more blackouts).
  2. To meet the emissions reduction targets set by the Paris Agreement (so less coal powered electricity).

It was meant to lower energy prices and increase investment in clean energy generation, including wind, solar, batteries and other renewables. The NEG is a big deal, not least because it has been threatening Malcolm Turnbull’s prime ministership. It is the latest in a long line of energy almost-policies. It attempted to do what the carbon tax, emissions intensity scheme and clean energy target haven’t – combine climate change targets, lower energy prices and improved energy reliability in a single policy with bipartisan support. Ambitious. And it seems to have been ditched by Turnbull because he has been pressured by his own party. Supporters of the NEG feel it is an overdue, radical change to address the pressing issues of rising energy bills, unreliable power and climate change. But its detractors on the left say the NEG is not ambitious enough, while those on the right say it is too cavalier, because the complexity of the National Energy Market cannot be swiftly replaced.


Ethics Explainer: The Turing Test

Much was made of a recent video of Duplex – Google’s talking AI – calling up a hair salon to make a reservation. The AI’s way of speaking was uncannily human, even pausing at moments to say “um”.

Some suggested Duplex had managed to pass the Turing test, a standard for machine intelligence that was developed by Alan Turing in the middle of the 20th century. But what exactly is the story behind this test and why are people still using it to judge the success of cutting edge algorithms?

Mechanical brains and emotional humans

In the late 1940s, when the first digital computers had just been built, a debate took place about whether these new “universal machines” could think. While pioneering computer scientists like Alan Turing and John von Neumann believed that their machines were “mechanical brains”, others felt that there was an essential difference between human thought and computer calculation.

Sir Geoffrey Jefferson, a prominent brain surgeon of the time, argued that while a computer could simulate intelligence, it would always be lacking:

“No mechanism could feel … pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or miserable when it cannot get what it wants.”

In a radio interview a few weeks later, Turing responded to Jefferson’s claim by arguing that as computers become more intelligent, people like him would take a “grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine.”

The following year, Turing wrote a paper called ‘Computing Machinery and Intelligence’ in which he devised a simple method by which to test whether machines can think.

The test proposed a situation in which a human judge talks to both a computer and a human through a screen. The judge cannot see the computer or the human, but can ask them questions via the computer. Based on the answers alone, the judge has to determine which is which. If the computer is able to fool 30 per cent of judges into thinking it is human, the computer is said to have passed the test.
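
As a rough, purely illustrative sketch in Python – the responder and judge functions here are hypothetical stand-ins, not anything Turing specified – the set-up can be simulated like this, with the 30 per cent figure mentioned above as the bar:

    import random

    def imitation_game(judge, machine, human, questions, rounds=100):
        # In each round the judge puts the same questions to two hidden
        # respondents (one machine, one human) and, from the text of the
        # answers alone, names the one it thinks is the machine.
        fooled = 0
        for _ in range(rounds):
            respondents = {"A": machine, "B": human}
            if random.random() < 0.5:                    # hide who is who
                respondents = {"A": human, "B": machine}
            transcripts = {label: [respond(q) for q in questions]
                           for label, respond in respondents.items()}
            guess = judge(transcripts)                   # judge names "A" or "B"
            if respondents[guess] is not machine:
                fooled += 1
        return fooled / rounds

    # Hypothetical stand-ins for the players in this sketch.
    machine = lambda q: "I would rather not say."
    human = lambda q: "My honest answer to: " + q
    judge = lambda transcripts: random.choice(["A", "B"])  # a judge who cannot tell

    rate = imitation_game(judge, machine, human, ["Do you play chess?"])
    print("Judges were fooled " + str(round(rate * 100)) + "% of the time; the bar is 30%.")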

Turing claimed that he intended for the test to be a conversation stopper, a way of preventing endless metaphysical speculation about the essence of our humanity by positing that intelligence is just a type of behaviour, not an internal quality. In other words, intelligence is as intelligence does, regardless of whether it is done by machine or human.

Does Google Duplex pass?

Well, yes and no. In Google’s video, it is obvious that the person taking the call believes they are talking to a human. So, it does satisfy this criterion. But an important thing about Turing’s original test was that, to pass, the computer had to be able to speak convincingly about all topics, not just one.

In fact, in Turing’s paper, he plays out an imaginary conversation between an advanced future computer and a human judge, with the judge asking questions and the computer providing answers:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.

The point Turing is making here is that a truly smart machine has to have general intelligence in a number of different areas of human interest. As it stands, Google’s Duplex is good within the limited domain of making a reservation but would probably not be able to do much beyond this unless reprogrammed.

The boundaries around the human

While Turing intended for his test to be a conversation stopper for questions of machine intelligence, it has had the opposite effect, fuelling half a century of debate about what the test means, whether it is a good measure of intelligence, or if it should still be used as a standard.

Most experts have come to agree, over time, that the Turing test is not a good way to prove machine intelligence, as the constraints of the test can easily be gamed – as was the case with the bot Eugene Goostman, which allegedly passed the test a few years ago.

But the Turing test is nevertheless still considered a powerful philosophical tool to re-evaluate the boundaries around what we consider normal and human. In his time, Turing used his test as a way to demonstrate how people like Jefferson would never be willing to accept a machine as intelligent – not because it couldn’t act intelligently, but because it wasn’t “like us”.

Turing’s desire to test boundaries around what was considered “normal” in his time perhaps sprang from his own persecution as a gay man. Despite being a war hero, he was persecuted for his homosexuality and convicted in 1952 for sleeping with another man. He was punished with chemical castration and eventually took his own life.

During these final years, the relationship between machine intelligence and his own sexuality became interconnected in Turing’s mind. He was concerned the same bigotry and fear that hounded his life would ruin future relationships between humans and intelligent computers. A year before he took his life he wrote the following letter to a friend:

“I’m afraid that the following syllogism may be used by some in the future.

Turing believes machines think

Turing lies with men

Therefore machines do not think

– Yours in distress,

Alan”