Ready or not – the future is coming

We are living in an exceptional era of human history. In the blink of an eye on the historical time scale, our species now has the skills to explore the universe, map and modify human genes, and develop forms of intelligence that may far surpass their creators.

In fact, given the speed, unpredictability and sheer scale of previously unthinkable change, it is better to talk of the future in the plural rather than the singular: the ‘futures’ are coming.

With the convergence of genetic engineering, AI and neurotechnology, entirely new challenges arise that could test our assumptions about human identity and what connects us as a species. What does the human experience mean in an era of augmentation, implantation, enhancement and editing of the very building blocks of our being? What happens when we hack not only the human body, but the human mind?

The possible futures that are coming will arrive with such speed that those not ready for them will find themselves struggling to know how to navigate and respond to a world unlike the one we currently know. Those that invest in exploring what the future holds will be well placed to proactively shape their current and future state so they can traverse the complexity, weather the challenges, and maximise the opportunities the future presents.

The Ethics Centre’s Future State Framework is a tailored, future-focused platform for change management, cultural alignment and staff engagement. It draws on futuring methodologies, including trend mapping and future scenario casting, alongside a number of design thinking and innovation methodologies. What sets Future State apart is that it incorporates ethics as the bedrock for strategic and organisational assessment and design. Ethics underpins every aspect of an organisation: its purpose, values and principles set the foundation for its culture, give guidance to leadership, and set the compass needed to execute strategy.

The future world we inhabit will be built on the choices we make in the present. Yet due to the sheer complexity of the multitude of decisions we make every day, it’s a future that is both unpredictable and emergent. The laws, processes, methods and current ways of thinking may not serve us well in the future, nor contribute best to the future we want to create. By envisioning the challenges of the future through an ethics-centred design process, the Future State Framework ensures that organisations and their culture are future-proof.

The Future State Framework has supported numerous organisations in reimagining their purpose and their unique economic and social role in a world where profit is being rapidly and radically redefined. Shareholder demand is moving beyond financial return, and social expectations of the role corporations should play are shifting dramatically.

The methodology helps organisations chart a course through this transformation by mapping and targeting their desired future state and developing pathways for realising it. It has been designed to support and guide both organisations facing an imminent burning platform and those wanting to be future-forward and leading – giving them the insight to act with purpose, fit for the many possibilities the future might hold.

If you are interested in discussing any of the topics raised in this article in more depth with The Ethics Centre’s consulting team, please make an enquiry via our website.



The youth are rising. Will we listen?

When we settled on Town Hall as the venue for the Festival of Dangerous Ideas (FODI) 2020, my first instinct was to consider a choir. The venue lends itself to this so perfectly and the image of a choir – a group of unified voices – struck me as an excellent symbol for the activism that is defining our times.

I attended Spinifex Gum in Melbourne last year, and instantly knew that this was the choral work for the festival this year. The music and voices were incredibly beautiful but what struck me most was the authenticity of the young women in Marliya Choir. The song cycle created by Felix Riebel and Lyn Gardner for Marliya Choir embarks on a truly emotional journey through anger, sadness, indignation and hope.

A microcosm of a much larger phenomenon, Marliya’s work shows us that within these groups of unified voices the power of youth is palpable.

Every city, suburb and school has its own Greta Thunbergs: young people acutely aware of the dangerous reality we are now living in, who are facing the future knowing that, without immediate and significant change, their future selves will face incredible hardship.

In 2012, FODI presented a session with Shiv Malik and Ed Howker on the coming inter-generational war, and it seems this war has well and truly begun. While a few years ago the provocations were mostly around economic power, the stakes have quickly risen. Now power, the environment, quality of life, and the future of the planet are all firmly on the table. This has escalated faster than our speakers in 2012 were predicting.

For a decade now the FODI stage has been a place for discussing uncomfortable truths. And it doesn’t get more uncomfortable than thinking about the future world and systems the young will inherit.

What value do we place on a world we won’t be participating in?

Our speakers, alongside Marliya Choir, will be tackling big issues from their perspective: mental health, gender, climate change, Indigenous incarceration, and governance.

First Nations youth activist Dujuan Hoosan, School Strike for Climate’s Daisy Jeffery, TEDx speaker Audrey Mason-Hyde, mental health advocate Seethal Bency and journalist Dylan Storer add their voices to this choir of young Australians asking us to pay attention.

Aged from 12 to 21, these young people show real courage in stepping up to speak in such a large forum, and they deserve to be commended and supported.

With a further FODI twist, you get to choose how much you wish to pay for this session. You choose how important you think it is to listen to our youth. What value do you put on the opinions of the young compared to our established pundits?

Unforgivable is a new commission, combining music from the incredible Spinifex Gum show I saw with new songs from the choir, and bringing together some of the boldest young Australian leaders to share their hopes and fears about the future.

It is an invitation to come and to listen. To consider if you share the same vision of the future these young leaders see. Unforgivable is an opportunity to see just what’s at stake in the war that is raging between young and old.

These are not tomorrow’s leaders; these young people are trying to lead now.

Tickets to Unforgivable, at the Festival of Dangerous Ideas on Saturday 4 April, are on sale now.



This is what comes after climate grief

I can’t really lie about this. Like so many other people in the climate community hailing from Australia, I expected the impacts of climate change to come later. I didn’t define ‘later’ much beyond ‘not now, not next year, but some time after that’.

Instead, I watched in horror as Australia burst into flames. As the worst of the fire season passes, a simple question has come to the fore. What made these bushfires so bad?

The Bureau of Meteorology confirms that weather conditions have been tilting in favour of worsening fire for many decades. The ‘Forest Fire Danger Index’, a metric for this, hit record levels in many parts of Australia this summer.

The Earth Systems and Climate Change Hub is unequivocal: “Human-caused climate change has resulted in more dangerous weather conditions for bushfires in recent decades for many regions of Australia…These trends are very likely to increase into the future”.


Bushfire has been around for centuries, but the burning of fossil fuels by humans has catalysed and worsened it.

Having moved away from Australia, I didn’t experience the physical impacts of the crisis. Not the air thick with smoke, or the dark brown sky or the bone-dry ground.

But I am permanently plugged into the internet, and the feelings expressed there fed into my feed every day. There was shock at the scale and at the science fictionness of it all. Fire plumes that create their own lightning? It can’t be real.

The world grieved at the loss of human life, the loss of beautiful animals and ecosystems, and the permanent damage to homes and businesses.

Rapidly, that grief pivoted into action. The fundraisers were numerous and effective. Comedian Celeste Barber, who set out to raise an impressive $30,000 AUD, ended up at around $51 million. Erin Riley’s ‘Find a Bed’ program worked tirelessly to help displaced Australians find somewhere to sleep. Australians put their heads down and got to work.

It’s inspiring to be a part of. But that work doesn’t stop with funding. Early estimates on the emissions produced by the fires are deeply unsettling. “Our preliminary estimates show that by now, CO2 emissions from this fire season are as high or higher than the CO2 emissions from all anthropogenic emissions in Australia. So effectively, they are at least doubling this year’s carbon footprint of Australia”, research scientist Pep Canadell told Future Earth.

There is some uncertainty about whether the forests destroyed by the blaze will grow back and suck that released carbon back into the Earth. But it is likely that as fire seasons get worse, the balance of the natural flow of carbon between the ground and the sky will begin to tip in a bad direction.

Like smoke plumes that create their own ‘dry lightning’, which ignites new fires, there is a deep cyclical horror to bushfire emissions.

It taps into a horror that is broader and deeper than the immediate threat; something lingers once the last flames flicker out. We begin to feel that the planet’s physical systems are unresponsive. We start to worry that if we stopped emissions, these ‘positive feedbacks’ (a classic scientific misnomer) mean we’re doomed regardless of our actions.

“An epidemic of giving up scares me far more than the predictions of climate scientists”, I told an international news journalist, as we sat in a coffee shop in Oslo. It was pouring rain, and it was warm enough for a single layer and a raincoat – incredibly strange for the city in January.

She seemed surprised. “That scares you?” she asked, bemused. Yes. If we give up, emissions become higher than they would be otherwise, and so we are more exposed to the uncertainties and risks of a planet that starts to warm itself. That is paralysing, to me.

It is scarier than the climate change denial of the 2010s, because it has far greater mass appeal. It’s just as pseudoscientific as denialism. “Climate change isn’t a cliff we fall off, but a slope we slide down”, wrote climate scientist Kate Marvel, in late 2018.

In response to Jonathan Franzen’s awful 2019 essay in which he urges us to give up, Marvel explained why ‘positive feedbacks’ are more reason to work hard to reduce emissions, not less. “It is precisely the fact that we understand the potential driver of doom that changes it from a foregone conclusion to a choice”.

A choice. Just as the immediate horrors of the fires translated into copious and unstoppable fundraising, the longer-term implications of this global shift in our habitat could precipitate aggressive, passionate action to place even more pressure on the small collection of companies and governments that are contributing to our increasing danger.

There are so many uncertainties inherent in the way the planet will respond to a warming atmosphere. I know, with absolute certainty, that if we succumb to paralysis and give up on change, then our exposure to these risks will increase greatly.

We can translate the horror of those dark red months into a massive effort to change the future. Our worst fears will only be realised if we persist with the intensely awful idea that things are so bad that we ought to give up.



A burning question about the bushfires

At the height of the calamity that has been the current bushfire season, people demanded to know why large parts of our country were being ravaged by fires of a scale and intensity seldom seen.

In answer, blame has been sheeted home to the mounting effects of climate change, to failures in land management, to our burgeoning population, to the location of our houses, to the pernicious deeds of arsonists…

However, one thing has not made the list: ethical failure.

I suspect that few people have recognised the fires as examples of ethical failure. Yet, that is what they are. The flames were fuelled not just by high temperatures, too little rain and an overabundance of tinder-dry scrub. They were also the product of unthinking custom and practice and the mutation of core values and principles into their ‘shadow forms’.

Bushfires are natural phenomena. However, their scale and frequency are shaped by human decisions. We know this to be true through the evidence of how Indigenous Australians make different decisions and, in doing so, produce different effects.

Our First Nations people know how to control fire and, through its careful application, help the country to thrive. They have demonstrated (if only we had paid attention) that there was nothing inevitable about the destruction unleashed over the course of this summer. It was always open to us to make different choices which, in turn, would have led to different outcomes.

This is where ethics comes in. It is the branch of philosophy that deals with the character and quality of our decisions; decisions that shape the world. Indeed, constrained only by the laws of nature, the most powerful force on this planet is human choice. It is the task of ethics to help people make better choices by challenging norms that tend to be accepted without question.

This process asks people to go back to basics – to assess the facts of the matter, to challenge assumptions, to make conscious decisions that are informed by core values and principles. Above all, ethics requires people to accept responsibility for their decisions and all that follows.

This catastrophe was not inevitable. It is a product of our choices.

For example, governments of all persuasions are happy to tell us that they have no greater obligation than to keep us safe. It is inconceivable that our politicians would ignore intelligence suggesting that a terrorist attack might be imminent. They would not wait until there was unanimity in the room. Instead, our governments would accept the consensus view of those presenting the intelligence and take preventative action.

So, why have our political leaders ignored the warnings of fire chiefs, defence analysts and climate scientists? Why have they exposed the community to avoidable risks of bushfires? Why have they played Russian Roulette with our future?

It can only be that some part of society’s ‘ethical infrastructure’ is broken.

In the case of the fires, we could have made better decisions. Better decisions – not least in relation to the challenges of global emissions, climate change, and how and where we build our homes – will make a better world in which foreseeable suffering and destruction are avoided. That is one of the gifts of ethics.

Understood in this light, there is nothing intangible about ethics. It permeates our daily lives. It is expressed in phenomena that we can sense and feel.

So, if anyone is looking for a physical manifestation of ethical failure – breathe the smoke-filled air, see the blood-red sky, feel the slap from a wall of heat, hear the roar of the firestorm.

The fires will subside. The rains will come. The seasons will turn. However, we will still be left to decide for the future. Will our leaders have the moral courage to put the public interest before their political fortunes? Will we make the ethical choice and decide for a better world?

It is our task, at The Ethics Centre, to help society do just that.



To fix the problem of deepfakes we must treat the cause, not the symptoms

This article was written for, and first published by The Guardian. Republished with permission.

Once technology is released, it’s like herding cats. Why do we continue to let the tech sector manage its own mess?

We haven’t yet seen a clear frontrunner emerge as the Democratic candidate for the 2020 US election. But I’ve been interested in another race – the race to see which buzzword is going to be a pivotal issue in political reporting, hot takes and the general political introspection that elections bring. In 2016 it was “fake news”. “Deepfake” is shaping up as one of the leading candidates for 2020.

This week the US House of Representatives intelligence committee asked Facebook, Twitter and Google what they were planning to do to combat deepfakes in the 2020 election. And it’s a fair question. With a bit of work, deepfakes could be convincing and misleading enough to make fake news look like child’s play.

Deepfake, a portmanteau of “deep learning” and “fake”, refers to AI software that can superimpose a digital composite face on to an existing video (and sometimes audio) of a person.

The term first rose to prominence when Motherboard reported on a Reddit user who was using AI to superimpose the faces of film stars on to existing porn videos, creating (with varying degrees of realness) porn starring Emma Watson, Gal Gadot, Scarlett Johansson and an array of other female celebrities.

However, there are also a range of political possibilities. Filmmaker Jordan Peele highlighted some of the harmful potential in an eerie video produced with Buzzfeed, in which he literally puts his words in Barack Obama’s mouth. Satisfying or not, hearing Obama call US president Trump a “total and complete dipshit” is concerning, given he never said it.

Just as concerning as the potential for deepfakes to be abused is that tech platforms are struggling to deal with them. For one thing, their content moderation issues are well documented. Most recently, a doctored video of Nancy Pelosi, slowed and pitch-edited to make her appear drunk, was tweeted by Trump. Twitter did not remove the video, YouTube did, and Facebook de-ranked it in the news feed.

For another, they have already tried, and failed, to moderate deepfakes. In a laudably fast response to the non-consensual pornographic deepfakes, Twitter, Gfycat, Pornhub and other platforms quickly acted to remove them and develop technology to help them do it.

However, once technology is released it’s like herding cats. Deepfakes are a moving feast and as soon as moderators find a way of detecting them, people will find a workaround.

But while there are important questions about how to deal with deepfakes, we’re making a mistake by siloing them off from broader questions and looking for exclusively technological solutions. We made the same mistake with fake news, where the prime offender was seen to be tech platforms rather than the politicians and journalists who had created an environment where lies could flourish.

The furore over deepfakes is a microcosm for the larger social discussion about the ethics of technology. It’s pretty clear the software shouldn’t have been developed and has led – and will continue to lead – to disproportionately more harm than good. And the lesson wasn’t learned. Recently the creator of an app called “DeepNude”, designed to give a realistic approximation of how a woman would look naked based on a clothed image, cancelled the launch fearing “the probability that people will misuse it is too high”.

What the legitimate use for this app is, I don’t know, but the response is revealing in how predictable it is. Reporting triggers some level of public outcry, at which point tech developers suddenly realise the error of their ways. Theirs is the conscience of hindsight: feeling bad after the fact rather than proactively looking for ways to advance the common good, treat people fairly and minimise potential harm. By now we should know better and expect more.

“Technology is a way of seeing the world. It’s a kind of promise – that we can bring the world under our control and bend it to our will.”

Why then do we continue to let the tech sector manage its own mess? Partly it’s because it is difficult, but it’s also because we’re still addicted to the promise of technology even as we come to criticise it. Technology is a way of seeing the world. It’s a kind of promise – that we can bring the world under our control and bend it to our will. Deepfakes afford us the ability to manipulate a person’s image. We can make them speak and move as we please, with a ready-made, if weak, moral defence: “No people were harmed in the making of this deepfake.”

But in asking for a technological fix to deepfakes, we’re fuelling the same logic that brought us here. Want to solve Silicon Valley? There’s an app for that! Eventually, maybe, that app will work. But we’re still treating the symptoms, not the cause.

The discussion around ethics and regulation in technology needs to expand to include more existential questions. How should we respond to the promises of technology? Do we really want the world to be completely under our control? What are the moral costs of doing this? What does it mean to see every unfulfilled desire as something that can be solved with an app?

Yes, we need to think about the bad actors who are going to use technology to manipulate, harm and abuse. We need to consider the now obvious fact that if a technology exists, someone is going to use it to optimise their orgasms. But we also need to consider what it means when the only place we can turn to solve the problems of technology is itself technological.

Big tech firms have an enormous set of moral and political responsibilities and it’s good they’re being asked to live up to them. An industry-wide commitment to basic legal standards, significant regulation and technological ethics will go a long way to solving the immediate harms of bad tech design. But it won’t get us out of the technological paradigm we seem to be stuck in. For that we don’t just need tech developers to read some moral philosophy. We need our politicians and citizens to do the same.

“At the moment we’re dancing around the edges of the issue, playing whack-a-mole as new technologies arise.”

At the moment we’re dancing around the edges of the issue, playing whack-a-mole as new technologies arise. We treat tech design and development like it’s inevitable. As a result, we aim to minimise risks rather than look more deeply at the values, goals and moral commitments built into the technology. As well as asking how we stop deepfakes, we need to ask why someone thought they’d be a good idea to begin with. There’s no app for that.



The new rules of ethical design in tech

This article was written for, and first published by Atlassian.

Because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

One of my favourite memes for the last few years is This is Fine. It’s a picture of a dog sitting in a burning building with a cup of coffee. “This is fine,” the dog announces. “I’m okay with what’s happening. It’ll all turn out okay.” Then the dog takes a sip of coffee and melts into the fire.

Working in ethics and technology, I hear a lot of “This is fine.” The tech sector has built (and is building) processes and systems that exclude vulnerable users, that design “nudges” steering users into privacy concessions they probably shouldn’t make, and that hardwire preconceived notions of right and wrong into technologies that will shape millions of people’s lives.

But many won’t acknowledge they could have ethics problems.

Credit: KC Green. https://topatoco.com/collections/this-is-fine

This is partly because, like the dog, they don’t concede that the fire might actually burn them in the end. Lots of people working in tech are willing to admit that someone else has a problem with ethics, but what they’re less likely to believe is that they themselves have an issue with ethics.

And I get it. Many times, people are building products that seem innocuous, fun, or practical. There’s nothing in there that makes us do a moral double-take.

The problem is, of course, that just because you’re not able to identify a problem doesn’t mean you won’t melt to death in the sixth frame of the comic. And there are issues you need to address in what you’re building, because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

Your product probably already has ethical issues

To put it bluntly: if you think you don’t need to consider ethics in your design process because your product doesn’t generate any ethical issues, you’ve missed something. Maybe your product is still fine, but you can’t be sure unless you’ve taken the time to consider your product and stakeholders through an ethical lens.

Look at it this way: if you haven’t made sure there are no bugs or biases in your design, you haven’t been the best designer you could be. Ethics is no different – it’s about making people (and their products) the best they can be.

Take Pokémon Go, for example. It’s an awesome little mobile game that gives users the chance to feel like Pokémon trainers in the real world. And it’s a business success story, recording a profit of $3 billion at the end of 2018. But it’s exactly the kind of innocuous-seeming app most would think doesn’t have any ethical issues.

But it does. It distracted drivers, brought users to dangerous locations in the hopes of catching Pokémon, disrupted public infrastructure, didn’t seek the consent of the sites it included in the game, unintentionally excluded rural neighbourhoods (many populated by racial minorities), and released Pokémon in offensive locations (for instance, a poison gas Pokémon in the Holocaust Museum in Washington DC).

Quite a list, actually.

This is a shame, because all of this meant that Pokémon Go was not the best game it could be. And as designers, that’s the goal – to make something great. But something can’t be great unless it’s good, and that’s why designers need to think about ethics.

Here are a few things you can embed within your design processes to make sure you’re not going to burn to death, ethically speaking, when you finally launch.

1. Start with ethical pre-mortems

When something goes wrong with a product, we know it’s important to do a postmortem to make sure we don’t repeat the same mistakes. Postmortems happen all the time in ethics. A product is launched, a scandal erupts, and ethicists wind up as talking heads on the news discussing what went wrong.

As useful as postmortems are, they can also be ways of washing over negligent practices. When something goes wrong and a spokesperson says, “We’re going to look closely at what happened to make sure it doesn’t happen again,” I want to say, “Why didn’t you do that before you launched?” That’s what an ethical premortem does.

Sit down with your team and talk about what would make this product an ethical failure. Then work backwards to the root causes of that possible failure. How could you mitigate that risk? Can you reduce the risk enough to justify going forward with the project? Are your systems, processes and teams set up in a way that enables ethical issues to be identified and addressed?

Tech ethicist Shannon Vallor provides a list of handy premortem questions:

  • How Could This Project Fail for Ethical Reasons?
  • What Would be the Most Likely Combined Causes of Our Ethical Failure/Disaster?
  • What Blind Spots Would Lead Us Into It?
  • Why Would We Fail to Act?
  • Why/How Would We Choose the Wrong Action?

  • What Systems/Processes/Checks/Failsafes Can We Put in Place to Reduce Failure Risk?

2. Ask the Death Star question

The book Rogue One: Catalyst tells the story of how the galactic empire managed to build the Death Star. The strategy was simple: take many subject matter experts and get them working in silos on small projects. With no team aware of what other teams were doing, only a few managers could make sense of what was actually being built.

Small teams, working in a limited role on a much larger project, with limited connection to the needs, goals, objectives or activities of other teams. Sound familiar? Siloing is a major source of ethical negligence. Teams whose workloads, incentives, and interests are limited to their particular contribution seldom can identify the downstream effects of their contribution, or what might happen when it’s combined with other work.

While it’s unlikely you’re secretly working for a Sith Lord, it’s still worth asking:

  • What’s the big picture here? What am I actually helping to build?
  • What contribution is my work making and are there ethical risks I might need to know about?
  • Are there dual-use risks in this product that I should be designing against?
  • If there are risks, are they worth it, given the potential benefits?

3. Get red teaming

Anyone who has worked in security will know that one of the best ways to know if a product is secure is to ask someone else to try to break it. We can use a similar concept for ethics. Once we’ve built something we think is great, ask some people to try to prove that it isn’t.

Red teams should ask:

  • What are the ethical pressure points here?
  • Have you made trade-offs between competing values/ideals? If so, have you made them in the right way?
  • What happens if we widen the circle of possible users to include some people you may not have considered?
  • Was this project one we should have taken on at all? (If you knew you were building the Death Star, it’s unlikely you could ever make it an ethical product. It’s a WMD.)
  • Is your solution the only one? Is it the best one?

4. Decide what your product’s saying

Ever seen a toddler discover a new toy? Their first instinct is to test the limits of what they can do. They’re not asking ‘What was the intention of the designer?’ – they’re testing how the item can satisfy their needs, whatever they may be. In this case they chew it, throw it, paint with it, push it down a slide… a toddler can’t access the designer’s intention. The only prompts they have are those built into the product itself.

It’s easy to think about our products as though they’ll only be used in the way we want them to be used. In reality, though, technology design and usage is more like a two-way conversation than a set of instructions. Given this, it’s worth asking: if the user had no instructions on how to use this product, what would they infer purely from the design?

For example, we might infer from the hidden-away nature of some privacy settings on social media platforms that we shouldn’t tweak our privacy settings. Social platforms might say otherwise, but their design tells a different story. Imagine what your product would be saying to a user if you let it speak for itself.

This is doubly important, because your design is saying something. All technology is full of affordances – subtle prompts that invite the user to engage with it in some ways rather than others. They’re there whether you intend them to be or not, but if you’re not aware of what your design affords, you can’t know what messages the user might be receiving.

Design teams should ask:

  • What could a user infer from the design about how a product can/should be used?
  • How do you want people to use this?
  • How don’t you want people to use this?
  • Do your design choices and affordances reflect these expectations?
  • Are you unnecessarily preventing other legitimate uses of the technology?

5. Don’t forget to show your work

One of the (few) things I remember from my high school math classes is this: you get one mark for getting the right answer, but three marks for showing the working that led you there.

It’s also important for learning: if you don’t get the right answer, being able to interrogate your process is crucial (that’s what a post-mortem is).

For ethical design, the process of showing your work is about being willing to publicly defend the ethical decisions you’ve made. It’s a practical version of The Sunlight Test – where you test your intentions by asking if you’d do what you were doing if the whole world was watching.

Ask yourself (and your team):

  • Are there any limitations to this product?
  • What trade-offs have you made (e.g. between privacy and user-customisation)?
  • Why did you build this product (what problems are you solving?)
  • Does this product risk being misused? If so, what have you done to mitigate those risks?
  • Are there any users who will have trouble using this product (for instance, people with disabilities)? If so, why can’t you fix this and why is it worth releasing the product, given it’s not universally accessible?
  • How likely is it that the good and bad effects will actually happen?

Ethics is an investment

I’m constantly amazed at how much money, time and personnel organisations are willing to invest in culture initiatives, wellbeing days and the like, while not spending a penny on ethics. There’s a general sense that if you’re a good person, then you’ll build ethical stuff, but the evidence overwhelmingly proves that’s not the case. Ethics needs to be something you invest in learning about, building resources and systems around, recruiting for, and incentivising.

It’s also something that needs to be engaged in for the right reasons. You can’t go into this process because you think it’s going to make you money or recruit the best people, because you’ll abandon it the second you find a more effective way to achieve those goals. A lot of the talk around ethics in technology at the moment has a particular flavour: anti-regulation. There is a hope that if companies are ethical, they can self-regulate.

I don’t see that as the role of ethics at all. Ethics can guide us toward making the best judgements about what’s right and what’s wrong. It can give us precision in our decisions, a language to explain why something is a problem, and a way of determining when something is truly excellent. But people also need justice: something to rely on if they’re the least powerful person in the room. Ethics has something to say here, but so do law and regulation.

If your organisation says they’re taking ethics seriously, ask them how open they are to accepting restraint and accountability. How much are they willing to invest in getting the systems right? Are they willing to sack their best performer if that person isn’t conducting themselves the way they should?



Following a year of scandals, what's the future for boards?

As guardians of moral behaviour, company boards continue to be challenged. After a year of wall-to-wall scandals, especially within the Banking and Finance sector, many are asking whether there are better ways to oversee what is going on in a business.

A series of damning inquiries, including the recent Royal Commission into Financial Services, has spurred much discussion about holding boards to account – but far less about the structure of boards and whose interests they serve.

Ethicist Lesley Cannold expressed her frustration at this state of affairs in a speech to the finance industry, saying the Royal Commission was a lost opportunity to look at “root and branch” reform.

“We need to think of changes that will propel different kinds of leaders into place and rate their performance according to different criteria – criteria that relate to the wellbeing of a range of stakeholders, not just shareholders,” she said at the Crossroads: The 2019 Banking and Finance Oath Conference in Sydney in August.

This issue is close to the heart of Andrew Linden, PhD researcher on German corporate governance and a sessional Lecturer in RMIT’s School of Management. Linden favours the German system of having an upper supervisory board, with 50 per cent of directors elected by employees, and a lower management board to handle the day-to-day operations.

This system was imposed on the Germans after World War II to ensure companies were more socially responsible but, despite its advantages, has not spread to the English-speaking world, says Linden.

“For 40 years, corporate Australia has been allowed to get away with the idea that all they had to do was to serve shareholders and to maximise the value returned to shareholders.

“Now, that’s never been a feature of the Corporate Law. And directors have had very specific duties, publicly imposed duties, that they ought to have been fulfilling – but they haven’t.”

It is the responsibility of directors of public companies to govern in the corporation’s best interests and also ensure that corporations do not impose costs on the wider community, he says.

“All these piecemeal responses to the Banking Royal Commission are just Band-Aids on bullet wounds. They are not actually going to fix the problem. All through these corporate governance debates, there has not been too much of a focus on board design.”

The German solution – a two-tier model

This board structure, proposed by Linden, would have non-executive directors on an upper (supervisory) board, which would be legally tasked with monitoring and control, approving strategy and appointing auditors.

A lower (management) board would have executive directors responsible for implementing the approved strategy and day-to-day management.

This structure would separate non-executive from executive directors and create clear, legally separate roles for both groups, he says.

“Research into European banks suggests having employee and union representation on supervisory boards, combined with introduction of employee elected works councils to deal with management over day-to-day issues, reduces systemic risk and holds executives accountable,” according to Linden, who wrote about the subject with Warren Staples (senior lecturer in Management, RMIT University) in The Conversation last year.

Denmark, Norway and Sweden also have employee directors on corporate boards and the model is being proposed in the US by Democratic presidential hopefuls, including Senators Elizabeth Warren and Bernie Sanders.

Says Linden: “All the solutions that people in the English-speaking world typically think about are ownership-based solutions. So, you either go for co-operative ownership as an alternative to shareholder ownership, or, alternatively, it’s public ownership. All of these debates over decades have been about ‘who are the best owners’, not necessarily about the design of their governing bodies.”

Linden says research shows the riskiest banks are those that are English-speaking, for-profit, shareholder-dominated and overseen by an independent-director-dominated board.

“And they have been the ones that have imposed the most cost on communities,” he says.

Outsourcing the board

Allowing consultant-like companies to oversee governance is a solution proposed by two law academics in the US, who say they are “trying to encourage people to innovate in governance in ways that are fundamentally different than just little tweaks at the edges”.

Law professors Stephen Bainbridge (UCLA) and Todd Henderson (University of Chicago) say organisations are familiar with the idea of outsourcing responsibilities to lawyers, accountants and financial service providers.

“We envision a corporation, say Microsoft or ExxonMobil, hiring another company, say Boards-R-Us, to provide it with director services, instead of hiring 10 or so separate ‘companies’ to do so,” Henderson explained in an article.

 “Just as other service firms, like Kirkland and Ellis, McKinsey and Company, or KPMG, are staffed by professionals with large support networks, so too would BSPs [board service providers] bring the various aspects of director services under a single roof. We expect the gains to efficiency from such a move to be quite large.

“We argue that hiring a BSP to provide board services instead of a loose group of sole proprietorships [non-executive directors] will increase board accountability, both from markets and judicial supervision.”

Outsourcing to specialists is a familiar concept, said Bainbridge in a video interview with The Conference Board.

“Would you rather deal with, you know, twelve part-timers who get hired in off the street, or would you rather deal with a professional with a team of professionals?”

Your director is a robot

A Hong Kong venture capital firm, Deep Knowledge Ventures, appointed the first-ever robot director to its board in 2014, giving it the power to veto investment decisions its artificial intelligence deemed too risky.

Australia’s Chief Scientist, Dr Alan Finkel, told company directors that he had initially thought the robo-director, named Vital, was a mere publicity stunt.

However, five years on “… the company is still in business. Vital is still on the Board. And waiting in the wings is her successor: Vital 2.0,” Finkel said at a governance summit held by the Australian Institute of Company Directors in March.

“The experiment was so successful that the CEO predicts we’ll see fully autonomous companies – able to operate without any human involvement – in the coming decade.

“Stop and think about it: fully autonomous companies able to operate without any human involvement. There’d be no-one to come along to AICD summits!”

Dr Finkel reassured his audience that their jobs were safe … for now.

“… those director-bots would still lack something vital – something truly vital – and that’s what we call artificial general intelligence: the digital equivalent of the package deal of human abilities, human insights and human experiences,” he said.

“The experts tell us that the world of artificial general intelligence is unlikely to be with us until 2050, perhaps longer. Thus, shareholders, customers and governments who want that package deal will have to look to you for quite some time,” he told the audience.

“They will rely on the value that you, and only you, can bring, as a highly capable human being, to your role.”

Linden agrees that robo-directors have limitations and that, before people get too excited about the prospect of technology providing the solution to governance, they need to get back to basics.

“All these issues to do with governance failures get down to questions of ethics and morality and lawfulness – on making judgments about what is appropriate conduct,” he says, adding that it was “hopelessly naïve” to expect machines to be able to make moral judgements.

“These systems depend on who designs them, what kind of data goes into them. That old analogy ‘garbage in, garbage out’ is just as applicable to artificial intelligence as it is to human systems.”

This article was originally written for The Ethics Alliance. Find out more about this corporate membership program.



Should you be afraid of apps like FaceApp?

Until last week, you would have been forgiven for thinking a meme couldn’t trigger fears about international security.

Since the widespread concerns over FaceApp emerged last week, many have been asking renewed questions about privacy, data ownership and transparency in the tech sector. But most of the reportage hasn’t gotten to the biggest ethical risk the FaceApp case reveals.

What is FaceApp?

In case you weren’t in the know, FaceApp is a ‘neural transformation filter’.

Basically, it uses AI to take a photo of your face and make it look different. The recent controversy centred on its ability to age people, pretty realistically, from a single photo. Use of the app was widespread, creating a viral trend – there were clicks and engagement to be had, so everyone started to hop on board.

Where does your data go?

With increasing popularity came increasing scrutiny. A number of people soon noticed that FaceApp’s terms of use seemed to give the company a huge range of rights to access and use the photos it had collected. There were fears the app could access all the photos in your photo stream, not just the one you chose to upload.

There were questions about how you could delete your data from the service. And worst of all for many, the makers of the app, Wireless Labs, are based in Russia. US Senate Minority Leader Chuck Schumer even asked the FBI to investigate the app.

The media commentary has been pretty widespread, suggesting that the app sends data back to Russia, lacks transparency about how it will or won’t be used and has no accessible data ethics principles. At least two of those are true. There isn’t much in FaceApp’s disclosure that would give a user any sense of confidence in the app’s security or respect for privacy.

Unsurprisingly, this hasn’t amounted to much. Giving away our data in irresponsible ways has become a bit like comfort eating. You know it’s bad, but you’re still going to do it.

The reasons are likely similar to the reasons we indulge other petty vices: the benefits are obvious and immediate; the harms are distant and abstract. And whilst we’d all like to think we’ve got more self-control than the kids in those delayed gratification psychology experiments, more often than not our desire for fun or curiosity trumps any concern we have over how our data is used.

Should you be worried?

Is this a problem? To the extent that this data – easily accessed – can be used for a range of goals we likely don’t support, yes. It also gives rise to a range of complex ethical questions concerning our responsibility.

Let’s say I willingly give my data to FaceApp. This data is then aggregated and on-sold in a data marketplace. A dataset comprising millions of facial photos is then used to train facial recognition AI, which is used to track down political dissidents in Russia. To what extent should I consider myself responsible for political oppression on the other side of the world?

In climate change ethics, there is a school of thought that suggests even if our actions can’t change an outcome – for instance, by making a meaningful reduction to emissions – we still have a moral obligation not to make the problem worse.

It might be true that a dataset would still be on-sold without our input, but that alone doesn’t seem to justify adding our information, or throwing up our hands and giving up. In this hypothetical, giving up – or not caring – means ignoring my (admittedly small) role in human rights violations and political injustice.

A troubling peek into the future

In reality, it’s really unlikely that’s what FaceApp is actually using your data to do. It’s far more likely, according to the MIT Technology Review, that your face might be used to train FaceApp to get even better at what it does.

It might use your face to help improve software that analyses faces to determine age and gender. Or it might be used – perhaps most scarily – to train AI to create deepfakes or faces of people who don’t exist. All of this is a far cry from the nightmare scenario sketched out above.

But even if my horror story was accurate, would it matter? It seems unlikely.

By the time tech journalists were talking about the potential data issues with FaceApp, millions had already uploaded their photos into the app. The ship had sailed, and it set off with barely a question asked of it. It’s also likely that plenty of people read about the data issues and then installed the app just to see what all the fuss is about.

Who is responsible?

I’m pulled in two directions when I wonder who we should hold responsible here. Of course, designers are clever and intentionally design their apps in ways that make them smooth and easy to use. They eliminate the friction points that facilitate serious thinking and reflection.

But that speed and efficiency is partly there because we want it to be there. We don’t want to actually read the terms of use agreement, and companies willingly give us a quick way to avoid doing so (whilst lying and saying we have).

This is a Faustian pact – we let tech companies sell us stuff that’s potentially bad for us, so long as it’s fun.


The important reflection around FaceApp isn’t that the Russians are coming for us – a view that, as Kaitlyn Tiffany noted for Vox, smacks slightly of racism and xenophobia. The reflection is how easily we give up our principled commitments to ethics, privacy and wokeful use of technology as soon as someone flashes some viral content at us.

In Ethical by Design: Principles for Good Technology, Simon Longstaff and I made the point that technology isn’t just a thing we build and use. It’s a world view. When we see the world technologically, our central values are things like efficiency, effectiveness and control. That is, we’re more interested in how we do things than in what we’re doing.

Two sides of the story

For me, that’s the FaceApp story. The question wasn’t ‘is this app safe to use?’ (probably no less so than most other photo apps), but ‘how much fun will I have?’ It’s a worldview where we’re happy to pay any price for our kicks, so long as that price is hidden from us. FaceApp might not have used this impulse for maniacal ends, but it has demonstrated a pretty clear vulnerability.

Is this how the world ends, not with a bang, but with a chuckle and a hashtag?



Injecting artificial intelligence with human empathy


The great promise of artificial intelligence is efficiency. The finely tuned mechanics of AI will free up societies to explore new, softer skills while industries thrive on automation.

However, if we’ve learned anything from the great promise of the Internet – which was supposed to bring equality by leveling the playing field – it’s clear new technologies can be rife with complications unwittingly introduced by the humans who created them.

The rise of artificial intelligence is exciting, but the drive toward efficiency must not happen without a corresponding push for strong ethics to guide the process. Otherwise, the advancements of AI will be undercut by human fallibility and biases. This is as true for AI’s application in the pursuit of social justice as it is in basic business practices like customer service.

Empathy

The ethical questions surrounding AI have long been the subject of science fiction, but today they are quickly becoming real-world concerns. Human intelligence has a direct relationship to human empathy. If this sensitivity doesn’t translate into artificial intelligence the consequences could be dire. We must examine how humans learn in order to build an ethical education process for AI.

AI is not merely programmed – it is trained like a human. If AI doesn’t learn the right lessons, ethical problems will inevitably arise. We’ve already seen examples, such as the tendency of facial recognition software to misidentify people of colour as criminals.


Biased AI

In the United States, a piece of software called Correctional Offender Management Profiling for Alternative Sanctions (Compas) was used to assess the risk of defendants reoffending and had an impact on their sentencing. Compas was found to be twice as likely to misclassify non-white defendants as higher risk offenders, while white defendants were misclassified as lower risk much more often than non-white defendants. This is a training issue. If AI is trained predominantly on Caucasian faces, it will disadvantage minorities.
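
To make that mechanism concrete, here is a minimal, purely hypothetical sketch – synthetic data, invented ‘group A’ and ‘group B’ labels, and an assumed 90/10 training split, none of it drawn from Compas or any real system. It shows how a single model fitted to a sample dominated by one group can end up with a much higher error rate for the under-represented group, even though nothing in the code is written with ill intent.

```python
# Hypothetical illustration only: synthetic data, invented groups and an
# assumed 90/10 split. Not a model of Compas or any real-world system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate one group: two features whose link to the label is offset by `shift`."""
    y = rng.integers(0, 2, n)                        # labels: roughly half 0s, half 1s
    X = rng.normal(loc=shift + y[:, None], size=(n, 2))
    return X, y

# Training data dominated by group A (9,000 rows) with a thin slice of group B (1,000 rows).
X_a, y_a = make_group(9_000, shift=0.0)
X_b, y_b = make_group(1_000, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Evaluate on fresh, equal-sized samples from each group: the boundary learned
# mostly from group A misclassifies a far larger share of group B.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(5_000, shift)
    error = (model.predict(X_test) != y_test).mean()
    print(f"{name}: error rate {error:.1%}")
```

The exact numbers don’t matter; the shape of the failure does. No one programmed prejudice into the model – the skew comes entirely from who is, and isn’t, well represented in the training data.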

This example might seem far removed from us here in Australia, but consider the consequences if similar technology were in place here. What if it were being used at airports for customs checks, or as part of a pre-screening process used by recruiters and employment agencies?

“Human intelligence has a direct relationship to human empathy.”

If racism and other forms of discrimination are unintentionally programmed into AI, not only will it mirror many of the failings of analog society, but it could magnify them.

While heightened instances of injustice are obviously unacceptable outcomes for AI, there are additional possibilities that don’t serve our best interests and should be avoided. The foremost example of this is in customer service.

AI vs human customer service

Every business wants the most efficient and productive processes possible but sometimes better is actually worse. Eventually, an AI solution will do a better job at making appointments, answering questions, and handling phone calls. When that time comes, AI might not always be the right solution.

Particularly with more complex matters, humans want to talk to other humans. Not only do they want their problem resolved, but they want to feel like they’ve been heard. They want empathy. This is something AI cannot do.

AI is inevitable. In fact, you’re probably already using it without being aware of it. There is no doubt that the proper application of AI will make us more efficient as a society, but relying blindly on AI is inadvisable.

We must be aware of our biases when creating new technologies and do everything in our power to ensure they aren’t baked into algorithms. As more functions are handed over to AI, we must also remember that sometimes there’s no substitute for human-to-human interaction.

After all, we’re only human.

Allan Waddell is founder and Co-CEO of Kablamo, an Australian cloud-based software company.




Why the future is workless

Predictions for the future of work can make grim reading – depending on your point of view. Many of our jobs are being automated out of existence; however, it looks like we will have a lot more free time.

Writer and Doctor of Philosophy, Tim Dunlop, says people and governments are going to have to rethink how we support ourselves when there isn’t enough paid work to go around.

Dunlop does not subscribe to the view, often put forward by economists, that technology will generate enough jobs to replace the ones destroyed by robotics and artificial intelligence.

“I don’t know if that’s necessarily true in the medium term… I think there’s going to be a really nasty transition for more than a generation,” says Dunlop, the author of Why the Future is Workless and The Future of Everything.

“We are going through this huge period of transition at the moment and we don’t really know where it’s heading. We’re at the bottom of the curve, in terms of what [new technologies] are going to be capable of.”

Dunlop says framing the question of the future of work as “will a robot take my job?” is reductive. Instead, we should be looking at what sorts of jobs will be available and what the conditions will be for the jobs that are offered.

“If we are working less hours, or there is less work, or the economy just needs fewer people, then we don’t have a technology problem, we’ve got a distribution problem,” he says.

The “hollowing out” of the job market means that middle-skilled jobs are disappearing because they can be automated. Trying to “upskill” people who have been displaced, or redirect them into jobs that need a human touch (such as caring jobs) is not an answer for everyone.

“Not everybody can have a high-skill, high-paid sort of job. You need those middle-level jobs as well. And if you don’t have those, then society’s got a problem,” he says.

Dunlop says one way of addressing the issue is a universal basic income: where everybody gets a standard payment to cover their basic needs.

“I don’t think you can rely on wages to distribute wealth in an equitable way, in the way that might have been in the recent past,” he says.

The idea of a Universal Basic Income has been around since the 16th century. The payment is unconditional – it is not based on household income.

In Australia, the single-person pension (now just over $24,000 per annum) might be seen as an appropriate level of payment, according to Dunlop, in an article written for the Inside Story website.

“It is basic also in the sense that it provides an income floor below which no one can fall. The payment is unconditional in that no one has to fulfil any obligations in order to receive it, and even if you earn other income you’re still eligible. That makes it universal, equally available to the poorest member of society as it is to the start-up billionaire,” he writes.

Much of the discomfort often voiced about such a scheme centres around the idea that people are being paid to “do nothing” and that it removes the incentive to work.

However, trials show that in developing countries, people use the money to improve their situation, starting businesses, sending children to school and avoiding prostitution. In Europe and Canada, people receiving the payment tend to stay in their jobs and entrepreneurship increases.

Trials of the Universal Basic Income are now taking place globally – from Switzerland to Canada to Kenya – but most are limited to the unemployed or financially needy, rather than being universal.

Dunlop says that, rather than worrying about whether people “deserve” the payment, we should accept the concept of “shared citizenship”. Whether we do paid work, or not, we are all contributing to the overall wealth of society.

Inequality arises when wealth gets divided up between those who do paid work and those who own the means of production. With a Universal Basic Income, everybody’s contribution is valued and people get a benefit from the roles they play in the formal and informal economy, he says.

So what will we be doing in the future if we are not doing paid work? Dunlop says we will still have our hobbies, passions and families – and we can derive just as much (if not more) meaning from those things as we do from our jobs.

We are already seeing evidence of efforts to reduce the hours of work, with companies trying four-day work weeks (paid for five), the Swedish Government trialling a six-hour workday, and a French law banning work emails after hours.

Dunlop says a “work ethic” culture makes it hard for these reforms to succeed, and unions tend to see a push for reduced hours as a “trojan horse” for increasing casualisation and insecure work.

“That’s where things like the French rule about emails probably comes in handy. It sets some parameters around what society sees as acceptable and maybe it needs some government leadership in this area.”

This article was originally written for The Ethics Alliance. Find out more about this corporate membership program.
