The new rules of ethical design in tech

This article was written for, and first published by Atlassian.

Because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

One of my favourite memes for the last few years is This is Fine. It’s a picture of a dog sitting in a burning building with a cup of coffee. “This is fine,” the dog announces. “I’m okay with what’s happening. It’ll all turn out okay.” Then the dog takes a sip of coffee and melts into the fire.

Working in ethics and technology, I hear a lot of “This is fine.” The tech sector has built (and is building) processes and systems that exclude vulnerable users, designs “nudges” that push users into privacy concessions they probably shouldn’t make, and hardwires preconceived notions of right and wrong into technologies that will shape millions of people’s lives.

But many won’t acknowledge they could have ethics problems.

Credit: KC Green.

This is partly because, like the dog, they don’t concede that the fire might actually burn them in the end. Lots of people working in tech are willing to admit that someone else has a problem with ethics, but they’re less likely to believe that they themselves have an issue with ethics.

And I get it. Many times, people are building products that seem innocuous, fun, or practical. There’s nothing in there that makes us do a moral double-take.

The problem is, of course, that just because you’re not able to identify a problem doesn’t mean you won’t melt to death in the sixth frame of the comic. And there are issues you need to address in what you’re building, because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

Your product probably already has ethical issues

To put it bluntly: if you think you don’t need to consider ethics in your design process because your product doesn’t generate any ethical issues, you’ve missed something. Maybe your product is still fine, but you can’t be sure unless you’ve taken the time to consider your product and stakeholders through an ethical lens.

Look at it this way: if you haven’t made sure there are no bugs or biases in your design, you haven’t been the best designer you could be. Ethics is no different – it’s about making people (and their products) the best they can be.

Take Pokémon Go, for example. It’s an awesome little mobile game that gives users the chance to feel like Pokémon trainers in the real world. And it’s a business success story, generating some $3 billion in revenue by the end of 2018. But it’s exactly the kind of innocuous-seeming app most would think doesn’t have any ethical issues.

But it does. It distracted drivers, brought users to dangerous locations in the hopes of catching Pokémon, disrupted public infrastructure, didn’t seek the consent of the sites it included in the game, unintentionally excluded rural neighbourhoods (many populated by racial minorities), and released Pokémon in offensive locations (for instance, a poison gas Pokémon in the Holocaust Museum in Washington DC).

Quite a list, actually.

This is a shame, because all of this meant that Pokémon Go was not the best game it could be. And as designers, that’s the goal – to make something great. But something can’t be great unless it’s good, and that’s why designers need to think about ethics.

Here are a few things you can embed within your design processes to make sure you’re not going to burn to death, ethically speaking, when you finally launch.

1. Start with ethical pre-mortems

When something goes wrong with a product, we know it’s important to do a postmortem to make sure we don’t repeat the same mistakes. Postmortems happen all the time in ethics. A product is launched, a scandal erupts, and ethicists wind up as talking heads on the news discussing what went wrong.

As useful as postmortems are, they can also be ways of washing over negligent practices. When something goes wrong and a spokesperson says, “We’re going to look closely at what happened to make sure it doesn’t happen again,” I want to say, “Why didn’t you do that before you launched?” That’s what an ethical premortem does.

Sit down with your team and talk about what would make this product an ethical failure. Then work backwards to the root causes of that possible failure. How could you mitigate that risk? Can you reduce the risk enough to justify going forward with the project? Are your systems, processes and teams set up in a way that enables ethical issues to be identified and addressed?

Tech ethicist Shannon Vallor provides a list of handy premortem questions:

  • How Could This Project Fail for Ethical Reasons?
  • What Would be the Most Likely Combined Causes of Our Ethical Failure/Disaster?
  • What Blind Spots Would Lead Us Into It?
  • Why Would We Fail to Act?
  • Why/How Would We Choose the Wrong Action?

  • What Systems/Processes/Checks/Failsafes Can We Put in Place to Reduce Failure Risk?

2. Ask the Death Star question

The book Catalyst: A Rogue One Novel tells the story of how the Galactic Empire managed to build the Death Star. The strategy was simple: take many subject matter experts and get them working in silos on small projects. With no team aware of what other teams were doing, only a few managers could make sense of what was actually being built.

Small teams, working in a limited role on a much larger project, with limited connection to the needs, goals, objectives or activities of other teams. Sound familiar? Siloing is a major source of ethical negligence. Teams whose workloads, incentives and interests are limited to their particular contribution can seldom identify the downstream effects of that contribution, or what might happen when it’s combined with other work.

While it’s unlikely you’re secretly working for a Sith Lord, it’s still worth asking:

  • What’s the big picture here? What am I actually helping to build?
  • What contribution is my work making, and are there ethical risks I need to know about?
  • Are there dual-use risks in this product that I should be designing against?
  • If there are risks, are they worth it, given the potential benefits?

3. Get red teaming

Anyone who has worked in security will know that one of the best ways to find out whether a product is secure is to ask someone else to try to break it. We can apply a similar concept to ethics. Once we’ve built something we think is great, ask some people to try to prove that it isn’t.

Red teams should ask:

  • What are the ethical pressure points here?
  • Have you made trade-offs between competing values/ideals? If so, have you made them in the right way?
  • What happens if you widen the circle of possible users to include people you may not have considered?
  • Was this project one we should have taken on at all? (If you knew you were building the Death Star, it’s unlikely you could ever make it an ethical product. It’s a WMD.)
  • Is your solution the only one? Is it the best one?

4. Decide what your product’s saying

Ever seen a toddler discover a new toy? Their first instinct is to test the limits of what it can do. They’re not asking what the designer intended; they’re testing how the item can satisfy their needs, whatever those may be. They chew it, throw it, paint with it, push it down a slide… a toddler can’t access the designer’s intention. The only prompts they have are those built into the product itself.

It’s easy to think about our products as though they’ll only be used in the way we want them to be used. In reality, though, technology design and usage is more like a two-way conversation than a set of instructions. Given this, it’s worth asking: if the user had no instructions on how to use this product, what would they infer purely from the design?

For example, we might infer from the hidden-away nature of some privacy settings on social media platforms that we shouldn’t tweak our privacy settings. Social platforms might say otherwise, but their design tells a different story. Imagine what your product would be saying to a user if you let it speak for itself.

This is doubly important, because your design is saying something. All technology is full of affordances – subtle prompts that invite the user to engage with it in some ways rather than others. They’re there whether you intend them to be or not, but if you’re not aware of what your design affords, you can’t know what messages the user might be receiving.

Design teams should ask:

  • What could a user infer from the design about how the product can/should be used?
  • How do you want people to use this?
  • How don’t you want people to use this?
  • Do your design choices and affordances reflect these expectations?
  • Are you unnecessarily preventing other legitimate uses of the technology?

5. Don’t forget to show your work

One of the (few) things I remember from my high school math classes is this: you get one mark for getting the right answer, but three marks for showing the working that led you there.

It’s also important for learning: if you don’t get the right answer, being able to interrogate your process is crucial (that’s what a post-mortem is).

For ethical design, the process of showing your work is about being willing to publicly defend the ethical decisions you’ve made. It’s a practical version of The Sunlight Test – where you test your intentions by asking if you’d do what you were doing if the whole world was watching.

Ask yourself (and your team):

  • Are there any limitations to this product?
  • What trade-offs have you made (e.g. between privacy and user-customisation)?
  • Why did you build this product? (What problems are you solving?)
  • Does this product risk being misused? If so, what have you done to mitigate those risks?
  • Are there any users who will have trouble using this product (for instance, people with disabilities)? If so, why can’t you fix this and why is it worth releasing the product, given it’s not universally accessible?
  • How likely are the good and bad effects to actually occur?

Ethics is an investment

I’m constantly amazed at how much money, time and personnel organisations are willing to invest in culture initiatives, wellbeing days and the like, while spending nothing on ethics. There’s a general sense that if you’re a good person, you’ll build ethical stuff, but the evidence overwhelmingly shows that’s not the case. Ethics needs to be something you invest in learning about, building resources and systems around, recruiting for, and incentivising.

It’s also something that needs to be engaged in for the right reasons. You can’t go into this process because you think it’s going to make you money or recruit the best people, because you’ll abandon it the second you find a more effective way to achieve those goals. A lot of the talk around ethics in technology at the moment has a particular flavour: anti-regulation. There is a hope that if companies are ethical, they can self-regulate.

I don’t see that as the role of ethics at all. Ethics can guide us toward making the best judgements about what’s right and what’s wrong. It can give us precision in our decisions, a language to explain why something is a problem, and a way of determining when something is truly excellent. But people also need justice: something to rely on if they’re the least powerful person in the room. Ethics has something to say here, but so do law and regulation.

If your organisation says they’re taking ethics seriously, ask them how open they are to accepting restraint and accountability. How much are they willing to invest in getting the systems right? Are they willing to sack their best performer if that person isn’t conducting themselves the way they should?


Following a year of scandals, what's the future for boards?

As guardians of moral behaviour, company boards continue to be challenged. After a year of wall-to-wall scandals, especially within the Banking and Finance sector, many are asking whether there are better ways to oversee what is going on in a business.

A series of damning inquiries, including the recent Royal Commission into Financial Services, has spurred much discussion about holding boards to account – but far less about the structure of boards and whose interests they serve.

Ethicist Lesley Cannold expressed her frustration at this state of affairs in a speech to the finance industry, saying the Royal Commission was a lost opportunity to look at “root and branch” reform.

“We need to think of changes that will propel different kinds of leaders into place and rate their performance according to different criteria – criteria that relate to the wellbeing of a range of stakeholders, not just shareholders,” she said at the Crossroads: The 2019 Banking and Finance Oath Conference in Sydney in August.

This issue is close to the heart of Andrew Linden, a PhD researcher on German corporate governance and sessional lecturer in RMIT’s School of Management. Linden favours the German system of having an upper supervisory board, with 50 per cent of directors elected by employees, and a lower management board to handle day-to-day operations.

This system was imposed on the Germans after World War II to ensure companies were more socially responsible but, despite its advantages, has not spread to the English-speaking world, says Linden.

“For 40 years, corporate Australia has been allowed to get away with the idea that all they had to do was to serve shareholders and to maximise the value returned to shareholders.

“Now, that’s never been a feature of the Corporate Law. And directors have had very specific duties, publicly imposed duties, that they ought to have been fulfilling – but they haven’t.”

It is the responsibility of directors of public companies to govern in the corporation’s best interests and also ensure that corporations do not impose costs on the wider community, he says.

“All these piecemeal responses to the Banking Royal Commission are just Band-Aids on bullet wounds. They are not actually going to fix the problem. All through these corporate governance debates, there has not been too much of a focus on board design.”

The German solution – a two-tier model

This board structure, proposed by Linden, would have non-executive directors on an upper (supervisory) board, which would be legally tasked with monitoring and control, approving strategy and appointing auditors.

A lower, management board would have executive directors responsible for implementing the approved strategy and day-to-day management.

This structure would separate non-executive from executive directors and create clear, legally separate roles for both groups, he says.

“Research into European banks suggests having employee and union representation on supervisory boards, combined with introduction of employee elected works councils to deal with management over day-to-day issues, reduces systemic risk and holds executives accountable,” according to Linden, who wrote about the subject with Warren Staples (senior lecturer in Management, RMIT University) in The Conversation last year.

Denmark, Norway and Sweden also have employee directors on corporate boards and the model is being proposed in the US by Democratic presidential hopefuls, including Senators Elizabeth Warren and Bernie Sanders.

Says Linden: “All the solutions that people in the English-speaking world typically think about are ownership-based solutions. So, you either go for co-operative ownership as an alternative to shareholder ownership, or, alternatively, it’s public ownership. All of these debates over decades have been about ‘who are the best owners’, not necessarily about the design of their governing bodies.”

Linden says research shows the riskiest banks are those that are English-speaking, for-profit, shareholder-dominated and overseen by independent-director-dominated boards.

“And they have been the ones that have imposed the most cost on communities,” he says.

Outsourcing the board

Allowing consultant-like companies to oversee governance is a solution proposed by two law academics in the US, who say they are “trying to encourage people to innovate in governance in ways that are fundamentally different than just little tweaks at the edges”.

Law professors Stephen Bainbridge (UCLA) and Todd Henderson (University of Chicago) say organisations are familiar with the idea of outsourcing responsibilities to lawyers, accountants and financial service providers.

“We envision a corporation, say Microsoft or ExxonMobil, hiring another company, say Boards-R-Us, to provide it with director services, instead of hiring 10 or so separate ‘companies’ to do so,” Henderson explained in an article.

“Just as other service firms, like Kirkland and Ellis, McKinsey and Company, or KPMG, are staffed by professionals with large support networks, so too would BSPs [board service providers] bring the various aspects of director services under a single roof. We expect the gains to efficiency from such a move to be quite large.

“We argue that hiring a BSP to provide board services instead of a loose group of sole proprietorships [non-executive directors] will increase board accountability, both from markets and judicial supervision.”

Outsourcing to specialists is a familiar concept, said Bainbridge in a video interview with The Conference Board.

“Would you rather deal with, you know, twelve part-timers who get hired in off the street, or would you rather deal with a professional with a team of professionals?”

Your director is a robot

A Hong Kong venture capital firm, Deep Knowledge Ventures, appointed the first-ever robot director to its board in 2014, giving the algorithm the power to veto investment decisions it deemed too risky.

Australia’s Chief Scientist, Dr Alan Finkel, told company directors that he had initially thought the robo-director, named Vital, was a mere publicity stunt.

However, five years on “… the company is still in business. Vital is still on the Board. And waiting in the wings is her successor: Vital 2.0,” Finkel said at a governance summit held by the Australian Institute of Company Directors in March.

“The experiment was so successful that the CEO predicts we’ll see fully autonomous companies – able to operate without any human involvement – in the coming decade.

“Stop and think about it: fully autonomous companies able to operate without any human involvement. There’d be no-one to come along to AICD summits!”

Dr Finkel reassured his audience that their jobs were safe … for now.

“… those director-bots would still lack something vital – something truly vital – and that’s what we call artificial general intelligence: the digital equivalent of the package deal of human abilities, human insights and human experiences,” he said.

“The experts tell us that the world of artificial general intelligence is unlikely to be with us until 2050, perhaps longer. Thus, shareholders, customers and governments who want that package deal will have to look to you for quite some time,” he told the audience.

“They will rely on the value that you, and only you, can bring, as a highly capable human being, to your role.”

Linden agrees that robo-directors have limitations and that, before people get too excited about the prospect of technology providing the solution to governance, they need to get back to basics.

“All these issues to do with governance failures get down to questions of ethics and morality and lawfulness – on making judgments about what is appropriate conduct,” he says, adding that it was “hopelessly naïve” to expect machines to be able to make moral judgements.

“These systems depend on who designs them, what kind of data goes into them. That old analogy ‘garbage in, garbage out’ is just as applicable to artificial intelligence as it is to human systems.”

This article was originally written for The Ethics Alliance, a corporate membership program.


Should you be afraid of apps like FaceApp?

Until last week, you would have been forgiven for thinking a meme couldn’t trigger fears about international security.

But since the widespread concerns over FaceApp last week, many are asking renewed questions about privacy, data ownership and transparency in the tech sector. Yet most of the reportage hasn’t gotten to the biggest ethical risk the FaceApp case reveals.

What is FaceApp?

In case you weren’t in the know, FaceApp is a ‘neural transformation filter’.

Basically, it uses AI to take a photo of your face and make it look different. The recent controversy centred on its ability to age people, pretty realistically, from just a single photo. Use of the app was widespread, creating a viral trend – there were clicks and engagements to be made out of the app, so everyone started to hop on board.

Where does your data go?

With increasing popularity comes increasing scrutiny. A number of people soon noticed that FaceApp’s terms of use seemed to give the company a huge range of rights to access and use the photos it collected. There were fears the app could access all the photos in your photo stream, not just the one you chose to upload.

There were questions about how you could delete your data from the service. And worst of all for many, the makers of the app, Wireless Lab, are based in Russia. US Senate Minority Leader Chuck Schumer even asked the FBI to investigate the app.

The media commentary has been pretty widespread, suggesting that the app sends data back to Russia, lacks transparency about how it will or won’t be used and has no accessible data ethics principles. At least two of those are true. There isn’t much in FaceApp’s disclosure that would give a user any sense of confidence in the app’s security or respect for privacy.

Unsurprisingly, this hasn’t amounted to much. Giving away our data in irresponsible ways has become a bit like comfort eating. You know it’s bad, but you’re still going to do it.

The reasons are likely similar to the reasons we indulge other petty vices: the benefits are obvious and immediate; the harms are distant and abstract. And whilst we’d all like to think we’ve got more self-control than the kids in those delayed gratification psychology experiments, more often than not our desire for fun or curiosity trumps any concern we have over how our data is used.

Should you be worried?

Is this a problem? To the extent that this data – easily accessed – can be used for a range of goals we likely don’t support, yes. It also gives rise to a range of complex ethical questions concerning our responsibility.

Let’s say I willingly give my data to FaceApp. This data is then aggregated and on-sold in a data marketplace. A dataset comprising millions of facial photos is then used to train facial recognition AI, which is used to track down political dissidents in Russia. To what extent should I consider myself responsible for political oppression on the other side of the world?

In climate change ethics, there is a school of thought that suggests even if our actions can’t change an outcome – for instance, by making a meaningful reduction to emissions – we still have a moral obligation not to make the problem worse.

It might be true that a dataset would still be on-sold without our input, but that alone doesn’t seem to justify adding our information or throwing up our hands and giving up. In this hypothetical, giving up – or not caring – means ignoring my (admittedly small) role in human rights violations and political injustice.

A troubling peek into the future

In reality, it’s really unlikely that’s what FaceApp is actually using your data to do. It’s far more likely, according to the MIT Technology Review, that your face might be used to train FaceApp to get even better at what it does.

It might use your face to help improve software that analyses faces to determine age and gender. Or it might be used – perhaps most scarily – to train AI to create deepfakes or faces of people who don’t exist. All of this is a far cry from the nightmare scenario sketched out above.

But even if my horror story was accurate, would it matter? It seems unlikely.

By the time tech journalists were talking about the potential data issues with FaceApp, millions had already uploaded their photos into the app. The ship had sailed, and it set off with barely a question asked of it. It’s also likely that plenty of people read about the data issues and then installed the app just to see what all the fuss was about.

Who is responsible?

I’m pulled in two directions when I wonder who we should hold responsible here. Of course, designers are clever and intentionally design their apps in ways that make them smooth and easy to use. They eliminate the friction points that facilitate serious thinking and reflection.

But that speed and efficiency is partly there because we want it to be there. We don’t want to actually read the terms of use agreement, and companies willingly give us a quick way to avoid doing so (whilst lying, and saying we have).

This is a Faustian pact – we let tech companies sell us stuff that’s potentially bad for us, so long as it’s fun.



The important reflection around FaceApp isn’t that the Russians are coming for us – a view that, as Kaitlyn Tiffany noted for Vox, smacks slightly of racism and xenophobia. The reflection is how easily we give up our principled commitments to ethics, privacy and wokeful use of technology as soon as someone flashes some viral content at us.

In Ethical by Design: Principles for Good Technology, Simon Longstaff and I made the point that technology isn’t just a thing we build and use. It’s a world view. When we see the world technologically, our central values are things like efficiency, effectiveness and control. That is, we’re more interested in how we do things than in what we’re doing.

Two sides of the story

For me, that’s the FaceApp story. The question wasn’t ‘is this app safe to use?’ (probably no less so than most other photo apps), but ‘how much fun will I have?’ It’s a worldview where we’re happy to pay any price for our kicks, so long as that price is hidden from us. FaceApp might not have used this impulse for maniacal ends, but it has demonstrated a pretty clear vulnerability.

Is this how the world ends, not with a bang, but with a chuckle and a hashtag?


Injecting artificial intelligence with human empathy


The great promise of artificial intelligence is efficiency. The finely tuned mechanics of AI will free up societies to explore new, softer skills while industries thrive on automation.

However, if we’ve learned anything from the great promise of the Internet – which was supposed to bring equality by leveling the playing field – it’s clear new technologies can be rife with complications unwittingly introduced by the humans who created them.

The rise of artificial intelligence is exciting, but the drive toward efficiency must not happen without a corresponding push for strong ethics to guide the process. Otherwise, the advancements of AI will be undercut by human fallibility and biases. This is as true for AI’s application in the pursuit of social justice as it is in basic business practices like customer service.


The ethical questions surrounding AI have long been the subject of science fiction, but today they are quickly becoming real-world concerns. Human intelligence has a direct relationship to human empathy. If this sensitivity doesn’t translate into artificial intelligence the consequences could be dire. We must examine how humans learn in order to build an ethical education process for AI.

AI is not merely programmed – it is trained like a human. If AI doesn’t learn the right lessons, ethical problems will inevitably arise. We’ve already seen examples, such as the tendency of facial recognition software to misidentify people of colour as criminals.



Biased AI

In the United States, a piece of software called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was used to assess the risk of defendants reoffending, influencing their sentencing. COMPAS was found to be nearly twice as likely to misclassify non-white defendants as higher-risk offenders, while white defendants were misclassified as lower risk much more often than non-white defendants. This is a training issue: if an AI system is trained predominantly on data from one group, it will disadvantage others.
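The disparity described above is usually surfaced by an audit that compares error rates across groups. The sketch below is a minimal, illustrative version of such a check – the data, group labels and numbers are invented for the example, not drawn from the real COMPAS dataset.

```python
# A minimal sketch of how misclassification disparity can be measured.
# All records here are invented for illustration.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    relevant = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in relevant if r["predicted_high_risk"]]
    return len(flagged) / len(relevant)

# Toy outcomes: each record is (group, model prediction, actual outcome).
records = [
    {"group": "a", "predicted_high_risk": True,  "reoffended": False},
    {"group": "a", "predicted_high_risk": False, "reoffended": False},
    {"group": "b", "predicted_high_risk": False, "reoffended": False},
    {"group": "b", "predicted_high_risk": False, "reoffended": False},
]

# An audit compares rates across groups; a large gap is a fairness red flag.
gap = false_positive_rate(records, "a") - false_positive_rate(records, "b")
print(gap)  # 0.5 for this toy data: group "a" is wrongly flagged far more often
```

Equal overall accuracy can hide exactly this kind of gap, which is why audits look at per-group error rates rather than a single headline number.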

This example might seem far removed from us here in Australia, but consider the consequences if it were in place here. What if a similar technology were being used at airports for customs checks, or as part of a pre-screening process used by recruiters and employment agencies?


If racism and other forms of discrimination are unintentionally programmed into AI, not only will it mirror many of the failings of analog society, but it could magnify them.

While heightened instances of injustice are obviously unacceptable outcomes for AI, there are additional possibilities that don’t serve our best interests and should be avoided. The foremost example of this is in customer service.

AI vs human customer service

Every business wants the most efficient and productive processes possible but sometimes better is actually worse. Eventually, an AI solution will do a better job at making appointments, answering questions, and handling phone calls. When that time comes, AI might not always be the right solution.

Particularly with more complex matters, humans want to talk to other humans. Not only do they want their problem resolved, but they want to feel like they’ve been heard. They want empathy. This is something AI cannot do.

AI is inevitable. In fact, you’re probably already using it without being aware of it. There is no doubt that the proper application of AI will make us more efficient as a society, but relying blindly on it is inadvisable.

We must be aware of our biases when creating new technologies and do everything in our power to ensure they aren’t baked into algorithms. As more functions are handed over to AI, we must also remember that sometimes there’s no substitute for human-to-human interaction.

After all, we’re only human.

Allan Waddell is founder and Co-CEO of Kablamo, an Australian cloud based tech software company.



Why the future is workless


Predictions for the future of work can make grim reading – depending on your point of view. Many of our jobs are being automated out of existence; on the other hand, it looks like we will have a lot more free time.

Writer and Doctor of Philosophy Tim Dunlop says people and governments are going to have to rethink how we support ourselves when there isn’t enough paid work to go around.

Dunlop does not subscribe to the view, often put forward by economists, that technology will generate enough jobs to replace the ones destroyed by robotics and artificial intelligence.

“I don’t know if that’s necessarily true in the medium term… I think there’s going to be a really nasty transition for more than a generation,” says Dunlop, the author of Why the Future is Workless and The Future of Everything.

“We are going through this huge period of transition at the moment and we don’t really know where it’s heading. We’re at the bottom of the curve, in terms of what [new technologies] are going to be capable of.”



Dunlop says framing questions about the future of work as “will a robot take my job?” is reductive. Instead, we should be looking at what sorts of jobs will be available, and what the conditions will be for the jobs on offer.

“If we are working less hours, or there is less work, or the economy just needs fewer people, and then we don’t have a technology problem, we’ve got a distribution problem,” he says.

The “hollowing out” of the job market means that middle-skilled jobs are disappearing because they can be automated. Trying to “upskill” people who have been displaced, or redirect them into jobs that need a human touch (such as caring jobs) is not an answer for everyone.

“Not everybody can have a high-skill, high-paid sort of job. You need those middle-level jobs as well. And if you don’t have those, then society’s got a problem,” he says.

Dunlop says one way of addressing the issue is a universal basic income: where everybody gets a standard payment to cover their basic needs.

“I don’t think you can rely on wages to distribute wealth in an equitable way, in the way that might have been in the recent past,” he says.

The idea of a Universal Basic Income has been around since the 16th century and is unconditional – not based on household income.

In Australia, the single-person pension (now just over $24,000 per annum) might be seen as an appropriate level of payment, according to Dunlop, in an article written for the Inside Story website.

“It is basic also in the sense that it provides an income floor below which no one can fall. The payment is unconditional in that no one has to fulfil any obligations in order to receive it, and even if you earn other income you’re still eligible. That makes it universal, equally available to the poorest member of society as it is to the start-up billionaire,” he writes.

Much of the discomfort often voiced about such a scheme centres around the idea that people are being paid to “do nothing” and that it removes the incentive to work.

However, trials show that in developing countries, people use the money to improve their situation, starting businesses, sending children to school and avoiding prostitution. In Europe and Canada, people receiving the payment tend to stay in their jobs and entrepreneurship increases.

Trials of the Universal Basic Income are now taking place globally – from Switzerland to Canada to Kenya – but most are limited to the unemployed or financially needy, rather than being universal.

Dunlop says that, rather than worrying about whether people “deserve” the payment, we should accept the concept of “shared citizenship”. Whether we do paid work, or not, we are all contributing to the overall wealth of society.

Inequality arises when wealth is divided up between those who do paid work and those who own the means of production. With a Universal Basic Income, everybody’s contribution is valued and people get a benefit from the roles they play in the formal and informal economy, he says.

So what will we be doing in the future if we are not doing paid work? Dunlop says we will still have our hobbies, passions and families – and we can derive just as much (if not more) meaning from those things as we do from our jobs.

We are already seeing evidence of efforts to reduce working hours: companies trying four-day work weeks (paid for five), the Swedish Government trialling a six-hour workday, and a French law banning work emails after hours.

Dunlop says a “work ethic” culture makes it hard for these reforms to succeed and unions tend to see a push for reduced hours as a “trojan horse” threat of increasing casualisation and insecure work.

“That’s where things like the French rule about emails probably comes in handy. It sets some parameters around what society sees as acceptable and maybe it needs some government leadership in this area.”

This article was originally written for The Ethics Alliance.

Join the conversation

What will we be doing in the future if we are not doing paid work?

Climate change and the next migrant crisis

Are we prepared for climate change and the next migrant crisis?


A powerful infographic published in 2014 predicted how many years it would take for a world city to drown.

It used data from NASA, Sea Level Explorer, and the Intergovernmental Panel on Climate Change. Venice will be the first to go under, apparently, its canals rising to wetly throttle the city of love. Amsterdam is set to follow, Hamburg next.

Other tools play out the encroachment of rising tides on our coasts. This one, developed by EarthTime, shows Sydney airport as a large puddle if temperatures increase by four degrees. There’s also research suggesting our descendants may one day look down to see fish nibbling on the Opera House sails.

Climate change refugees will become reality

Sea level rise is just one effect of anthropogenic climate change that would make a place uninhabitable or inhospitable to humankind. It’s also relatively slow. Populations in climate vulnerable hotspots face a slew of other shove factors, too.

Already, we are seeing a rising frequency of extreme weather events. Climate change was linked to increasingly destructive tropical cyclones in a report published in Nature last year, and Australia’s Climate Council attributed the same to earlier and more dangerous fire seasons. Rapidly changing ecosystems will impact water resources, crop productivity, and patterns of familiar and unfamiliar disease. Famine, drought, poverty and illness are the horsemen saddling up.

Some will die as a result of these events. Others, if they are able, will choose to stay. The far-sighted and privileged may pre-empt them, relocating in advance of crisis or discomfort.

These migrants can be expected to move through the ‘correct’ channels, under the radar of nativist suspicion. (‘When is an immigrant not an immigrant?’ asks Afua Hirsch. ‘When they’re rich’.)

But many more will become displaced peoples, forcibly de-homed. Research estimates this number could be anywhere between 50 million and 1 billion in the 21st century. This will prompt new waves of interstate and international flows, and a resultant redistribution and intensification of pressures and tensions on the global map.

How will the world respond?

Where will they go? What is the ethical obligation of states to welcome and provide for them? With gross denialism characterising global policies towards climate change, and intensifying hostility locking down national borders, how prepared are we to contend with this challenge to come?

“You can’t wall them out,” Obama recently told the BBC. “Not for long.”

While interstate climate migration (which may already be occurring in Tasmania) will incur infrastructural and cultural problems, international migration is a whole and humongous other ethical conundrum. Not least because currently, climate change migrants have almost no legal protections.

Is a person who moves because of a sudden, town levelling cyclone more entitled to the status of climate migrant or refugee (and the protection it affords) than someone who migrates as a result of the slow onset attrition of their livelihood due to climate change?

Who makes the rules?

Does sudden, violent circumstance carry a greater ethical demand for hospitality than if, after many years of struggle, a Mexican farmer can no longer put food on the table because his land has turned to dust? Does the latter qualify as a climate or economic migrant, or both?

Somewhat ironically (and certainly depressingly), the movement of people to climate ‘havens’ will place stress on those environmental sanctuaries themselves, potentially leading to concentrated degradation, pollution and threat to non-human nature. (On the other hand, climate migration could allow for nature to reclaim the places these migrants have left.)

There is also the argument that, once migrants from developing countries have been integrated into a host country, their carbon footprint will increase to resemble that of their new fellow citizenry – resulting in larger CO2 emissions. From this perspective, put forward by Philip Cafaro and Winthrop Staples, it is in the interests of the planet for prosperous countries to limit their welcome.

Not that privileged populations need much convincing. Jealous fear of future scarcity, a globalisation inflamed resentment towards the Other, a sense that modernity has failed to deliver on its promise of wholesale bounty: all these are conspiring to create increasingly tribalised societies that enable the xenophobic agendas of their governments. A recent poll showed that 46 percent of Australians believe immigration should be reduced, a percentage consistent with attitudes worldwide.



A divided world

In the US, there’s Trump’s grand ‘us vs them’ symbol of a wall. As reported in the Times, German lawmakers are comparing refugees to wolves. In Italy, tilting towards populism and the right, a mayor was arrested after transforming his small town into a migrant sanctuary.

Closer to home, in a country where the 27 years without recession could be linked to immigration, there’s Scott Morrison’s newly proposed immigration cuts. There’s Senator Anning blaming the Christchurch massacre on Muslim immigration. There’s the bipartisan support for allowing the prospects, wellbeing and mental health of asylum seekers to deteriorate to such an extent that the UN human rights council described it as ‘massive abuse’.

Yet the local effects of climate change don’t have a local origin. Causality extends beyond borders, piling miles high at the feet of industrialised countries. Nations like the US and Australia enjoy high standards of living largely because we have been pillaging and burning fossil fuels for more than a century. Yet those least culpable will bear the heaviest cost.

This, argues the author of a paper published in Ethics, Policy and Environment, warrants a different ethical framework than that which applies to other kinds of migration. He concludes that industrialised nations “have a moral responsibility … to compensate for harms that their actions have caused”.

This responsibility may include investing in less developed countries to mitigate climate change effects, writes the author. But it also morally obliges giving access, security and residence to those with nowhere else to go.

Join the conversation

What level of protection do you expect from another country?


Australia, it’s time to curb immigration


A majority of Australians welcome immigrants. So why do opinion polls of young and old voters alike, across the political divide, now find majority support for reducing our immigration intake?

Perhaps it could be for the same reason that faith in our political system is dwindling at a time of strong economic growth. Australia is the ‘lucky country’ that hasn’t had a recession in the last 28 years.

Yet we’ve actually had two recessions in this time if we consider GDP on a per-capita basis. This, combined with stagnant real wage growth and sharp increases in congestion and the price of housing and electricity in our major cities, could explain why the Australian success story is inconsistent with the lived experience of so many of us.


The decline of the Australian dream?

Our current intake means immigration now acts as a Ponzi scheme.

The superficial figure of a growing headline GDP fuelled by an increasing population masks the reality of an Australian dream that is becoming increasingly out of reach for immigrants and native-born Australians alike.

We’ve been falsely told we’ve weathered economic calamities that have stunned the rest of the world. When taken on a per-capita basis, our economy has actually experienced negative growth periods that closely mirror patterns in the United States.

We’re rightly told we need hardworking immigrants to help foot the bill for our ageing population by raising productivity and tax revenue. Yet this benefit is offset when their ageing family members or other dependents are brought over. Since preventing them from doing so may be cruel, surely it’s fairer to lessen our dependence on their intake if we can?

A lack of infrastructure

Over 200,000 people settle in Australia every year, mostly in the major cities of Sydney and Melbourne. That’s the equivalent of one Canberra or greater Newcastle area a year.

Unlike in the United States, most economic opportunities here are concentrated in a few major cities dotting our shores. Combine this with the failure of successive state and federal governments to build the infrastructure and invest in the services needed to cater for record population growth driven largely by immigration, and with a failure to rezone for an appropriate supply of land, and the result is plain: our schools are becoming crowded, our real estate is prohibitively expensive, our commutes are longer and more depressing, and our roads are badly congested.

Today, infrastructure is being built, land is finally being rezoned to accommodate higher population density and more housing stock in the outer suburbs, and the Prime Minister has made regional job growth one of his major priorities.

But these issues should have been fixed ten years ago, and it’s increasingly unlikely that today’s projects will be executed efficiently and effectively enough to catch up to where they need to be should current immigration intake levels continue in the years to come.

Our governments have proven to be terrible central planners, often rejecting or watering down the advice of independent expert bodies like Infrastructure Australia and the Productivity Commission due to political factors.

Why would we trust them to not only get the answer right now, but to execute it correctly? Our newspapers are filled daily with stories about light rail and road link projects that are behind schedule.

All of it paid for by taxpayers like us.

Foreign workers or local graduates?

Consider also the perverse reality of foreign workers brought to our shores to fill supposed skill gaps who then struggle to find work in their field and end up in whatever job they can get.

Meanwhile, you’ll find two separate articles in the same week. One from industry groups cautioning against cutting skilled immigration due to shortages in the STEM fields. The other reporting that Australian STEM graduates are struggling to find work in their field.

Why would employers invest resources in training local graduates when there’s a ready supply of experienced foreign workers? What incentive do universities have to step in and fill this gap when their funding isn’t contingent on employability outcomes?

This isn’t about nativism. The immigrants coming here certainly have a stake in making sure their current or future children can find meaningful work and obtain education and training to make them job ready.

There’s only one way to hold our governments accountable so that the correct, and sometimes tough, decisions needed to sustain our way of life and make the most of the boon immigration has been for this country are made: wean them off their addiction to record immigration levels.

Lest the Ponzi scheme collapse.

And lest frank conversations about the quantity and quality of immigration, which the sensible centre of politics once held, increasingly become the purview of populist minor parties, who have experienced a resurgence on the back of widespread, unanswered frustrations about unsustainable immigration that we are ill-prepared for.

Join the conversation

Do you have a right to protect your good fortune?


People with dementia need to be heard – not bound and drugged


It began in Oakden. Or, it began with the implosion of one of the most monstrously run aged care facilities in Australia, as tales of abuse and neglect finally came to light.

That was May 2017. Two years on, we are in the midst of the first Royal Commission into Aged Care Quality and Safety, announced by the Scott Morrison government.

The first hearings began this year in Oakden’s city of Adelaide. They have seen countless brave witnesses come forward to share their experiences of what it’s like to live within the aged care system or see a loved one deteriorate or die – sometimes peacefully, sometimes painfully – within it.

In May, the third hearing round will take place in Sydney. This round will hear from people in residential aged care, with a focus on people living with dementia – who make up over 50 percent of residents in these facilities.

With our burgeoning ageing population, the number of people being diagnosed with dementia is expected to increase to 318 per day by 2025 and more than 650 per day by 2056.

Encompassing a range of different illnesses, including Alzheimer’s disease, vascular dementia and Lewy body disease, its symptoms are particularly cruel, dissolving intellect, memory and identity. In essence, dementia describes the gradual estrangement of a person from themselves – and from everyone who knew them.

It is one of the most prevalent health problems affecting developed nations today – and one of the most feared. Contrary to widespread belief, one in 15 sufferers are in their thirties, forties and fifties.


Physical restraints

How do you manage these incurable conditions? How can you humanely care for the remnants of a person who becomes more and more unrecognisable?

One thing the Royal Commission has made clear: you don’t do it by defaulting to dehumanising mechanisms of restraint.

Unlike in the UK or the US, there are currently no regulations here around the use of restraints in aged care facilities. Restraint is commonly resorted to by aged care workers if a patient displays physical aggression, or is a danger to themselves or others.

Yet it is also used in order to manage patients perceived as unruly in chronically understaffed facilities, when the risk of leaving them unsupervised is seen to be greater than the cost of depriving them their free movement and self-esteem. The problem of how to minimise harm in these conditions is an ongoing and high-pressure dilemma for staff.

Readers may remember the distressing footage from January’s 7.30 Report, in which dementia patients were seen sedated and strapped to chairs. One of them was the 72-year-old Terry Reeves. Following acts of aggression towards a male nurse, he was restrained for a total of 14 hours in a single day. His wife, however, had authorised that her husband be restrained with a lap belt if he was “a danger to himself or others”.

Maree McCabe, CEO of Dementia Australia, is vocal about why physical restraints should only be used as a last resort.

“We know from the research that physical restraint overall shows that it does not prevent falls,” she says. “In fact it may cause injury, and it may cause death.”

While there are circumstances where restraint may be appropriate, McCabe says, “it is not there as a prolonged intervention”. Using it that way, she says, “is an infringement of their human rights”.

After the 7.30 program aired, and one day before the Royal Commission hearings began, the federal government committed to stronger regulations around restraint, including a requirement that homes document the alternatives they tried first.

Restraint by drugging

Another kind of restraint which has come into focus through the Royal Commission is chemical restraint. Psychotropic medication is currently prescribed to 80 percent of people with dementia in residential care – but it is only effective 10 percent of the time.

“We need to look at other interventions,” says McCabe. “The first to look at is: why is the person behaving in the way that they are? Why are they responding that way? It could be that they’re in pain. It could be something in the environment that is distressing them.”

She notes people with dementia often have “perceptual disturbances” – “things in the environment that look completely fine to us might not to someone living with dementia”. Wouldn’t you act out of character if your blue floor suddenly became a miniature sea, or a coat hanging on the door turned into the Babadook?

“It’s about people understanding what it’s like to stand in the world of people living with dementia, and simulating that experience for them,” says McCabe.

Whether through physical force or prescription, a dependence on restraint shows the extent to which dementia is misunderstood, to the detriment of the autonomy and dignity of sufferers. This misunderstanding is compounded by the fact that dementia is often present among other complex health problems.

Yet, despite what the media may sensationally suggest, the aged care sector isn’t staffed by the callous or malicious. It is filled with good people who are often overstretched, emotionally taxed and exhausted.

Dementia Australia is advocating for mandatory training on dementia for all people who work in aged care. This covers residential aged care, but could also extend to hospitals. Crucially, it encompasses community workers, too.

“Of the 447,000 Australians living with dementia, 70 percent live in the community and 30 percent live alone,” notes McCabe. “It’s harder to monitor community care, it’s less visible and less transparent. We have to make sure that the standards are across the board.”

It is only through listening to people living with dementia – recognising that while yes, they have a degenerative cognitive disease, they deserve to participate in the decision-making around their life and wellbeing – that our approach to it has evolved. Previously, people believed that it was dangerous to allow sufferers to cook, even to go out unaccompanied.

Likewise, it is crucial that we continue to afford people with dementia the full rights of personhood, however unfamiliar they may become. Only then can meaningful reform be made possible.

Besides, if for no other reason (and there are many other reasons), action is in our own selfish interest. The chances, after all, that you or someone you love will develop dementia are high.

Join the conversation

What is a better solution to care?


Is technology destroying your workplace culture?


If you were to put together a list of all the buzzwords and hot topics in business today, you’d be hard pressed to leave off culture, innovation or disruption.

They might even be the top three. In an environment of constant technological change, we’re continuously promised a new edge. We can have sleeker service, faster communication or better teamwork.

This all makes sense. Technology is the future of work. Whether it’s remote work, agile workflows or AI-enhanced research, we’re going to be able to do more with less, and do it better.

For organisations that are doing good work, that’s great. And if those organisations are working for the good of society (as they should), that’s great for us all.

Without looking a gift horse in the mouth though, we should be careful that technology enhances our work rather than distracts us from it.

Most of us can probably think of a time when our office suddenly had to work with a totally new, totally pointless bit of software. Out of nowhere, you’ve got a new chatbot, all your info has been moved to ‘the cloud’ or customer emails are now automated.

This is usually the result of what the comedian Eddie Izzard calls “techno-joy”. It’s the unthinking optimism that technology is a cure for all woes.

Unfortunately, it’s not. Techno-joyful managers are more headache than helper. But more than that, they can also put your culture – or worse, your ethics – in a tricky spot.

Here’s the thing about technology. It’s more than hardware or code. Technology carries a set of values with it. This happens in a few ways.


All technology works through a worldview we call ‘techno-logic’. Basically, technology aims to help us control things by making the world more efficient and effective. As we explained in our recent publication, Ethical by Design:

Techno-logic sees the world as though it is something we can shape, control, measure, store and ultimately use. According to this view, techno-logic is the ‘logic of control’. No matter the question, techno-logic has one overriding concern: how can we measure, alter, control or use this to serve our goals?

Whenever you’re engaging with technology, you’re being invited and encouraged to see the world in a really narrow way. That can be useful – problem solving happens by ignoring what doesn’t matter and focussing on what’s important. But it can also mean we ignore stuff that matters more than just getting the job done as fast or effectively as we can.

A great example of this comes from Up in the Air, a film in which Ryan Bingham (George Clooney) works for a company that specialises in sacking people. When there are mass layoffs to be made, Bingham is there. Until technology comes to call. Research suggests video conferencing would be cheaper and more effective. Why fly people around America when you can sack someone from the comfort of your own office?

As Bingham points out, you do it because sometimes making something efficient destroys it. Imagine going on an efficient date or keeping every conversation as efficient as possible. We’d lose something essential, something rich and human.

With so much technology available to help with recruitment, performance management and customer relations, we need to be mindful that technology is fit for purpose. It’s very easy for us to be sucked into the logic of technology until suddenly, it’s not serving us, we’re serving it. Just look at journalism.

Drinking the affordance Kool-Aid

Journalism has always evolved alongside media. From newspaper to radio, podcasting and online, it’s a (sometimes) great example of an industry adapting to technological change. But at times it over-adapts, and the technological cart starts to pull the journalistic horse.



Today, online articles are ‘optimised’ to drive engagement and audience. This means stories are designed to hit a sweet spot in word count to ensure people don’t tune out, they’re given titles that are likely to generate clicks and traffic, and the kinds of things people are likely to read tend to get more attention.

A lot of that is common sense, but when it turns out that what drives engagement is emotion and conflict, this can put journalists in a bind. Are they impartial reporters of truth, lacking an audience, or do they massage journalistic principles a little to attract the biggest audience they can?

I’ll leave it to you to decide which way journalism as an industry has gone. What’s worth noting is that many working in media weren’t aware of some of these changes whilst they were happening. That’s partly because they’re so close to the day-to-day work, but it can also be explained by something called ‘affordance theory’.

Affordance theory suggests that technological design contains little prompts suggesting to users how they should interact with it. These prompts invite users to behave in certain ways and not others. For example, Facebook makes it easier for you to respond to an article with feelings rather than thinking. How? All you need to do to ‘like’ a post is click a button, but typing out a thought requires work.

Worse, Facebook doesn’t require you to read an article at all before you respond. It encourages quick, emotional, instinctive reactions and discourages slow thinking (through features like automatic updates to feeds and infinite scroll).

These affordances are the water we swim in when we’re using technology. As users, we need to be aware of them, but we also need to be mindful of how they can affect purpose.

Technology isn’t just a tool, it’s loaded with values, invitations and ethical judgements. If organisations don’t know what kind of ethical judgements are in the tools they’re using, they shouldn’t be surprised when they end up building something they don’t like.

Join the conversation

Are we capable of creating technology without negative outcomes?

Blockchain: Some ethical considerations

The development and application of blockchain technologies gives rise to two major ethical issues to do with:

  • Meeting expectations – in terms of security, privacy, efficiency and the integrity of the system, and
  • The need to avoid the inadvertent facilitation of unconscionable conduct: crime and oppressive conduct that would otherwise be offset by a mediating institution

Neither issue is unique to blockchain. Neither is likely to be fatal to its application. However, both involve considerable risks if not anticipated and proactively addressed.

At the core of blockchain technology lies the operation of a distributed ledger in which multiple nodes independently record and verify changes on the block. Those changes can signify anything – a change in ownership, an advance in understanding or consensus, an exchange of information. That is, the coding of the blockchain is independent and ‘symbolic’ of a change in a separate and distinct real-world artefact (a physical object, a social fact – such as an agreement, a state of affairs, etc.).
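The tamper-evidence this verification relies on can be sketched with a toy hash-linked ledger in Python. This is an illustrative sketch only – the function names and block structure are assumptions for the example, not any real blockchain implementation, and it omits the distributed consensus that real systems add on top:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Any node can independently re-check every link in the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, {"transfer": "title deed", "to": "A"})
append_block(ledger, {"transfer": "title deed", "to": "B"})
assert verify(ledger)

# Tampering with an earlier record invalidates every later link
ledger[0]["record"]["to"] = "C"
assert not verify(ledger)
```

Because each block embeds the hash of its predecessor, altering any historical record breaks the chain from that point forward – which is why multiple independent nodes, each holding a copy, can agree on the ledger’s state without trusting a central authority.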

The potential power of blockchain technology lies in a form of distribution associated with a technically valid equivalent of ‘intersubjective agreement’. Just as in language the meaning of a word remains stable because of the agreement of multiple users of that word, so blockchain ‘democratises’ agreement that a certain state of affairs exists. Prior to the evolution of blockchain, the process of verification was undertaken by one (or a few) sources of authority – exchanges and the like. They were the equivalent of the old mainframe computers that dominated the computing landscape until challenged by PCs enabled by the internet and world wide web.

Blockchain promises greater efficiency (perhaps), security, privacy and integrity by removing the risk (and friction) that arises out of dependence on just one or a few nodes of authority. Indeed, at least some of the appeal of blockchain is its essentially ‘anti-authoritarian’ character.

However, the first ethical risk to be managed by blockchain advocates is that they not over-hype the technology’s potential and then over-promise in terms of what it can deliver. The risk of doing either can be seen at work in an analogous field – that of medical research. Scientists and technologists often feel compelled to announce ‘breakthroughs’ that, on closer inspection, barely merit that description. Money, ego, peer group pressure – these and other factors contribute to the tendency for the ‘new’ to claim more than can be delivered.

“However, the first ethical risk to be managed by blockchain advocates is that they not over-hype the technology’s potential and then over-promise in terms of what it can deliver.”

It’s not just that this can lead to disappointment – very real harm can befall the gullible. One can foresee an indeterminate period of time during which the potential of blockchain is out of step with what is technically possible. It all depends on the scope of blockchain’s ambitions – and the ability of the distributed architecture to maintain the communications and processing power needed to manage and process an explosion in blockchain related information.

Yet, this is the lesser of blockchain’s two major ethical challenges. The greater problem arises in conditions of asymmetry of power (bargaining power, information, kinetic force, etc.) – where blockchain might enable ‘transactions’ that are the product of force, fear and fraud. All three ‘evils’ destroy the efficiency of free markets – and from an ethical point of view, that is the least of the problems.

“The greater problem arises in conditions of asymmetry of power (bargaining power, information, kinetic force, etc.) – where blockchain might enable ‘transactions’ that are the product of force, fear and fraud.”

One advantage of mediating institutions is that they can provide a measure of supervision intended to identify and constrain the misuse of markets. They can limit exploitation or the use of systems for criminal or anti-social activity. The ‘dark web’ shows what can happen when there is no mediation. Libertarians applaud the degree of freedom it accords. However, others are justifiably concerned by the facilitation of conduct that violates the fundamental norms on which any functional society must be based. It is instructive that crypto-currencies (based on blockchain) are the media of exchange in the rankest regions of the dark web.

So, how do the designers and developers of blockchain avoid becoming complicit in evil? Can they do better than existing mediating institutions? May they ‘wash their hands’ even when their tools are used in the worst of human deeds?

This article was first published here. Dr Simon Longstaff presented at The ADC Global Blockchain Summit in Adelaide on Monday 18 March on the issue of trust and the preservation of ethics in the transition to a digital world. 

Join the conversation

Can blockchain designers prevent dark-web misuse?