Free speech has failed us

I used to believe in free speech. I used to believe in the power of rational discourse.

I used to believe in what Jürgen Habermas called the “unforced force” of the better argument. I used to believe in John Stuart Mill’s riff that free speech is what keeps superstition and stifling tradition at bay. I used to believe the solution to an abundance of bad speech was more speech.

However, I’m currently going through a somewhat unsettling process of reconsidering these deeply held views. The last two decades have been one long demonstration of the failure of public discourse to drive towards better solutions to the problems we face.

I hardly need to cite the failure of public discourse to prevent the folly of the wars following September 11 and the catastrophic regional destabilisation they caused, or to reform the economic institutions that caused the Global Financial Crisis, or to improve the response to it that ended up bailing out the perpetrators at the expense of the victims, or to bring peace to the escalating culture wars that are fracturing nations, or to prevent the national self-immolation that is Brexit, or to stop the election of a dangerously ill-informed narcissist of dubious moral character to the Presidency of the United States, or to combat the ongoing misinformation campaign that is resisting action on one of the world’s most urgent challenges: dealing with climate change. And that’s not even an exhaustive list.

Free speech has failed to live up to its liberal promise. It has failed to float the best reasons and arguments to the top and sink the worst ones to the bottom. It has failed to prevent those who actively work to pervert speech from winning over the voting public. It has empowered those who would wilfully employ their reach to promote their ideological agenda. Meanwhile, those who stick to the rules of rational discourse are left shuffling footnotes and politely yelling reasons into the void.

So when The Conversation announced recently that it will be taking a “zero-tolerance approach to moderating climate change deniers, and sceptics” in the comments on its articles, I was not shocked.

Only a year ago, I would have agreed with the ABC’s Media Watch that “The Conversation is wrong to ban anyone’s views, unless they’re abusive, hateful or inciting violence.” I would have defended the right of the ignorant, misinformed and outright malicious to say their piece and have it shredded by other less ignorant, more informed and hopefully charitable readers.

After all, what’s the alternative? We shut down speech and enable one particular narrative to dominate? Who’s in charge of that narrative? Can we trust them to have more reliable access to the truth? What do we do if that narrative is wrong?

I’m still painfully aware of the importance of these questions, and how hard they are to answer in any satisfying way. And yet, the evidence is now clear to me that more speech is not always in the best interests of truth or humanity.

In good faith

What has fundamentally changed in recent months is the way I think about free speech and the possibility of rational discourse in general.

Free speech is not an absolute good; it is not an end unto itself. Free speech is an instrumental good, one that promotes a higher good: seeking the truth. That’s the canonical account from John Stuart Mill that still underlies much of our thinking around free speech today.

But free speech only fulfils its truth-seeking function when all agents are speaking in good faith: when they all agree that the truth is the goal of the conversation, that the facts matter, that there are certain standards of evidence and argumentation that are admissible, that speakers have a duty to be open to criticism, and that there are many modes of discourse that are inadmissible, such as intimidation, insults, threats and the wilful spread of misinformation. Mill assumed all too readily that such good will was commonplace.

This doesn’t mean that all speech is truth-seeking. In fact, most everyday speech is not about the truth at all. Usually the correct answer to “does my bum look big in this?” is “no”, irrespective of the truth. Most speech is about reinforcing relationships, establishing identity or passing the time. Some speech is about subjective issues or values which may not admit true or false answers. Free speech protects this too. But for some speech, the facts do matter, and that’s where free speech is failing us.

In order for truth-seeking free speech to work, we need strong social norms that promote good faith. And it’s precisely these norms that have broken down in recent years (not that they were ever very strong). And this is because humans are not nearly as rational as we (or Mill) would like to think. And we’re painfully easy to manipulate.

When I teach critical thinking, I give the usual cautionary spiel about not flinging out argumentative fallacies, like the old ad hominem, ad populum or slippery slope. But I have to give that spiel precisely because these fallacies work. When employed effectively, you’ll “win” a lot more arguments using fallacies than by playing by the rules – if you consider persuading/intimidating/misleading someone into accepting your point of view a “win.”

I similarly caution students to be wary of the power of appeals to emotion and the force of social pressure, and to be mindful of cognitive biases that can lead our thinking astray. Again, these are important because they are the mechanisms that actually motivate many of our beliefs and that can be most effectively used to persuade others.

What I don’t say – but maybe I should – is that critical thinking is useful when it comes to policing one’s own thoughts, but it’s pitifully impotent when it comes to changing others’ minds. If you start throwing syllogisms across the dinner table, or politely point out that someone is affirming the consequent at the pub, or hope that revealing the contradictions embedded in someone’s assumptions in a comment thread on Facebook is going to change their mind, you’re quickly going to find out you’ve joined the ranks of those politely yelling reasons into the void. I know only too well what that feels like.

Anti-social media

Compounding this is the fact that we have gone to great lengths to build new technologies that promote the worst features of bad faith discourse. If you wanted to design a means of communication that made rational argumentation as difficult as possible yet rewarded the use of every argumentative fallacy under the sun, you’d be hard pressed to top Twitter. It only allows enough characters to express conclusions, not premises. The Like button gives the same weight to the expert as to the ignoramus. Status is earned through number of followers, which is like institutionalising the ad populum fallacy.

Facebook is just as bad for different reasons. Not long ago, if you had a penchant for conspiracy theories, racial vilification or fringe anti-science theories, you’d be hard pressed to find enough like-minded nutjobs in your neighbourhood to hold a bi-monthly tin foil hat dinner. Now, you can join with thousands of like-minded cranks from all around the world on a daily basis to reinforce and radicalise your views.

There’s also evidence that a group of people with diverse views will tend to gravitate towards the most extreme views in the group. And that people who believe one conspiracy theory tend to believe in and share many. And that cultivating outrage only promotes more animosity towards one’s perceived opponents and encourages greater retributive invective and bad faith.

There’s also abundant evidence that people find falsehoods to be more credible the more often they encounter them, even if they’re posted by someone who’s debunking them. As such, falsehoods spread more easily on social media than facts. So the outrage industry is self-sustaining, as even those raging against falsehoods share them with their friends, thus spreading the poison even further. (This is why I urge everyone to STOP SHARING GARBAGE on social media, even if you’re intending to debunk it, but particularly if it just pisses you off. The beast feeds on your friends’ eyeballs.)

Add the well-known filter bubble effect, which screens out dissenting views and reinforces in-group identity, and bad faith is all but guaranteed. Free speech in this context only facilitates a slide away from the truth.

That being so, it’s with some alarm that I note that Facebook is now exempting politicians from its normal community standards – the same standards that are intended to prevent bad faith discourse like hate speech and harassment. According to Facebook’s new VP of Global Affairs and Communications, the former UK politician Nick Clegg, it’s because Facebook’s crew “are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.”

Clegg invokes a tennis analogy to describe Facebook’s approach to speech: “our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.”

Here’s a better analogy. The court was never flat. Human psychology means it started off warped, and it’s twisted even more by the technology created by Facebook itself. The players know this, and some are exploiting it to their advantage, and to humanity’s disadvantage. If free speech is meant to do anything at all positive, then it takes active intervention to flatten the court, not a hands-off wilful abdication of responsibility.

Prediction: this policy is going to be a disaster, and Facebook will revoke it with grievous apologies either before or after the 2020 US Presidential election, after a spate of heinous posts by politicians is left to fester on our feeds.

What worries me the most is that those who understand the failings of free speech the best are the ones who know how to use speech to manipulate us to their ends. It’s the political operatives, the Cambridge Analyticas, the climate change deniers, the alt-right meme machines, the political demagogues. They are the ones who benefit the most from free speech, not the experts, not the scientists, not the academics, not those individuals who are willing to do some homework and engage in good faith with an open mind.

So, to those who care about facts, evidence and the power of rational discourse to help us arrive at the truth, I say: WAKE UP. We’re losing, and with us, truth and humanity. The mongers of misinformation and agents of bad faith are driving us into the dirt, and we’re down here tilling the very soil that allows the weeds to flourish.

Shut it down

Free speech has failed us. That’s why I think The Conversation is justified in banning certain forms of well established misinformation on its site, whether it’s delivered by trolls or people who are misguided enough to believe it.

The Conversation as a website is predicated on the importance of expertise. It only publishes articles by qualified academics, and only allows them to write on areas in which they are experts. It encourages a diversity of views and debate amongst its authors. I know, because (full disclosure) I used to work there, editing the science and technology section. True to its name, The Conversation also wants to encourage discussion and debate amongst readers. But it is in no way obliged to give a platform to anyone. It is able to determine the standards of discourse in its own domain.

So if the “conversation” in the comments section slips well outside the bounds of respect for evidence, reason or good faith discourse that The Conversation seeks to promote, then it ought to be allowed to disqualify it. It’s not like the readers can’t spread their views on other platforms with lower standards. Sadly, there’s an abundance of those around these days. Freedom of speech doesn’t require everyone to allow just anyone to walk into their garden and plant whatever they want.

That said, I am convinced that we need some forums where free speech can operate in an expansive way. At the moment, I think universities are the best candidates, which is why I resist the deplatforming of academics or speakers on campus, no matter how controversial their views. Universities need to get better at managing this, because they’re one of the last bastions of truly open enquiry.

But I’m increasingly coming to believe that we need to get real about the failures of free speech in many public forums, and fight back against those who would pollute discourse with bad faith. I’m not convinced we should go as far as Herbert Marcuse, who argued that civil discussion was futile and that all that was left was violence. But I do think we can’t maintain the naive position that “more speech is always a good thing.”

The paradox of free speech

I’ll be the first to say I don’t know what to do. But I have some initial ideas. I suspect there’s a short term and a long term solution. In the long term, we want to rehabilitate public discourse by encouraging good faith. That’s a generational project, and one that will require the rebuilding of a great deal of social capital that has been degrading over decades.

This is not necessarily a project that is conducted through rational discourse itself. Rather, it’s something that requires constructive social discourse. It requires relationship building, the restoration of trust, the separation of belief from identity, and the buttressing of social norms that make rational discourse a possibility.

And in the short term, I’m increasingly open to shutting down speech that is not only conducted in bad faith, but is polluting discourse itself, thereby encouraging more bad faith. The problem of dealing with speech that corrupts speech is related to Karl Popper’s famed “paradox of tolerance.” If we believe that tolerance is good, how should we treat those who are intolerant? If we tolerate them, won’t they end tolerance? And if we don’t tolerate them, doesn’t that make us intolerant?

The solution to this paradox is elegantly expressed by Peter Godfrey-Smith and Benjamin Kerr in an article published on – ironically enough – The Conversation. They argue that protecting “first-order” tolerance of individual actions – say, certain religious practices or having a homosexual relationship – means we must be “second-order” intolerant of actions that are themselves intolerant of these actions, and that it’s in no way contradictory or hypocritical to be so. That’s why we have anti-discrimination laws that are explicitly intolerant of intolerance.

Similarly, in order to protect the power of free speech to help seek the truth, I think we need to be more intolerant of speech that subverts the very possibility of speech to seek the truth. Crucially, that doesn’t mean shutting down speech by people who are just ignorant or wrong. What it does mean is shutting down bad faith speech that muddies the waters, that spreads misinformation, that threatens or coerces, or that exploits known psychological biases to mislead.

There’ll be a price to shutting down certain types of discourse in some forums. Some legitimate speech may be hampered. But if the price of not doing it is more polluted discourse, more bad faith, more wars, more Brexits, more Trumps, and a world that is more than 4 degrees warmer, then I know which side of the trade-off I prefer.

I still believe in the potential of free speech to seek the truth. I still believe it ought to be practised in some forums. I still believe it’s something worth fighting to enable and preserve. But I think you only need to look at the last two decades – and speculate about the two decades to come – to realise that our current approach to free speech has failed. And the stakes are high.


Ethics Explainer: Peter Singer on charitable giving

Most people believe it is a good idea to help out others in need, if and when we can. If someone falls over in front of us, we usually stop to see if they need a hand or to check if they are OK.

Donating to charity is also considered to be helping others in need, but we may not always see the person we are helping in this case. Even so, charitable donations are viewed as praiseworthy in our society. We receive a sticker for placing spare change into a coin collection tin, and our donations are tax deductible.

Yet most people see donating to charity as a ‘nice thing to do’, but perhaps not a ‘duty’, obligation or requirement. In philosophical terms, it is ‘supererogatory’, meaning that it is praiseworthy, but above and beyond the call of duty.

However, Peter Singer defends a stronger stance. He argues that we should help others – however we can. All of us. This may look different for different people. It could involve donating money or time, signing petitions, or passing along old clothes to those who need them, for example.

If we can help, then we should, Singer argues, because it results in the greatest overall good. The small efforts of those who can do something greatly reduce the pain and suffering of those who need welfare.

In order to illustrate this argument, Singer provides us with a compelling thought experiment.

The ‘drowning child’ thought experiment

In his 1972 article, ‘Famine, Affluence, and Morality’, Singer starts with a basic principle:

“if it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do it.”

This seems reasonable. He backs this claim up with the following concrete example:

“An application of this principle would be as follows: if I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing”.

Obviously, we agree, we should save the child from drowning, even if it comes with the inconvenience and cost of ruining some of our favourite, expensive clothes and shoes. The moral ‘weight’ of saving a life far outweighs the cost in this scenario.

Yet, Singer extends this claim even further.

He notes that if we agree with this principle, then what follows from it is quite radical. If we act on the idea that we should always prevent very bad things from happening, provided we are not sacrificing anything too costly to ourselves, this makes a moral demand upon us.

The biggest implication is that, for Singer, it does not matter whether the drowning child is right in front of us, or in another country on the other side of the world. The principle is one of impartiality, universalisability and equality.

I can easily donate the cost of a new pair of shoes to a respected charitable organisation and save a life with the funds from that donation. In the same way as a moral agent would wade into the pond to rescue the drowning child, we can make some relatively small effort that prevents a very bad thing from occurring.

The ‘very bad thing’ may be that a child in a developing country starves to death or dies because their family cannot afford the treatment for a simple disease.

The expanding moral circle

Now, even among those in favour of charitable giving, some may argue that our duty to help does not extend beyond national borders. It is easier to help the child ‘right in front of us’, they may say. Our moral circle of concern includes our family and friends, and perhaps our fellow Australians.

But I am convinced by Singer’s argument that we ought to expand our moral circle of consideration to those in other countries, to those who live on planet Earth with us. The moral obligation to alleviate suffering has no borders.

And we are now most certainly ‘global citizens’. Thanks to our technology and growing awareness of what occurs around the globe, we have outgrown a nationalistic model and clearly inhabit an international world.

Global Citizens

In his 2002 book, One World: The Ethics of Globalisation, Singer supports the notion of the global citizen, which views all human beings as members of a single, global community. The global citizen is someone who recognises others as more similar to, rather than different from, oneself, even while taking seriously individual, social, cultural and political differences between people.

In a pragmatic sense, global citizens will support policies that extend aid beyond national borders and cultivate respectful and reciprocal relationships with others regardless of geographical distance or other differences (such as those related to race, religion, ethnicity, disability, sexuality, or gender identification).

For a long time now, Singer has also been pointing out that we are all responsible for important issues that are affecting each and every one of us. Back in 1972, he claimed, “unfortunately, most of the major evils – poverty, overpopulation, pollution – are problems in which everyone is almost equally involved.”

And, with our technology, media and the 24-hour news cycle, we are now confronted with the pain and suffering of distant others in ways that ensure they are immediately present to us. We can no longer claim ignorance of the help required by others, as social media brings their images and pleas directly to our handheld devices.

So, do we have an obligation to alleviate suffering wherever it is found? Does this obligation extend beyond national borders? Should we do what we can to prevent very bad things from happening, provided that in doing so we do not sacrifice anything of comparable moral significance? (For instance, we need not reduce ourselves to the level of poverty of those we seek to assist.)

If you answered yes, then you may already think of yourself as a global citizen.


Accountability the missing piece in Business Roundtable statement

Over the past few weeks a lot has been written about the “Statement on the Purpose of a Corporation” issued by the Business Roundtable in the United States.

The Business Roundtable, an association of chief executive officers from America’s leading companies, has shifted its position on who a corporation principally serves.

The original statement, published in 1997, suggested that companies exist to serve their shareholders. The new statement, signed by 163 chief executive officers, states that “While each of our individual companies serves its own corporate purpose, we share a fundamental commitment to all of our stakeholders.”

This “stakeholder approach” to corporate responsibility is not in itself ground-breaking. Nor is it a recent invention. In Johnson & Johnson’s corporate credo, developed in 1943, the company lists patients, doctors and nurses as its primary stakeholders, followed by employees, customers, communities and finally shareholders.

Indeed, the shift to a stakeholder approach may not be as profound in practice as some have suggested. Even the Business Roundtable has said that the previous statement “does not accurately describe the ways in which we and our fellow CEOs endeavour every day to create value for all our stakeholders, whose long-term interests are inseparable.”

Given this, it is possible that chief executive officers only support the stakeholder approach to the extent that it benefits both themselves and the shareholder. And we should not necessarily decry this. Adam Smith, sometimes referred to as the “father of economics”, argued that individual self-interest can produce optimal outcomes, the source of his so-called “invisible hand”.

Even Milton Friedman, the much-maligned University of Chicago economist who is often held out as the most vocal advocate for shareholder primacy, was not ignorant of the possibility that looking after the needs of stakeholders is not necessarily at odds with generating superior returns for shareholders in the long run. Famously, Friedman wrote:

“It may well be in the long-run interest of a corporation that is a major employer in a small community to devote resources to providing amenities to that community or to improving its government. That may make it easier to attract desirable employees, it may reduce the wage bill or lessen losses from pilferage and sabotage or have other worthwhile effects.”

However, as committed as the Business Roundtable might be, circumstances will arise that are not supportive of the stakeholder approach. Uncompetitive markets result in companies benefiting at the expense of consumers. Seemingly sensible incentive schemes can drive perverse outcomes. And a company’s products, despite being highly valued by its customers, can have broader, deleterious consequences (fossil fuel companies producing carbon dioxide, social media companies empowering covert actors, and technology companies producing “e-waste” are three examples of the latter).

The signatories to the revamped Statement on the Purpose of a Corporation would have you believe that they can be trusted to manage these types of scenarios. We should be cautious about taking them at their word. History shows that even well-intentioned chief executives find it extraordinarily difficult to drive the required change in a system where the incentives endorse the status quo. And in some cases, regardless of how hard they might try, they do not have the ability to do so. The most lucid corporate purpose statement won’t save us here.

It is therefore noteworthy that the Business Roundtable has omitted the idea of accountability from its statement. If chief executive officers are serious about serving all stakeholders, how will they be held accountable?

Milton Friedman also had something to say about this. He believed that corporations should conform “to the basic rules of society, both those embodied in law and those embodied in ethical custom.” But more importantly, as laissez-faire as he was, he acknowledged that there was a role for government to “enforce compliance” and hold those who don’t “play the game” accountable.

Arguably this is the most important piece of the puzzle: strong public institutions that develop good policy and hold corporations accountable. It is also the piece that is currently missing.

The recent financial services Royal Commission was a demonstration of what can happen when boundaries are established but not enforced. In a recent speech, Commissioner Kenneth Hayne asked us to “grapple closely” with what the seemingly endless calls for Royal Commissions in Australia “are telling us about the state of our democratic institutions.”

But more relevant to this essay, Commissioner Hayne also provided his view on purpose statements and industry codes in the Royal Commission’s final report. He labelled them as mere “public relations puffs”, proposing that the only way they can be effective is by making them enforceable:

“If industry codes are to be more than public relations puffs, the promises made must be made seriously. If they are made seriously (and those bound by the codes say that they are), the promises that are set out in the code … must be kept. This must entail that the promises can be enforced by those to whom the promises are made.”

To be sure, the stance taken by the Business Roundtable should be applauded. Their intentions are without question noble. But more powerful would be a description of how they are going to hold themselves accountable to the statement and create the conditions that deliver value for all their stakeholders over the long-term.

Of course, this exercise would reveal the costs (financial and otherwise) that are associated with being genuinely committed to positive outcomes for all stakeholders. For some chief executive officers, the price would be too high. And because, like all of us, chief executives have their limits, so too does self-regulation.


The new rules of ethical design in tech

This article was written for, and first published by, Atlassian.

Because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

One of my favourite memes for the last few years is This is Fine. It’s a picture of a dog sitting in a burning building with a cup of coffee. “This is fine,” the dog announces. “I’m okay with what’s happening. It’ll all turn out okay.” Then the dog takes a sip of coffee and melts into the fire.

Working in ethics and technology, I hear a lot of “This is fine.” The tech sector has built (and is building) processes and systems that exclude vulnerable users, that “nudge” users into making privacy concessions they probably shouldn’t, and that hardwire preconceived notions of right and wrong into technologies that will shape millions of people’s lives.

But many won’t acknowledge they could have ethics problems.

Credit: KC Green. https://topatoco.com/collections/this-is-fine

This is partly because, like the dog, they don’t concede that the fire might actually burn them in the end. Lots of people working in tech are willing to admit that someone else has a problem with ethics, but they’re less likely to believe that they themselves have an issue with ethics.

And I get it. Many times, people are building products that seem innocuous, fun, or practical. There’s nothing in there that makes us do a moral double-take.

The problem is, of course, that just because you’re not able to identify a problem doesn’t mean you won’t melt to death in the sixth frame of the comic. And there are issues you need to address in what you’re building, because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

Your product probably already has ethical issues

To put it bluntly: if you think you don’t need to consider ethics in your design process because your product doesn’t generate any ethical issues, you’ve missed something. Maybe your product is still fine, but you can’t be sure unless you’ve taken the time to consider your product and stakeholders through an ethical lens.

Look at it this way: if you haven’t made sure there are no bugs or biases in your design, you haven’t been the best designer you could be. Ethics is no different – it’s about making people (and their products) the best they can be.

Take Pokémon Go, for example. It’s an awesome little mobile game that gives users the chance to feel like Pokémon trainers in the real world. And it’s a business success story, generating some $3 billion in revenue by the end of 2018. But it’s exactly the kind of innocuous-seeming app most would think doesn’t have any ethical issues.

But it does. It distracted drivers, brought users to dangerous locations in the hopes of catching Pokémon, disrupted public infrastructure, didn’t seek the consent of the sites it included in the game, unintentionally excluded rural neighbourhoods (many populated by racial minorities), and released Pokémon in offensive locations (for instance, a poison gas Pokémon in the Holocaust Museum in Washington DC).

Quite a list, actually.

This is a shame, because all of this meant that Pokémon Go was not the best game it could be. And as designers, that’s the goal – to make something great. But something can’t be great unless it’s good, and that’s why designers need to think about ethics.

Here are a few things you can embed within your design processes to make sure you’re not going to burn to death, ethically speaking, when you finally launch.

1. Start with ethical pre-mortems

When something goes wrong with a product, we know it’s important to do a post-mortem to make sure we don’t repeat the same mistakes. Post-mortems happen all the time in ethics. A product is launched, a scandal erupts, and ethicists wind up as talking heads on the news discussing what went wrong.

As useful as post-mortems are, they can also be ways of washing over negligent practices. When something goes wrong and a spokesperson says, “We’re going to look closely at what happened to make sure it doesn’t happen again,” I want to say, “Why didn’t you do that before you launched?” That’s what an ethical pre-mortem does.

Sit down with your team and talk about what would make this product an ethical failure. Then work backwards to the root causes of that possible failure. How could you mitigate that risk? Can you reduce the risk enough to justify going forward with the project? Are your systems, processes and teams set up in a way that enables ethical issues to be identified and addressed?

Tech ethicist Shannon Vallor provides a list of handy pre-mortem questions:

  • How Could This Project Fail for Ethical Reasons?
  • What Would be the Most Likely Combined Causes of Our Ethical Failure/Disaster?
  • What Blind Spots Would Lead Us Into It?
  • Why Would We Fail to Act?
  • Why/How Would We Choose the Wrong Action?

  • What Systems/Processes/Checks/Failsafes Can We Put in Place to Reduce Failure Risk?

2. Ask the Death Star question

The book Rogue One: Catalyst tells the story of how the galactic empire managed to build the Death Star. The strategy was simple: take many subject matter experts and get them working in silos on small projects. With no team aware of what other teams were doing, only a few managers could make sense of what was actually being built.

Small teams, working in a limited role on a much larger project, with limited connection to the needs, goals, objectives or activities of other teams. Sound familiar? Siloing is a major source of ethical negligence. Teams whose workloads, incentives, and interests are limited to their particular contribution can seldom identify the downstream effects of that contribution, or what might happen when it’s combined with other work.

While it’s unlikely you’re secretly working for a Sith Lord, it’s still worth asking:

  • What’s the big picture here? What am I actually helping to build?
  • What contribution is my work making and are there ethical risks I might need to know about?
  • Are there dual-use risks in this product that I should be designing against?
  • If there are risks, are they worth it, given the potential benefits?

3. Get red teaming

Anyone who has worked in security will know that one of the best ways to find out whether a product is secure is to ask someone else to try to break it. We can use a similar concept for ethics. Once we’ve built something we think is great, ask some people to try to prove that it isn’t.

Red teams should ask:

  • What are the ethical pressure points here?
  • Have you made trade-offs between competing values/ideals? If so, have you made them in the right way?
  • What happens if we widen the circle of possible users to include some people you may not have considered?
  • Was this project one we should have taken on at all? (If you knew you were building the Death Star, it’s unlikely you could ever make it an ethical product. It’s a WMD.)
  • Is your solution the only one? Is it the best one?

4. Decide what your product’s saying

Ever seen a toddler discover a new toy? Their first instinct is to test the limits of what they can do. They’re not asking what the designer intended; they’re testing how the item can satisfy their needs, whatever they may be. In this case they chew it, throw it, paint with it, push it down a slide… a toddler can’t access the designer’s intention. The only prompts they have are those built into the product itself.

It’s easy to think about our products as though they’ll only be used in the way we want them to be used. In reality, though, technology design and usage is more like a two-way conversation than a set of instructions. Given this, it’s worth asking: if the user had no instructions on how to use this product, what would they infer purely from the design?

For example, we might infer from the hidden-away nature of some privacy settings on social media platforms that we shouldn’t tweak our privacy settings. Social platforms might say otherwise, but their design tells a different story. Imagine what your product would be saying to a user if you let it speak for itself.

This is doubly important, because your design is saying something. All technology is full of affordances – subtle prompts that invite the user to engage with it in some ways rather than others. They’re there whether you intend them to be or not, but if you’re not aware of what your design affords, you can’t know what messages the user might be receiving.

Design teams should ask:

  • What could a user infer from the design about how a product can/should be used?
  • How do you want people to use this?
  • How don’t you want people to use this?
  • Do your design choices and affordances reflect these expectations?
  • Are you unnecessarily preventing other legitimate uses of the technology?

5. Don’t forget to show your work

One of the (few) things I remember from my high school math classes is this: you get one mark for getting the right answer, but three marks for showing the working that led you there.

It’s also important for learning: if you don’t get the right answer, being able to interrogate your process is crucial (that’s what a post-mortem is).

For ethical design, the process of showing your work is about being willing to publicly defend the ethical decisions you’ve made. It’s a practical version of The Sunlight Test – where you test your intentions by asking if you’d do what you were doing if the whole world was watching.

Ask yourself (and your team):

  • Are there any limitations to this product?
  • What trade-offs have you made (e.g. between privacy and user-customisation)?
  • Why did you build this product? (What problems are you solving?)
  • Does this product risk being misused? If so, what have you done to mitigate those risks?
  • Are there any users who will have trouble using this product (for instance, people with disabilities)? If so, why can’t you fix this and why is it worth releasing the product, given it’s not universally accessible?
  • How likely is it that the good and bad effects will actually occur?

Ethics is an investment

I’m constantly amazed at how much money, time and personnel organisations are willing to invest in culture initiatives, wellbeing days and the like, while not spending a penny on ethics. There’s a general sense that if you’re a good person, then you’ll build ethical stuff, but the evidence overwhelmingly shows that’s not the case. Ethics needs to be something you invest in learning about, building resources and systems around, recruiting for, and incentivising.

It’s also something that needs to be engaged in for the right reasons. You can’t go into this process because you think it’s going to make you money or recruit the best people, because you’ll abandon it the second you find a more effective way to achieve those goals. A lot of the talk around ethics in technology at the moment has a particular flavour: anti-regulation. There is a hope that if companies are ethical, they can self-regulate.

I don’t see that as the role of ethics at all. Ethics can guide us toward making the best judgements about what’s right and what’s wrong. It can give us precision in our decisions, a language to explain why something is a problem, and a way of determining when something is truly excellent. But people also need justice: something to rely on if they’re the least powerful person in the room. Ethics has something to say here, but so do law and regulation.

If your organisation says they’re taking ethics seriously, ask them how open they are to accepting restraint and accountability. How much are they willing to invest in getting the systems right? Are they willing to sack their best performer if that person isn’t conducting themselves the way they should?


Ethics Explainer: Agape

How many people do you think we can love? Can we love everyone? Can we love everyone equally? The answers to these questions obviously depend on what the nature of this kind of love is, and what it looks like or demands of us in practice.

“Love is all you need”

Agape is a form of love that is commonly referred to as ‘neighbourly love’, ‘the love ethic’, or sometimes ‘universal love’. It rests on the idea that all people are our ‘brothers and sisters’ who deserve our care and respect. Agape invites us to actively consider and act upon the interests of other people, in more-or-less the same proportion as we consider (and usually act upon) our own interests.

We can trace the concept back to Ancient Greece, where more than one word was used to describe various kinds of love. Commonly, useful distinctions can be made between eros, philia, and agape.

Eros is the kind of love we most often associate with romantic partners, particularly in the early stages of a love affair. It’s the source of English words like ‘erotic’ and ‘erotica’.  

Philia generally refers to the affection felt between friends or family members. It is non-sexual in nature and usually reciprocal. It is characterised by a mutual good will that manifests in friendship.  

Although both eros and philia have others as their focus, they can both entail a kind of self-interest or self-gratification (after all, in an ideal world our friends and lovers both give us pleasure).

Agape is often contrasted to these kinds of love because it lacks self-interest, self-gratification or self-preservation. It is motivated by the interest and welfare of all others. It is global and compassionate, rather than focussed on a single individual or a few people. 

Another significant difference between agape and other forms of love is that we choose and cultivate agape. It’s not something that ‘happens’ to us, like becoming friends or falling romantically in love; it’s something we work toward. It is often considered praiseworthy and holds the lover to a high moral standard.

Agape is a form of love that values each person regardless of their individual characteristics or behaviour. In this way it is usually contrasted to eros or philia, where we usually value and like a person because of their characteristics.  

Agape in traditional texts  

The concept of agape we now have has been strongly influenced by the Christian tradition. It symbolises the love God has for people, and the love we (should) have for God in return. By extension, if we love our ‘neighbours’ (others) as we do God, then we should also love everyone else in a universal and unconditional manner, simply because they are created in the likeness of God. 

The Jesus narrative asks followers to act with love (agape) regardless of how they feel. This early Christian ethical tradition encourages us to “love thy neighbour as thyself”. In the Confucian tradition, K’ung Fu-tzu (Confucius) similarly says, “Work for the good of others as you would work for your own good.”

Another great exponent of this ethic of love is Mahatma Gandhi, who lived, worked, and died to keep this transcendent idea of universal love alive. Gandhi was known for saying, “Hate the sin, love the sinner”.

Advocates of non-violent resistance and pacifism, including Gandhi, Martin Luther King Jr, and John Lennon and Yoko Ono, also refer to the power of love as a unifying force that can overcome hate and remind us of our common humanity, regardless of our individual differences.

Such ideology rests on principles that are resonant with agape, urging us to love all people and forgive them for whatever wrongs we believe they have committed. In this way, agape sets a very high moral standard for us to follow.  

However, this idea of generalised, unconditional love leaves us with an important and challenging question: is it possible for human beings to achieve? And if so, how far may it extend? Can we really love the whole of humanity? 


MIT Media Lab: look at the money and morality behind the machine

When convicted sex offender, alleged sex trafficker and financier to the rich and famous Jeffrey Epstein was arrested and subsequently died in prison, there was a sense that some skeletons were about to come out of the closet.

However, few would have expected that the death of a well-connected, high-flying predator would bring into disrepute one of the world’s most reputable AI research labs. But this is 2019, so anything can happen. And happen it has.

Two weeks ago, The New Yorker’s Ronan Farrow reported that Joi Ito, the director of MIT’s prestigious Media Lab, which aims to “focus on the study, invention, and creative use of digital technologies to enhance the ways that people think, express, and communicate ideas, and explore new scientific frontiers,” had accepted $7.5 million in anonymous funding from Epstein, despite knowing MIT had him listed as a “disqualified donor” – presumably because of his previous convictions for sex offences.

Emails obtained by Farrow suggest Ito wrote to Epstein asking for funding to continue to pay staff salaries. Epstein allegedly procured donations from other philanthropists – including Bill Gates – for the Media Lab, but all record of Epstein’s involvement was scrubbed.

Since this was made public, Ito – who lists one of his areas of expertise as “the ethics and governance of technology” – has resigned. Peter Cohen, the funding director who worked with Ito at MIT and is now at another university, has been placed on administrative leave. Staff at MIT Media Lab have resigned in protest, and others are feeling deeply complicit, betrayed and disenchanted at what has transpired.

What happened at MIT’s Media Lab is an important case study in how the public conversation around the ethics of technology needs to expand to consider more than just the ethical character of systems themselves. We need to know who is building these systems, why they’re doing so and who is benefitting. In short, ethical considerations need to include a supply chain analysis of how the technology came to be created.

This is important because technology ethics – especially AI ethics – is currently going through what political philosopher Annette Zimmerman calls a “gold rush”. A range of groups, including The Ethics Centre, are producing guides, white papers, codes, principles and frameworks to try to respond to the widespread need for rigorous, responsive AI ethics. Some of these parties genuinely want to solve the issues; others just want to be able to charge clients and have retail products ready to go. In either case, the underlying concern is that the kind of ethics that gets paid for gets made.

For instance, funding is likely to dictate where the world’s best talent is recruited and what problems they’re asked to solve. Paying people to spend time thinking about these issues, and providing the infrastructure for multidisciplinary (or in MIT Media Lab’s case, “anti-disciplinary”) groups to collaborate, is expensive. Those with money will have a much louder voice in public and social debates around AI ethics, and considerable power to shape the norms that will eventually shape the future.

This is not entirely new. Academic research – particularly in the sciences – has always been fraught. It often requires philanthropic support, and it’s easy to rationalise the choice to take this from morally questionable people and groups (and, indeed, the downright contemptible). Vox’s Kelsey Piper summarised the argument neatly: “Who would you rather have $5 million: Jeffrey Epstein, or a scientist who wants to use it for research? Presumably the scientist, right?”

What this argument misses, as Piper points out, is that when it comes to these kinds of donations, we want to know where they’re coming from. Just as we don’t want to consume coffee made by slave labour, we don’t want to be chauffeured around by autonomous vehicles whose AI was paid for by money that helped boost the power and social standing of a predator.

More significantly, it matters that survivors of sexual violence – perhaps even Epstein’s own – might step into vehicles, knowingly or not, whose very existence stemmed from the crimes whose effects they now live with.

Paying attention to these concerns is simply about asking the same questions technology ethicists already ask in a different context. For instance, many already argue that the provenance of a tech product should be made transparent. In Ethical by Design: Principles for Good Technology, we argue that:

The complete history of artefacts and devices, including the identities of all those who have designed, manufactured, serviced and owned the item, should be freely available to any current owner, custodian or user of the device.

It’s a natural extension of this to apply the same requirements to the funding and ownership of tech products. We don’t just need to know who built them, perhaps we also need to know who paid for them to be built, and who is earning capital (financial or social) as a result.

AI and data ethics have recently focused on concerns around the unfair distribution of harms. It’s not enough, many argue, that an algorithm is beneficial 95% of the time, if the 5% who don’t benefit are all (for example) people with disabilities or from another disadvantaged, minority group. We can apply the same principle to the Epstein funding: if the moral costs of having AI funded by a repeated sex offender are borne by survivors of sexual violence, then this is an unacceptable distribution of risks.

MIT Media Lab, like other labs around the world, literally wants to design the future for all of us. It’s not unreasonable to demand that MIT Media Lab and other groups in the business of designing the future, design it on our terms – not those of a silent, anonymous philanthropist.


Ageing well is the elephant in the room when it comes to aged care

I recently came across a quote from philosopher Jean-Jacques Rousseau, talking about what it means to live well:

“To live is not to breathe but to act. It is to make use of our organs, our senses, our faculties, of all the parts of ourselves which give us the sentiment of our existence. The man who has lived the most is not he who has counted the most years but he who has most felt life. Men have been buried at one hundred who have died at their birth.”

Perhaps unsurprisingly, I found myself nodding sagely along as I read. Because life isn’t something we have, it’s something we do. It is a set of activities that we can fuse with meaning. There doesn’t seem much value to living if all we do with it is exist. More is demanded of us.

Rousseau’s quote isn’t just sage; it’s inspiring. It makes us want to live better – more fully. It captures an idea that moral philosophers have been exploring for thousands of years: what it means to ‘live well’ – to have a life worth living.

Unfortunately, it also illustrates a bigger problem. Because in our current reality, not everyone is able to live the way Rousseau outlines as being the gold standard for Really Good Living™.

This is a reality that professionals working in the aged care sector should know all too well. They work directly with people who don’t have full use of their organs, their faculties or their senses. And yet when I presented Rousseau’s thought to a room full of aged care professionals recently, they felt the same inspiration and agreement that I’d felt.

That’s a problem.

If the good life looks like a robust, activity-filled life, what does that tell us about the possibility for the elderly to live well? And if we don’t believe that the elderly can live well, what does that mean for aged care?

If you have been following the testimony around the Aged Care Royal Commission, you’ll be aware of the galling evidence of misconduct, negligence and at times outright abuse. The most vulnerable members of our communities, and our families, have been subject to mistreatment due in part to a commercial drive to increase the profitability of aged care facilities at the expense of person-centred care.

Absent from the discussion thus far has been the question of ‘the good life’. That’s understandable given the range of much more immediate and serious concerns facing the aged care sector, but it is one that cannot be ignored.

In 2015, celebrity chef and aged care advocate Maggie Beer told The Ethics Centre that she wanted “to create a sense of outrage about [elderly people] who are merely existing”. Since then she has gone on to provide evidence to the Royal Commission, because she believes that food is about so much more than nutrition. It’s about memory, community, pleasure and taking care and pride in your work.

Consider the evidence given around food standards in aged care. There have been suggestions that uneaten food is being collected and reused in the kitchens for the next meal; that there is a “race to the bottom” to cut the cost of meals at the expense of quality; and that the retailers selling to aged care facilities wildly inflate their prices. The result? Bad food for premium prices.

We should be disturbed by this. This food doesn’t even permit people to exist, let alone flourish. It leaves them wasting away, undernourished. It’s abhorrent. But what should be the appropriate standard for food within aged care? How should we determine what’s acceptable? Do we need food that is merely nutritious and of an acceptable standard, or does it need to do more than that?

Answering that question requires us to confront an underlying question:

Do we believe aged care is simply about meeting people’s basic needs until they eventually die?

Or is it much more than that? Is it about ensuring that every remaining moment of life provides the “sentiment of existence” that Rousseau was concerned with?

When you look at the approximately 190,000 words of testimony that have been given to the Royal Commission thus far, a clear answer begins to emerge. Alongside terms like ‘rights’, ‘harms’ and ‘fairness’ – which capture the bare minimum of ethical treatment for other people – appear words such as ‘empathy’, ‘love’ and ‘connection’. These words capture more than basic respect for persons; they capture a higher standard of how we should relate to other people. They’re compassionate words. People are expressing a demand not just for the elderly to be cared for, but to be cared about.

Counsel assisting the Royal Commission, Peter Gray QC, recently told the commission that “a philosophical shift is required, placing the people receiving care at the centre of quality and safety regulation. This means a new system, empowering them and respecting their rights.”

It’s clear that a philosophical shift is necessary. However, I would argue that it’s not clear whether ‘person-centred care’ is enough. Because unless we are able to confront the underlying social belief that at a certain age, all that remains for you in life is to die, we won’t be able to provide the kind of empowerment you felt reading Rousseau at the start of this article.

There is an ageist belief embedded within our society that all of the things that make life worth living are unavailable to the elderly. As long as we accept that to be true, we’ll be satisfied providing a level of care that simply avoids harm, rather than one that provides for a rich, meaningful and satisfying life.


Big Thinker: Peter Singer

Peter Singer (1946–present), one of the world’s most influential living philosophers, is best known for applying rigorous logic to a range of practical issues, from animal rights and giving to charity to the ethics of abortion and infanticide.

Singer was born in Melbourne in 1946 to Austrian Jewish Holocaust survivors. As a teen he declared his atheism and refused to celebrate his Bar Mitzvah. After studying law, history and philosophy at Melbourne University, he won a scholarship to Oxford University, writing his thesis on civil disobedience. In 1996 he ran unsuccessfully for the Australian Senate as a Greens candidate, and he has held posts at Melbourne, Monash, New York, London and Princeton universities. His impact on public debate and academic philosophy cannot be overstated.

A key aspect of Singer’s contribution is the idea of ‘equal consideration of interests’, which informs both his views on animals and his views on charity. It means we should give equal weight to the interests of all sentient beings – any being with the capacity to suffer and to feel pleasure and pain.

Singer is a consequentialist, which means he defines ethical actions as ones that maximise overall pleasure and reduce overall pain. Part of what makes him such a challenging and influential thinker is his application of utilitarianism to real-world problems to offer counter-intuitive yet compelling solutions.

Are you speciesist?

While at Oxford, Singer recalls a conversation with a friend over lunch that was the “most formative experience of [his] life”. Singer had the meat spaghetti, whereas his friend opted for the salad. His friend was the “first ethical vegetarian” he’d met. Two weeks later, Singer became a vegetarian and several years later published his seminal work Animal Liberation (1975).

Singer’s argument for not eating meat is more or less the same as that of another utilitarian philosopher, Jeremy Bentham, who wrote that “the question is not can they reason or can they talk, but can they suffer?” Like Bentham, Singer argues that because animals have the capacity to suffer, their suffering counts morally. Just as we rightly condemn torture, we should also condemn practices like factory farming that inflict unjustifiable pain on non-human animals. He popularised the term ‘speciesism’ – coined by psychologist Richard Ryder – to describe the privileging of humans over other animals.

Giving to charity

In his essay “Famine, Affluence and Morality” (1972), Singer argues that people in rich countries have a moral obligation to give to charities that help people in poverty overseas. He uses the analogy of a drowning child: if we were walking past a shallow pond and saw a child drowning, we would wade in and save the child, even if doing so meant wrecking our favourite and most expensive pair of shoes. Likewise, because we know there are children overseas dying of preventable, poverty-related diseases, we should be giving at least some of our income to charities that fight this.

Singer’s opponents argue that his view of charitable giving is psychologically untenable, and that there are real differences between giving to charity and saving a drowning child. For example, the physical act of pulling a child from the water is more morally compelling than sending a cheque overseas. Other objections include: we don’t know that a donation will definitely save a child; fighting poverty requires a collective global effort, not just individual donations; and charities are ineffective, with high overhead costs.

Singer concedes there may be psychological reasons why people would save the drowning child yet give little to charity, but he maintains that, however strange it seems, there are no morally relevant differences between the two cases.

Responding to the criticism that charities may not be effective has led Singer to become a proponent of ‘effective altruism’. In his book The Most Good You Can Do (2015), he describes how charity evaluators can recommend the most cost-effective ways to do good. Singer himself recommends giving on a progressive scale, depending on one’s income.

Instead of pursuing careers in academia, some of Singer’s brightest former students have chosen to work on Wall Street, aiming to make as much money as possible and then give it away to effective charities – an approach known as ‘earning to give’.

Controversy around infanticide

Singer has faced sustained criticism and protest throughout his career for his views on the sanctity of life and disability – especially in Germany, where in the 1990s his views were compared to Nazism and university courses that set his books were boycotted. He has always been a staunch supporter of abortion, on the grounds that a foetus lacks self-consciousness and the other criteria of personhood – but he also argues there is no moral difference between abortion in the womb and killing a newborn. Because a newborn cannot yet be classified as a person, he contends, if its parents do not want it to survive, or if it has an extreme disability such that keeping it alive would be very costly, killing it may be justified.

Religious critics argue that Singer’s ethics ignore the fundamental sanctity of human life. Disability rights advocates argue that his views are ableist: the assumption that a disabled person’s quality of life is lower than a non-disabled person’s ignores the socially constructed nature of disability – its harms and inconveniences arise largely because the built environment is made for able-bodied people.


The invisible middle: why middle managers aren't represented

The empty chair on stage was more than symbolic when The Banking and Finance Oath (BFO) hosted a panel discussion on who holds responsibility for culture within an organisation. In months of preparation, I had not found one middle manager who was willing or able to contribute to the discussion.

A chairman, director, CEO, HR specialist and a professor settled into their places, ready to give their opinions on the role they played in developing culture. The empty space at this event, three years ago, spoke volumes about the invisibility and voicelessness of those who have been promoted to manage others, but have little actual decision-making power.

Middle managers are often in the crossfire when it comes to apportioning blame for the failure to transform an organisation’s culture or to enact strategy. I have heard them derisively called “permafrost”, as if they are frozen into position and will only move with the application of a blowtorch.

“Culture Blockers” is another well-used epithet.

Middle managers are typically the people who head departments or business units, or who manage projects. They are responsible for implementing strategy imposed from above, and may have two levels of management below them.

Over the past 20 years, the ranks of middle managers have been slashed as organisations cut costs and aim for flatter hierarchies. Those occupying the surviving positions may be characterised like this:

  • They are often managing people for the first time, and are offered little training in professional development, project management, time management or conflict resolution
  • They may have been promoted for their technical competence rather than their management ability
  • Their management responsibilities may be added on top of the work they were already doing before being promoted
  • They have responsibility, but little formal authority
  • They may have a limited budget
  • They are charged with enacting policy and embedding values, but may not be given the context or the “why”
  • They have little autonomy or flexibility, and may lack a sense of purpose.

All these characteristics make middle management a highly stressful position. Two large US studies found that people who work at this level are more likely to suffer depression (18 per cent of supervisors and managers) and have the lowest levels of job satisfaction.

“I don’t know any middle manager that feels like they’re doing a good job”, a middle manager recently told me.

However, the reason we need to pay attention to our middle managers goes beyond concern for their welfare. Strategies and cultural change will fail if middle managers are not supported and motivated. They are the custodians of culture – and, some would argue, its creators – because people look to their behaviour as guidance for their own.

“We know what good looks like, but we’re not set up for success”, confided another middle manager.

Stanford University professor Behnam Tabrizi studied large-scale change and innovation efforts in 56 randomly selected companies and found that the 32 per cent that succeeded could thank the involvement of their middle managers.

“In those cases, mid-level managers weren’t merely managing incremental change; they were leading it by working levers of power up, across and down in their organisations,” he wrote in the Harvard Business Review in 2016.

Further evidence that middle managers are intrinsic to a business’s success comes from Google: founders Larry Page and Sergey Brin decided in 2002 that the young company could do without managers, but their experiment with a manager-less organisation lasted only a few months.

“They relented when too many people went directly to Page with questions about expense reports, interpersonal conflicts, and other nitty-gritty issues. And as the company grew, the founders soon realized that managers contributed in many other, important ways—for instance, by communicating strategy, helping employees prioritise projects, facilitating collaboration, supporting career development, and ensuring that processes and systems aligned with company goals,” wrote David Garvin, the C. Roland Christensen Professor at Harvard Business School.

With all of this in mind, you might think business leaders would now be seeking the views of their middle managers, to engage them in the cultural change required to regain public trust after the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry and other recent scandals. But sadly, no.

Just this month, at the BFO conference, I again presented a panel discussion on the plight of middle managers. Before the day, two of the middle-management participants were pulled – despite one having been nominated by a senior leader – and the discussion was held under the Chatham House Rule, with journalists asked to leave the room. Although I saw a glimpse of positivity, my research in the lead-up to the discussion suggests very little has changed, and this issue is not limited to financial services.

While senior leaders are working tirelessly to overcome challenges in this transitional time, part of the answer is right in front of them (well, below them) – their hard-working middle managers. But first, they have to make the effort to engage them with appreciation, seek their views with empathy, and involve them in the formulation of strategy.

This article was originally written for The Ethics Alliance.


What are millennials looking for at work?

Kat Dunn had a big life, but it wasn’t fulfilling. She was the youngest executive to serve on the senior leadership team of fund manager Perpetual Limited, but she went home each night feeling empty.

The former mergers and acquisitions lawyer tossed in the job two years ago and found her way into the non-profit sector, as CEO of the charity and social business promoter Grameen Australia.

Grameen Australia aims to take social business mainstream in Australia by scaling and starting up social businesses, and by advising socially minded institutions on how to do the same.

Dunn says Millennials are more interested in “purpose” than in money and security. She was speaking at Crossroads: The 2019 Banking and Finance Oath Conference in Sydney in August.

Dunn said Perpetual tried to talk her out of leaving the fund manager. “I think they thought that I was going through some sort of early-onset midlife crisis.

“Because, after all, what sane person would give up a prestigious job, good money at the age of 33 when my priority should have been financial security, even more status, and chasing those last two rungs to get to be CEO of a listed company?”

‘I was living the dream’

Dunn said she was conditioned to believe she should want to climb the corporate ladder and make a lot of money.

“At 32 years old, I was appointed to be the youngest senior executive on the senior leadership team. The year before, I had just done $3 billion worth of deals in 18 months. I was, as some would say, living the dream,” said Dunn.

“So, you can imagine how disillusioned I felt when I went home every night feeling like I was a fraud. I was wondering how I could possibly reconcile my career with my identity of myself as an ethical person”.

Dunn had been put in charge of building the company’s continuous improvement program, but the move proved a disappointment. “I was so green because I thought [the role] meant I had the privilege of actually making things better for my colleagues.

“Later, I realised that it was just code for riskless cost-cutting … and impossible-to-achieve growth targets.”

Dunn said she had childhood aspirations to help create a sustainable future. “But, instead, I found myself perpetuating the very system of greed that I had vowed to change.”

“My whole career, I was told I had to make a choice between making a living or making a difference. I couldn’t do both and I found that deeply unsettling. I had cognitive dissonance.”

A desire to do work that matters

Dunn made the point that her motivations are shared by many – and not just by Millennials (she just scrapes over the line into Generation X).

By 2025, 75 per cent of the workforce will be Millennials (born between 1980 and 2000). Only 13 per cent of Millennials say their career goal involves climbing the corporate ladder, and 60 per cent aspire to leave their companies within the next three years.

Moreover, 66 per cent of Millennials say their career goals involve starting their own business, according to a study by Bentley University.

“A steady paycheque and self-interest are not the primary drivers for many Millennials any more. The desire to do work that matters is,” said Dunn.

“Growing up poor, I thought that money would make me happy. I thought it would give me security and social standing. I thought that if I ticked all of the boxes, I would be free.

“At the height of my corporate career, though, I was anything but. I felt that making profits for profit’s sake was just deeply unfulfilling. For me, it was just the opposite of fulfilling – it caused me fear, distress and this stinging sense of isolation.

“What was strange is that no one else seemed to be outwardly admitting to feeling the same.”

The vision was impaired

Dunn recalled talking to a peer about strategy at the time and saying to him, “I think our vision is wrong”.

She told him: “Our vision is to be Australia’s largest and most trusted independent wealth manager. I think it’s wrong. It’s not actually a vision. It’s a metric on some imaginary league table and it’s all about us.

“It doesn’t say anything about creating anything of value for anyone else.”

Her colleague retorted: “Kat, we have bigger fish to fry than our vision”.

She knew, at that point, she would not realise her potential in that environment.

Aaron Hurst, author of the book The Purpose Economy, predicts that purpose will be the primary organising principle of the fourth [entrepreneurial] economy.

He defines “purpose” as the experience of three things: personal growth, connection and impact.

“When he wrote the book, five years ago, Hurst said that by 2020, CEOs expected demands for purpose in the consumer marketplace would increase by 300 per cent,” said Dunn.

“Now, what that means is that consumers deprioritise cost, convenience and function and make decisions based on their need to increase meaning in their lives.”

Dunn says that, as Millennials take on more leadership roles, this trend will become more evident in the job market.

“When you talk about how hard it is to find top talent to work in the industry, it is worthwhile knowing that for the top talent – the future leaders of the industry, of our country, our planet – work isn’t just about money.

“It is a vehicle to self-actualisation. They don’t just want to work nine-to-five for a secure income, they actually want to run through brick walls if it means they get to do work that they believe in, within a culture of integrity, for a purpose that leaves the world in a better place than they found it.

“And they want to work in a place that develops not only their skills, but sharpens their character.”

Dunn said that when she left her corporate job, she did not believe the financial services industry could build a better society and a sustainable future.

However, she changed her mind when she learned about Grameen Bank, microfinance and social business.

This article was originally written for The Ethics Alliance.