Accountability: the missing piece in the Business Roundtable statement

Over the past few weeks a lot has been written about the “Statement on the Purpose of a Corporation” issued by the Business Roundtable in the United States.

The Business Roundtable, an association of chief executive officers from America’s leading companies, has shifted its position on who a corporation principally serves.

The original statement, published in 1997, suggested that companies exist principally to serve their shareholders. The new statement, signed by 163 chief executive officers, states that “While each of our individual companies serves its own corporate purpose, we share a fundamental commitment to all of our stakeholders.”

This “stakeholder approach” to corporate responsibility is not in itself ground-breaking. Nor is it a recent invention. In Johnson & Johnson’s corporate credo, developed in 1943, the company lists patients, doctors and nurses as its primary stakeholders, followed by employees, customers, communities and finally shareholders.

Indeed, the shift to a stakeholder approach may not be as profound in practice as some have suggested. Even the Business Roundtable has said that the previous statement “does not accurately describe the ways in which we and our fellow CEOs endeavour every day to create value for all our stakeholders, whose long-term interests are inseparable.”

Given this, it is possible that chief executive officers only support the stakeholder approach to the extent that it benefits both themselves and their shareholders. And we should not necessarily decry this. Adam Smith, sometimes referred to as the “father of economics”, argued that individual self-interest can produce optimal outcomes – the idea behind his so-called “invisible hand”.

Even Milton Friedman, the much-maligned University of Chicago economist who is often held up as the most vocal advocate of shareholder primacy, recognised that looking after the needs of stakeholders is not necessarily at odds with generating superior returns for shareholders over the long run. Famously, Friedman wrote:

“It may well be in the long-run interest of a corporation that is a major employer in a small community to devote resources to providing amenities to that community or to improving its government. That may make it easier to attract desirable employees, it may reduce the wage bill or lessen losses from pilferage and sabotage or have other worthwhile effects.”

However, as committed as the Business Roundtable might be, circumstances will arise that do not support the stakeholder approach. Uncompetitive markets allow companies to benefit at the expense of consumers. Seemingly sensible incentive schemes can drive perverse outcomes. And a company’s products, despite being highly valued by its customers, can have broader, deleterious consequences (fossil fuel companies producing carbon dioxide, social media companies empowering covert actors, and technology companies producing “e-waste” are three examples of the latter).

The signatories to the revamped Statement on the Purpose of a Corporation would have you believe that they can be trusted to manage these types of scenarios. We should be cautious about taking them at their word. History shows that even well-intentioned chief executives find it extraordinarily difficult to drive the required change in a system where the incentives endorse the status quo. And in some cases, regardless of how hard they might try, they simply lack the ability to do so. The most lucid corporate purpose statement won’t save us here.

It is therefore noteworthy that the Business Roundtable has omitted the idea of accountability from its statement. If chief executive officers are serious about serving all stakeholders, how will they be held accountable?

Milton Friedman also had something to say about this. He believed that corporations should conform “to the basic rules of society, both those embodied in law and those embodied in ethical custom.” But more importantly, laissez-faire as he was, he acknowledged that there was a role for government to “enforce compliance” and hold those who don’t “play the game” accountable.

Arguably, this is the most important piece of the puzzle: strong public institutions that develop good policy and hold corporations accountable. It is also the piece that is currently missing.

The recent financial services Royal Commission demonstrated what can happen when boundaries are established but not enforced. In a recent speech, Commissioner Kenneth Hayne asked us to “grapple closely” with what the seemingly endless calls for Royal Commissions in Australia “are telling us about the state of our democratic institutions.”

But more relevant to this essay, Commissioner Hayne also provided his view on purpose statements and industry codes in the Royal Commission’s final report. He labelled them mere “public relations puffs”, arguing that the only way they can be effective is to make them enforceable:

“If industry codes are to be more than public relations puffs, the promises made must be made seriously. If they are made seriously (and those bound by the codes say that they are), the promises that are set out in the code … must be kept. This must entail that the promises can be enforced by those to whom the promises are made.”

To be sure, the stance taken by the Business Roundtable should be applauded. Their intentions are without question noble. But more powerful would be a description of how they are going to hold themselves accountable to the statement and create the conditions that deliver value for all their stakeholders over the long term.

Of course, this exercise would reveal the costs (financial and otherwise) that are associated with being genuinely committed to positive outcomes for all stakeholders. For some chief executive officers, the price would be too high. And because, like all of us, chief executives have their limits, so too does self-regulation.


The new rules of ethical design in tech

This article was written for, and first published by, Atlassian.

Because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

One of my favourite memes for the last few years is This is Fine. It’s a picture of a dog sitting in a burning building with a cup of coffee. “This is fine,” the dog announces. “I’m okay with what’s happening. It’ll all turn out okay.” Then the dog takes a sip of coffee and melts into the fire.

Working in ethics and technology, I hear a lot of “This is fine.” The tech sector has built (and is building) processes and systems that exclude vulnerable users, “nudges” that push users into privacy concessions they probably shouldn’t make, and designs that hardwire preconceived notions of right and wrong into technologies that will shape millions of people’s lives.

But many won’t acknowledge they could have ethics problems.

[Image: “This is Fine” comic. Credit: KC Green. https://topatoco.com/collections/this-is-fine]

This is partly because, like the dog, they don’t concede that the fire might actually burn them in the end. Lots of people working in tech are willing to admit that someone else has a problem with ethics, but they’re less likely to believe that they themselves have one.

And I get it. Many times, people are building products that seem innocuous, fun, or practical. There’s nothing in there that makes us do a moral double-take.

The problem is, of course, that just because you’re not able to identify a problem doesn’t mean you won’t melt to death in the sixth frame of the comic. And there are issues you need to address in what you’re building, because tech design is a human activity, and there’s no such thing as human behaviour without ethics.

Your product probably already has ethical issues

To put it bluntly: if you think you don’t need to consider ethics in your design process because your product doesn’t generate any ethical issues, you’ve missed something. Maybe your product is still fine, but you can’t be sure unless you’ve taken the time to consider your product and stakeholders through an ethical lens.

Look at it this way: if you haven’t made sure there are no bugs or biases in your design, you haven’t been the best designer you could be. Ethics is no different – it’s about making people (and their products) the best they can be.

Take Pokémon Go, for example. It’s an awesome little mobile game that gives users the chance to feel like Pokémon trainers in the real world. And it’s a business success story, reportedly earning around $3 billion in revenue by the end of 2018. But it’s exactly the kind of innocuous-seeming app most would think doesn’t have any ethical issues.

But it does. It distracted drivers, lured users to dangerous locations in the hopes of catching Pokémon, disrupted public infrastructure, didn’t seek the consent of the owners of sites it included in the game, unintentionally excluded rural neighbourhoods (many populated by racial minorities), and released Pokémon in offensive locations (for instance, a poison gas Pokémon in the Holocaust Museum in Washington DC).

Quite a list, actually.

This is a shame, because all of this meant that Pokémon Go was not the best game it could be. And as designers, that’s the goal – to make something great. But something can’t be great unless it’s good, and that’s why designers need to think about ethics.

Here are a few things you can embed within your design processes to make sure you’re not going to burn to death, ethically speaking, when you finally launch.

1. Start with ethical pre-mortems

When something goes wrong with a product, we know it’s important to do a postmortem to make sure we don’t repeat the same mistakes. Postmortems happen all the time in ethics. A product is launched, a scandal erupts, and ethicists wind up as talking heads on the news discussing what went wrong.

As useful as postmortems are, they can also be a way of washing over negligent practices. When something goes wrong and a spokesperson says, “We’re going to look closely at what happened to make sure it doesn’t happen again,” I want to ask, “Why didn’t you do that before you launched?” That’s exactly what an ethical premortem does.

Sit down with your team and talk about what would make this product an ethical failure. Then work backwards to the root causes of that possible failure. How could you mitigate that risk? Can you reduce the risk enough to justify going forward with the project? Are your systems, processes and teams set up in a way that enables ethical issues to be identified and addressed?

Tech ethicist Shannon Vallor provides a list of handy premortem questions:

  • How Could This Project Fail for Ethical Reasons?
  • What Would be the Most Likely Combined Causes of Our Ethical Failure/Disaster?
  • What Blind Spots Would Lead Us Into It?
  • Why Would We Fail to Act?
  • Why/How Would We Choose the Wrong Action?
  • What Systems/Processes/Checks/Failsafes Can We Put in Place to Reduce Failure Risk?

2. Ask the Death Star question

The book Rogue One: Catalyst tells the story of how the Galactic Empire managed to build the Death Star. The strategy was simple: take many subject matter experts and get them working in silos on small projects. With no team aware of what the other teams were doing, only a few managers could make sense of what was actually being built.

Small teams, working in a limited role on a much larger project, with little connection to the needs, goals, objectives or activities of other teams. Sound familiar? Siloing is a major source of ethical negligence. Teams whose workloads, incentives and interests are limited to their particular contribution can seldom identify the downstream effects of that contribution, or what might happen when it’s combined with other work.

While it’s unlikely you’re secretly working for a Sith Lord, it’s still worth asking:

  • What’s the big picture here? What am I actually helping to build?
  • What contribution is my work making and are there ethical risks I might need to know about?
  • Are there dual-use risks in this product that I should be designing against?
  • If there are risks, are they worth it, given the potential benefits?

3. Get red teaming

Anyone who has worked in security will know that one of the best ways to find out whether a product is secure is to ask someone else to try to break it. We can use a similar concept for ethics. Once we’ve built something we think is great, ask some people to try to prove that it isn’t.

Red teams should ask:

  • What are the ethical pressure points here?
  • Have you made trade-offs between competing values/ideals? If so, have you made them in the right way?
  • What happens if we widen the circle of possible users to include some people you may not have considered?
  • Was this project one we should have taken on at all? (If you knew you were building the Death Star, it’s unlikely you could ever make it an ethical product. It’s a WMD.)
  • Is your solution the only one? Is it the best one?

4. Decide what your product’s saying

Ever seen a toddler discover a new toy? Their first instinct is to test the limits of what they can do with it. They’re not asking “what was the intention of the designer?” – they’re testing how the item can satisfy their needs, whatever those may be. They chew it, throw it, paint with it, push it down a slide… a toddler can’t access the designer’s intention. The only prompts they have are those built into the product itself.

It’s easy to think about our products as though they’ll only be used in the way we want them to be used. In reality, though, technology design and usage is more like a two-way conversation than a set of instructions. Given this, it’s worth asking: if the user had no instructions on how to use this product, what would they infer purely from the design?

For example, we might infer from the hidden-away nature of some privacy settings on social media platforms that we shouldn’t tweak our privacy settings. Social platforms might say otherwise, but their design tells a different story. Imagine what your product would be saying to a user if you let it speak for itself.

This is doubly important, because your design is saying something. All technology is full of affordances – subtle prompts that invite the user to engage with it in some ways rather than others. They’re there whether you intend them to be or not, but if you’re not aware of what your design affords, you can’t know what messages the user might be receiving.

Design teams should ask:

  • What could a user infer from the design about how the product can/should be used?
  • How do you want people to use this?
  • How don’t you want people to use this?
  • Do your design choices and affordances reflect these expectations?
  • Are you unnecessarily preventing other legitimate uses of the technology?

5. Don’t forget to show your work

One of the (few) things I remember from my high school math classes is this: you get one mark for getting the right answer, but three marks for showing the working that led you there.

It’s also important for learning: if you don’t get the right answer, being able to interrogate your process is crucial (that’s what a post-mortem is).

For ethical design, the process of showing your work is about being willing to publicly defend the ethical decisions you’ve made. It’s a practical version of The Sunlight Test – where you test your intentions by asking if you’d do what you were doing if the whole world was watching.

Ask yourself (and your team):

  • Are there any limitations to this product?
  • What trade-offs have you made (e.g. between privacy and user-customisation)?
  • Why did you build this product (what problems are you solving)?
  • Does this product risk being misused? If so, what have you done to mitigate those risks?
  • Are there any users who will have trouble using this product (for instance, people with disabilities)? If so, why can’t you fix this and why is it worth releasing the product, given it’s not universally accessible?
  • How likely are the good and bad effects to actually eventuate?

Ethics is an investment

I’m constantly amazed at how much money, time and personnel organisations are willing to invest in culture initiatives, wellbeing days and the like, while not spending a penny on ethics. There’s a general sense that if you’re a good person, then you’ll build ethical stuff, but the evidence overwhelmingly shows that’s not the case. Ethics needs to be something you invest in learning about, building resources and systems around, recruiting for, and incentivising.

It’s also something that needs to be engaged in for the right reasons. You can’t go into this process because you think it’s going to make you money or recruit the best people, because you’ll abandon it the second you find a more effective way to achieve those goals. A lot of the talk around ethics in technology at the moment has a particular flavour: anti-regulation. There is a hope that if companies are ethical, they can self-regulate.

I don’t see that as the role of ethics at all. Ethics can guide us toward making the best judgements about what’s right and what’s wrong. It can give us precision in our decisions, a language to explain why something is a problem, and a way of determining when something is truly excellent. But people also need justice: something to rely on if they’re the least powerful person in the room. Ethics has something to say here, but so do law and regulation.

If your organisation says they’re taking ethics seriously, ask them how open they are to accepting restraint and accountability. How much are they willing to invest in getting the systems right? Are they willing to sack their best performer if that person isn’t conducting themselves the way they should?


MIT Media Lab: look at the money and morality behind the machine

When convicted sex offender, alleged sex trafficker and financier to the rich and famous Jeffrey Epstein was arrested and subsequently died in prison, there was a sense that some skeletons were about to come out of the closet.

However, few would have expected that the death of a well-connected, high-flying predator would bring into disrepute one of the world’s most reputable AI research labs. But this is 2019, so anything can happen. And happen it has.

Two weeks ago, New Yorker magazine’s Ronan Farrow reported that Joi Ito, the director of MIT’s prestigious Media Lab, which aims to “focus on the study, invention, and creative use of digital technologies to enhance the ways that people think, express, and communicate ideas, and explore new scientific frontiers,” had accepted $7.5 million in anonymous funding from Epstein, despite knowing MIT had him listed as a “disqualified donor” – presumably because of his previous convictions for sex offences.

Emails obtained by Farrow suggest Ito wrote to Epstein asking for funding to continue to pay staff salaries. Epstein allegedly procured donations from other philanthropists – including Bill Gates – for the Media Lab, but all record of Epstein’s involvement was scrubbed.

Since this has been made public, Ito – who lists one of his areas of expertise as “the ethics and governance of technology” – has resigned. The funding director who worked with Ito at MIT, Peter Cohen, now working at another university, has been placed on administrative leave. Staff at MIT Media Lab have resigned in protest and others are feeling deeply complicit, betrayed and disenchanted at what has transpired.

What happened at MIT’s Media Lab is an important case study in how the public conversation around the ethics of technology needs to expand to consider more than just the ethical character of systems themselves. We need to know who is building these systems, why they’re doing so and who is benefitting. In short, ethical considerations need to include a supply chain analysis of how the technology came to be created.

This is important because technology ethics – especially AI ethics – is currently going through what political philosopher Annette Zimmermann calls a “gold rush”. A range of groups, including The Ethics Centre, are producing guides, white papers, codes, principles and frameworks to try to respond to the widespread need for rigorous, responsive AI ethics. Some of these parties genuinely want to solve the issues; others just want to be able to charge clients and have retail products ready to go. In either case, the underlying concern is that the kind of ethics that gets paid gets made.

For instance, funding is likely to dictate where the world’s best talent is recruited and what problems they’re asked to solve. Paying people to spend time thinking about these issues, and providing the infrastructure for multidisciplinary (or, in MIT Media Lab’s case, “antidisciplinary”) groups to collaborate, is expensive. Those with money will have a much louder voice in public and social debates around AI ethics and considerable power to shape the norms that will eventually shape the future.

This is not entirely new. Academic research – particularly in the sciences – has always been fraught. It often requires philanthropic support, and it’s easy to rationalise the choice to take this from morally questionable people and groups (and, indeed, the downright contemptible). Vox’s Kelsey Piper summarised the argument neatly: “Who would you rather have $5 million: Jeffrey Epstein, or a scientist who wants to use it for research? Presumably the scientist, right?”

What this argument misses, as Piper points out, is that when it comes to these kinds of donations, we want to know where they’re coming from. Just as we don’t want to consume coffee made with slave labour, we don’t want to be chauffeured around by autonomous vehicles whose AI was paid for by money that helped boost the power and social standing of a predator.

More significantly, it matters that survivors of sexual violence – perhaps even Epstein’s own – might step into vehicles, knowingly or not, whose very existence stemmed from the crimes whose effects they now live with.

Paying attention to these concerns is simply about asking the same questions technology ethicists already ask in a different context. For instance, many already argue that the provenance of a tech product should be made transparent. In Ethical by Design: Principles for Good Technology, we argue that:

The complete history of artefacts and devices, including the identities of all those who have designed, manufactured, serviced and owned the item, should be freely available to any current owner, custodian or user of the device.

It’s a natural extension of this to apply the same requirements to the funding and ownership of tech products. We don’t just need to know who built them; perhaps we also need to know who paid for them to be built, and who is earning capital (financial or social) as a result.

AI and data ethics have recently focused on concerns around the unfair distribution of harms. It’s not enough, many argue, that an algorithm is beneficial 95% of the time, if the 5% who don’t benefit are all (for example) people with disabilities or from another disadvantaged, minority group. We can apply the same principle to the Epstein funding: if the moral costs of having AI funded by a repeated sex offender are borne by survivors of sexual violence, then this is an unacceptable distribution of risks.

MIT Media Lab, like other labs around the world, literally wants to design the future for all of us. It’s not unreasonable to demand that MIT Media Lab and other groups in the business of designing the future, design it on our terms – not those of a silent, anonymous philanthropist.


Ageing well is the elephant in the room when it comes to aged care

I recently came across a quote from philosopher Jean Jacques Rousseau, talking about what it means to live well:

“To live is not to breathe but to act. It is to make use of our organs, our senses, our faculties, of all the parts of ourselves which give us the sentiment of our existence. The man who has lived the most is not he who has counted the most years but he who has most felt life. Men have been buried at one hundred who have died at their birth.”

Perhaps unsurprisingly, I found myself nodding sagely along as I read. Because life isn’t something we have, it’s something we do. It is a set of activities that we can fuse with meaning. There doesn’t seem much value to living if all we do with it is exist. More is demanded of us.

Rousseau’s quote isn’t just sage; it’s inspiring. It makes us want to live better – more fully. It captures an idea that moral philosophers have been exploring for thousands of years: what it means to ‘live well’ – to have a life worth living.

Unfortunately, it also illustrates a bigger problem. Because in our current reality, not everyone is able to live the way Rousseau outlines as the gold standard for Really Good Living™.

This is a reality that professionals working in the aged care sector should know all too well. They work directly with people who don’t have full use of their organs, their faculties or their senses. And yet when I presented Rousseau’s thought to a room full of aged care professionals recently, they felt the same inspiration and agreement that I’d felt.

That’s a problem.

If the good life looks like a robust, activity-filled life, what does that tell us about the possibility for the elderly to live well? And if we don’t believe that the elderly can live well, what does that mean for aged care?

If you have been following the testimony to the Aged Care Royal Commission, you’ll be aware of the galling evidence of misconduct, negligence and, at times, outright abuse. The most vulnerable members of our communities, and our families, have been subject to mistreatment due in part to a commercial drive to increase the profitability of aged care facilities at the expense of person-centred care.

Absent from the discussion thus far has been the question of ‘the good life’. That’s understandable, given the range of more immediate and serious concerns facing the aged care sector, but it is a question that cannot be ignored.

In 2015, celebrity chef and aged care advocate Maggie Beer told The Ethics Centre that she wanted “to create a sense of outrage about [elderly people] who are merely existing”. Since then she has gone on to provide evidence to the Royal Commission, because she believes that food is about so much more than nutrition. It’s about memory, community, pleasure and taking care and pride in your work.

Consider the evidence given around food standards in aged care. There have been suggestions that uneaten food is being collected and reused in the kitchens for the next meal; that there is a “race to the bottom” to cut the cost of meals at the expense of quality; and that the retailers selling to aged care facilities wildly inflate their prices. The result? Bad food for premium prices.

We should be disturbed by this. This food barely permits people to exist, let alone flourish. It leaves them wasting away, undernourished. It’s abhorrent. But what should be the appropriate standard for food within aged care? How should we determine what’s acceptable? Do we need food that is merely nutritious and of an acceptable standard, or does it need to do more than that?

Answering that question requires us to confront an underlying question:

 Do we believe aged care is simply about providing for people’s basic needs until they eventually die? 

Or is it much more than that? Is it about ensuring that every remaining moment of life provides the “sentiment of existence” that Rousseau was concerned with?

When you look at the approximately 190,000 words of testimony given to the Royal Commission thus far, a clear answer begins to emerge. Alongside terms like ‘rights’, ‘harms’ and ‘fairness’ – which capture the bare minimum of ethical treatment of other people – appear words such as ‘empathy’, ‘love’ and ‘connection’. These words capture more than basic respect for persons; they capture a higher standard of how we should relate to other people. They’re compassionate words. People are expressing a demand not just for the elderly to be cared for, but for them to be cared about.

Counsel assisting the Royal Commission, Peter Gray QC, recently told the commission that “a philosophical shift is required, placing the people receiving care at the centre of quality and safety regulation. This means a new system, empowering them and respecting their rights.”

It’s clear that a philosophical shift is necessary. However, I would argue that it’s not clear whether ‘person-centred care’ is enough. Because unless we are able to confront the underlying social belief that at a certain age, all that remains for you in life is to die, we won’t be able to provide the kind of empowerment you may have felt reading Rousseau at the start of this article.

There is an ageist belief embedded within our society that all of the things that make life worth living are unavailable to the elderly. As long as we accept that to be true, we’ll be satisfied providing a level of care that simply avoids harm, rather than one that provides for a rich, meaningful and satisfying life.


The invisible middle: why middle managers aren't represented

The empty chair on stage was more than symbolic when The Banking and Finance Oath (BFO) hosted a panel discussion on who holds responsibility for culture within an organisation. In months of preparation, I had not found one middle manager who was willing or able to contribute to the discussion.

A chairman, director, CEO, HR specialist and a professor settled into their places, ready to give their opinions on the role they played in developing culture. The empty space at this event, three years ago, spoke volumes about the invisibility and voicelessness of those who have been promoted to manage others, but have little actual decision-making power.

Middle managers are often in the crossfire when it comes to apportioning blame for the failure to transform an organisation’s culture or to enact strategy. I have heard them derisively called “permafrost”, as if they are frozen into position and will only move with the application of a blowtorch.

“Culture Blockers” is another well-used epithet.

Middle managers are typically those people who head departments or business units, or who are project managers. It is their responsibility to implement strategy imposed from above, and they may have two management levels below them.

Over the past 20 years, the ranks of middle managers have been slashed as organisations cut unnecessary costs and aim for flatter hierarchies. Those occupying the surviving positions may be characterised like this:

  • They are often managing people for the first time and are offered little training in professional development, project management, time management and conflict resolution
  • They may have been promoted because of their technical competence, rather than management ability
  • Their management responsibilities may be added on top of what they were already doing before being promoted
  • They have responsibility, but little formal authority
  • They may have limited budget
  • They are charged with enacting policy and embedding values, but may not be given the context or the “why”
  • They have little autonomy or flexibility and may lack a sense of purpose.

All these characteristics make middle management a highly stressful position. Two large US studies found that people who work at this level are more likely to suffer depression (18 per cent of supervisors and managers) and have the lowest levels of job satisfaction.

“I don’t know any middle manager that feels like they’re doing a good job”, a middle manager recently told me.

However, the reason we need to pay attention to our middle managers is more than just concern for their welfare. Strategies and cultural change will fail if middle managers are not supported and motivated. They are the custodians of culture – and, some would argue, its creators – as people observe their behaviour as guidance for their own.

“We know what good looks like, but we’re not set up for success”, confided another middle manager.

Stanford University professor, Behnam Tabrizi, studied large-scale change and innovation efforts in 56 randomly selected companies and found that the 32 per cent that succeeded in their efforts could thank the involvement of their middle managers.

“In those cases, mid-level managers weren’t merely managing incremental change; they were leading it by working levers of power up, across and down in their organisations,” he wrote in the Harvard Business Review in 2016.

Offering more evidence that middle managers are intrinsic to a business’s success, Google founders Larry Page and Sergey Brin decided in 2002, in the company’s early days, that they could do without managers. Their experiment with a manager-less organisation lasted only a few months.

“They relented when too many people went directly to Page with questions about expense reports, interpersonal conflicts, and other nitty-gritty issues. And as the company grew, the founders soon realized that managers contributed in many other, important ways—for instance, by communicating strategy, helping employees prioritise projects, facilitating collaboration, supporting career development, and ensuring that processes and systems aligned with company goals,” wrote David Garvin, the C. Roland Christensen Professor at Harvard Business School.

With all of this in mind, you may think business leaders would now be seeking the views of their middle managers, to engage them in the cultural change required to regain public trust after the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry and other recent scandals. But sadly, no.

Just this month at The BFO conference, I was again presenting a panel discussion on the plight of middle managers. Prior to the day, two of the middle management participants – despite one being nominated by a senior leader – were pulled out. In addition, the discussion was held under the Chatham House Rule, with journalists asked to leave the room. Although I saw a glimpse of positivity, my research leading up to the discussion suggests very little has changed, and this issue is not limited to financial services.

While senior leaders are working tirelessly to overcome challenges in this transitional time, part of the answer is right in front of them (well, below them) – their hard-working middle managers. But first, they have to make the effort to engage them with appreciation, seek their views with empathy, and involve them in the formulation of strategy.

This article was originally written for The Ethics Alliance.


What are millennials looking for at work?

Kat Dunn had a big life, but it wasn’t fulfilling. She was the youngest executive to serve on the senior leadership team of fund manager Perpetual Limited, but she went home each night feeling empty.

The former mergers and acquisitions lawyer tossed in the job two years ago and found her way into the non-profit sector, as CEO of the charity and social business promoter Grameen Australia.

Grameen Australia aims to take Social Business mainstream in Australia by scaling and starting up social businesses and advising socially minded institutions on how to do the same.

Speaking at Crossroads: The 2019 Banking and Finance Oath Conference in Sydney in August, Dunn said Millennials are more interested in “purpose” than in money and security.

Dunn said Perpetual tried to talk her out of leaving the fund manager. “I think they thought that I was going through some sort of early-onset midlife crisis.

“Because, after all, what sane person would give up a prestigious job, good money at the age of 33 when my priority should have been financial security, even more status, and chasing those last two rungs to get to be CEO of a listed company?”

‘I was living the dream’

Dunn said she was conditioned to believe she should want to climb the corporate ladder and make a lot of money.

“At 32 years old, I was appointed to be the youngest senior executive on the senior leadership team. The year before, I had just done $3 billion worth of deals in 18 months. I was, as some would say, living the dream,” said Dunn.

“So, you can imagine how disillusioned I felt when I went home every night feeling like I was a fraud. I was wondering how I could possibly reconcile my career with my identity of myself as an ethical person”.

Dunn had been put in charge of building the company’s continuous improvement program, but the move proved a disappointment. “I was so green because I thought [the role] meant I had the privilege of actually making things better for my colleagues.

“Later, I realised that it was just code for riskless cost-cutting … and impossible-to-achieve growth targets.”

Dunn said she had childhood aspirations to help create a sustainable future. “But, instead, I found myself perpetuating the very system of greed that I had vowed to change.”

“My whole career, I was told I had to make a choice between making a living or making a difference. I couldn’t do both and I found that deeply unsettling. I had cognitive dissonance.”

A desire to do work that matters

Dunn made the point that her motivations are shared by many – and not just by Millennials (she just scrapes over the line into Generation X).

By 2025, 75 per cent of the workforce will be Millennials (born between 1980 and 2000). Only 13 per cent of Millennials say that their career goals involve climbing the corporate ladder, and 60 per cent aspire to leave their companies within the next three years.

Moreover, 66 per cent of Millennials say their career goals involve starting their own business, according to a study by Bentley University.

“A steady paycheque and self-interest are not the primary drivers for many Millennials any more. The desire to do work that matters is,” said Dunn.

“Growing up poor, I thought that money would make me happy. I thought it would give me security and social standing. I thought that if I ticked all of the boxes, that I would be free.

“At the height of my corporate career, though, I was anything but. I felt that making profits for profit’s sake was just deeply unfulfilling. For me, it was just the opposite of fulfilling – it caused me fear, distress and this stinging sense of isolation.

“What was strange is that no one else seemed to be outwardly admitting to feeling the same.”

The vision was impaired

Dunn recalled talking to a peer about strategy at the time and saying to him, ‘I think our vision is wrong’.

She told him: “Our vision is to be Australia’s largest and most trusted independent wealth manager. I think it’s wrong. It’s not actually a vision. It’s a metric on some imaginary league table and it’s all about us.

“It doesn’t say anything about creating anything of value for anyone else.”

Her colleague retorted: “Kat, we have bigger fish to fry than our vision”.

She knew, at that point, she would not realise her potential in that environment.

Aaron Hurst, the author of the book The Purpose Economy, predicts that purpose is going to be the primary organising principle for the fourth [entrepreneurial] economy.

He defines “purpose” as the experience of three things: personal growth, connection and impact.

“When he wrote the book, five years ago, Hurst said that by 2020, CEOs expected demands for purpose in the consumer marketplace would increase by 300 per cent,” said Dunn.

“Now, what that means is that consumers deprioritise cost, convenience and function and make decisions based on their need to increase meaning in their lives.”

Dunn says that, as Millennials take on more leadership roles, this trend will become more evident in the job market.

“When you talk about how hard it is to find top talent to work in the industry, it is worthwhile knowing that for the top talent – the future leaders of the industry, of our country, our planet – work isn’t just about money.

“It is a vehicle to self-actualisation. They don’t just want to work nine-to-five for a secure income, they actually want to run through brick walls if it means they get to do work that they believe in, within a culture of integrity, for a purpose that leaves the world in a better place than they found it. And they want to work in a place that develops not only their skills, but sharpens their character.”

Dunn said that when she left her corporate job, she would not have believed that the financial services industry could build a better society and a sustainable future.

However, she changed her mind when she learned about Grameen Bank, microfinance and social business.

This article was originally written for The Ethics Alliance.


Following a year of scandals, what's the future for boards?

As guardians of moral behaviour, company boards continue to be challenged. After a year of wall-to-wall scandals, especially within the banking and finance sector, many are asking whether there are better ways to oversee what is going on in a business.

A series of damning inquiries, including the recent Royal Commission into Financial Services, has spurred much discussion about holding boards to account – but far less about the structure of boards and whose interests they serve.

Ethicist Leslie Cannold expressed her frustration at this state of affairs in a speech to the finance industry, saying the Royal Commission was a lost opportunity to look at “root and branch” reform.

“We need to think of changes that will propel different kinds of leaders into place and rate their performance according to different criteria – criteria that relate to the wellbeing of a range of stakeholders, not just shareholders,” she said at the Crossroads: The 2019 Banking and Finance Oath Conference in Sydney in August.

This issue is close to the heart of Andrew Linden, a PhD researcher on German corporate governance and a sessional lecturer in RMIT’s School of Management. Linden favours the German system of having an upper supervisory board, with 50 per cent of directors elected by employees, and a lower management board to handle day-to-day operations.

This system was imposed on the Germans after World War II to ensure companies were more socially responsible but, despite its advantages, has not spread to the English-speaking world, says Linden.

“For 40 years, corporate Australia has been allowed to get away with the idea that all they had to do was to serve shareholders and to maximise the value returned to shareholders.

“Now, that’s never been a feature of the Corporate Law. And directors have had very specific duties, publicly imposed duties, that they ought to have been fulfilling – but they haven’t.”

It is the responsibility of directors of public companies to govern in the corporation’s best interests and also ensure that corporations do not impose costs on the wider community, he says.

“All these piecemeal responses to the Banking Royal Commission are just Band-Aids on bullet wounds. They are not actually going to fix the problem. All through these corporate governance debates, there has not been too much of a focus on board design.”

The German solution – a two-tier model

This board structure, proposed by Linden, would have non-executive directors on an upper (supervisory) board, which would be legally tasked with monitoring and control, approving strategy and appointing auditors.

A lower management board would have executive directors responsible for implementing the approved strategy and day-to-day management.

This structure would separate non-executive from executive directors and create clear, legally separate roles for both groups, he says.

“Research into European banks suggests having employee and union representation on supervisory boards, combined with introduction of employee elected works councils to deal with management over day-to-day issues, reduces systemic risk and holds executives accountable,” according to Linden, who wrote about the subject with Warren Staples (senior lecturer in Management, RMIT University) in The Conversation last year.

Denmark, Norway and Sweden also have employee directors on corporate boards and the model is being proposed in the US by Democratic presidential hopefuls, including Senators Elizabeth Warren and Bernie Sanders.

As Linden said, “All the solutions that people in the English-speaking world typically think about are ownership-based solutions. So, you either go for co-operative ownership as an alternative to shareholder ownership, or, alternatively, it’s public ownership. All of these debates over decades have been about ‘who are the best owners’, not necessarily about the design of their governing bodies.”

Linden says research shows the riskiest banks are those that are English-speaking, for-profit, shareholder-dominated and overseen by independent-director-dominated boards.

“And they have been the ones that have imposed the most cost on communities,” he says.

Outsourcing the board

Allowing consultant-like companies to oversee governance is a solution proposed by two law academics in the US, who say they are “trying to encourage people to innovate in governance in ways that are fundamentally different than just little tweaks at the edges”.

Law professors Stephen Bainbridge (UCLA) and Todd Henderson (University of Chicago) say organisations are familiar with the idea of outsourcing responsibilities to lawyers, accountants and financial service providers.

“We envision a corporation, say Microsoft or ExxonMobil, hiring another company, say Boards-R-Us, to provide it with director services, instead of hiring 10 or so separate ‘companies’ to do so,” Henderson explained in an article.

“Just as other service firms, like Kirkland and Ellis, McKinsey and Company, or KPMG, are staffed by professionals with large support networks, so too would BSPs [board service providers] bring the various aspects of director services under a single roof. We expect the gains to efficiency from such a move to be quite large.

“We argue that hiring a BSP to provide board services instead of a loose group of sole proprietorships [non-executive directors] will increase board accountability, both from markets and judicial supervision.”

Outsourcing to specialists is a familiar concept, said Bainbridge in a video interview with The Conference Board.

“Would you rather deal with, you know, twelve part-timers who get hired in off the street, or would you rather deal with a professional with a team of professionals?”

Your director is a robot

A Hong Kong venture capital firm, Deep Knowledge Ventures, appointed the first-ever robot director to its board in 2014, giving the artificial intelligence the power to veto investment decisions it deemed too risky.

Australia’s Chief Scientist, Dr Alan Finkel, told company directors that he had initially thought the robo-director, named Vital, was a mere publicity stunt.

However, five years on “… the company is still in business. Vital is still on the Board. And waiting in the wings is her successor: Vital 2.0,” Finkel said at a governance summit held by the Australian Institute of Company Directors in March.

“The experiment was so successful that the CEO predicts we’ll see fully autonomous companies – able to operate without any human involvement – in the coming decade.

“Stop and think about it: fully autonomous companies able to operate without any human involvement. There’d be no-one to come along to AICD summits!”

Dr Finkel reassured his audience that their jobs were safe … for now.

“… those director-bots would still lack something vital – something truly vital – and that’s what we call artificial general intelligence: the digital equivalent of the package deal of human abilities, human insights and human experiences,” he said.

“The experts tell us that the world of artificial general intelligence is unlikely to be with us until 2050, perhaps longer. Thus, shareholders, customers and governments who want that package deal will have to look to you for quite some time,” he told the audience.

“They will rely on the value that you, and only you, can bring, as a highly capable human being, to your role.”

Linden agrees that robo-directors have limitations and that, before people get too excited about the prospect of technology providing the solution to governance, they need to get back to basics.

“All these issues to do with governance failures get down to questions of ethics and morality and lawfulness – on making judgments about what is appropriate conduct,” he says, adding that it was “hopelessly naïve” to expect machines to be able to make moral judgements.

“These systems depend on who designs them, what kind of data goes into them. That old analogy ‘garbage in, garbage out’ is just as applicable to artificial intelligence as it is to human systems.”

This article was originally written for The Ethics Alliance.


How BlueRock uses culture to attract top talent

Glossy high-rises form a wall of corporate Australia along the Yarra River. The size of these companies and the magnetism of their brand names easily attract talented people, and the appeal of big business is obvious.

These giants offer world-leading working conditions and benefits, career advancement, important work for powerful clients and the chance to work overseas.

Even so, people leave these big businesses for smaller ones all the time. And the reasons they quit can provide useful ammunition for those pointy-elbowed entrepreneurs who would love to get them on board.

A few blocks back from the river in Melbourne is the office of professional services firm BlueRock, which was started as an accounting business 11 years ago by five “escapees” from corporate Australia.

Today, the firm has around 170 employees and has diversified into areas such as law, private wealth, finance and insurance. Last year, it made fourth place on the Great Place To Work list for companies with between 100 and 999 employees.

It was also a finalist in the employer of choice category of the Lawyers Weekly 2019 Law Awards.

BlueRock’s COO, Dean Godfrey, says the biggest challenge in competing with the big firms is attracting graduates and recruiting people in the first half of their careers.

“There is still some prestige in going to some of the other more structured, high profile organisations,” he says. “When people are starting out, they don’t always know what they want.”

However, he says people who have had experience working for the big firms find they enjoy life more at BlueRock. “It is about having fun while you do it, working with like-minded people and understanding that the grass isn’t greener on the other side.”

Reasonable hours

Godfrey says people who make the move to BlueRock from big “churn-and-burn” firms often talk about wanting more purpose in their lives and getting away from the long hours culture.

“It is more about getting the job done than having prescriptive rules around having to be there,” he says.

Godfrey says BlueRock tries to ensure its clients – who are mostly business owners – share its vision for a healthy workplace.

The legal division distinguishes itself by relying less on hourly billing – the traditional way lawyers’ time is charged out, but also a contributor to high stress levels in the practice of law.

Variety

One of the benefits of being in a smaller company is that employees are often given a broader range of experiences. “People in those larger firms almost cut their teeth on monotony, doing something really, really, really well,” says Godfrey.

Social purpose

BlueRock aspires to become a social enterprise and achieved B-Corp certification in 2017. This means it is legally required to consider the impact of its decisions on its workers, customers, suppliers, community and the environment.

The challenge of B-Corp is that companies have to continue to improve to maintain their accreditation.

Godfrey says people who want to leave large firms often say they want to find more meaning in their work.

“You see people who have been in those businesses looking for something different. They may like the accounting stream or law stream or finance stream, but they want to be part of something that looks after its community,” he says.

BlueRock is working on becoming carbon-neutral: it is phasing out its printers, composting waste and considering more environmentally friendly lighting. The firm is also reassessing its supply chain and the B-Corp status of its suppliers.

“We want to make sure they are putting their money where their mouth is,” he says.

BlueRock has partnered with B1G1 (Business For Good), a global giving initiative whereby every transaction made in a business “earns” a donation.

Employee ownership

Any employee of BlueRock is eligible to invest in the company and about one-third of staff have participated.

Unlike larger firms, where it is only the partners or those at senior levels who can become owners, the BlueRock founders determined that the people who work in the business should also be able to have a stake in the wealth and direction of the firm.

“It really does give you a feeling like you are a part of what we’re building,” says Godfrey.

As a firm focused on entrepreneur clients, BlueRock also encourages its employees to have their own businesses.

Fun

The funky office space, which includes a giant chessboard and a unicorn sculpture, signals the company does not want to be seen as your usual professional services firm. The website promises fun activities, healthy food options and a range of flexible work arrangements.

Managing director of BlueRock, Peter Lalor, has said people are left to decide how they do their work:

“Our philosophy is quite different: if we just let people get on with the job of working stuff out in a really smart, efficient way, they’ll get the right answer,” he said in a podcast.

“And I think that there’s a little bit of combativeness in people when they’re told they have to do something … They rebel against it. So, by having little to no structure in terms of how we do what we do, and no rules per se, people feel very empowered to get on with the job.”

This article was originally written for The Ethics Alliance.


The truth isn't in the numbers

If you want to work out what people are thinking, one thing is for sure: you can’t just go out and ask them.

The failures of political polling over recent elections have taught us that opinion surveys can no longer be trusted. If you were betting on the winner, you would have been better off putting your money on the predicted losers.

This was a $5.2 million lesson for betting company Sportsbet when it pre-emptively paid out Bill Shorten backers – two days early – based on the fact that seven out of every ten wagers supported a Labor win in May. Labor lost; the gamblers got it wrong.

And it is not just the polling and betting companies that have lost credibility as truth-telling tools. Science is having its own crisis over the quality of peer-reviewed research.

Just one sleuth, John Carlisle (an anaesthetist in the UK with time on his hands), has discovered problems in clinical research, leading to the retraction and correction of hundreds of papers because of misconduct and mistakes.

The world of commerce is no better at ensuring that decisions are backed by valid, scientific research. Too often, companies employ consultants who design feedback surveys to tell clients what they want to hear, or employers hire people based on personality questionnaires of dubious provenance.

Things are further complicated by poor survey questions, untruthful answers, failures of memory and survey fatigue (36 per cent of employees report receiving surveys regularly, three or more times per year).

Why bother?

All of this may give rise to the notion that asking people for their opinion is an utter waste of time. However, that is not the conclusion drawn by Adrian Barnett, president of the Statistical Society of Australia and professor at the Queensland University of Technology.

Barnett, who studies the value of health and medical research, says people should view all surveys with healthy scepticism, but that there is no substitute for a survey with a good, representative sample.

“I do think there is a problem, yes, but it is potentially overblown, or overstated,” he says.

“We know that, in theory, we can find out what the whole population is feeling by taking just a small sample and extrapolating up. We know it works and it’s a brilliant, cheap way of finding out all sorts of things about the country and about your customers,” he says.

However, it is getting harder to get that representative sample. As people have replaced their landlines with mobile phones, researchers can no longer rely on the telephone book to source an adequate spread of interviewees. And, even if they make contact via a mobile phone, people are now reluctant to answer calls from unknown numbers in case they are scammers, charities … or market researchers.

“(Also) on controversial topics, it can be extremely challenging to get people to talk to you,” Barnett says.

You need the right people

Reluctance to participate is one of the problems identified in political polling. In a post-election blog, private pollster Raphaella Crosby described the issue: “You can have a great, balanced, geographically distributed panel such as ours or YouGov’s – but it was very difficult to get conservatives to respond in the last three weeks.

“I presume phone pollsters had the same issue – the Coalition voters just hang up the phone, in the same way they ignored our emails. All surveys and polls are opt-in; you simply can’t make people who think their party is going to lose do a survey to say they’re voting for a loser.”

The Pew Research Center reports that response rates to telephone surveys in the US are down to 6 per cent.

The polling industry is conducting an inquiry into election polling methods, which combine calling landlines, calling mobile phones, robo-dialling and internet surveys. Each of these channels can introduce biases; beyond that, there can be errors of analysis and a tendency towards “groupthink”.

However, Barnett says the same problems do not necessarily hamper market research.

Market research does not usually require the large pool of participants (up to 1,600 is common in pre-election polls) needed to narrow the margin of error. Businesses can question a small number of customers and get a clear indication of preferences, he says.
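
To see why pre-election polls settle on samples of around that size – a back-of-envelope illustration of my own, not a figure from Barnett – note that for a simple random sample, the maximum margin of error at 95 per cent confidence is roughly one divided by the square root of the sample size:

$$\text{margin of error} \approx \frac{1}{\sqrt{n}} = \frac{1}{\sqrt{1600}} = \frac{1}{40} = 2.5\%$$

Quadrupling the sample only halves that margin, so beyond this point extra respondents buy very little extra precision.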

Identifying a representative sample of customers is also much easier than assembling a random selection of voters that represents an entire population.

Putting employees to the test

When it comes to business, the use of engagement surveys presents an interesting case. Businesses spend billions of dollars every year trying to increase employee engagement – yet little benefit shows up in the engagement survey statistics.

According to polling company Gallup, a mere 14 per cent of Australians are engaged in their work, “showing up every day with enthusiasm and the motivation to be highly productive”. This is down from 24 per cent six years earlier.

Jon Williams has 30 years of experience to back up a jaundiced view of the way employee surveys are used. Co-founder of management consultancy Fifth Frame, Williams was previously the global leader of PwC’s people and organisation practice, managing principal at Gallup in Australia, and managing director of Hewitt Associates (Aon Hewitt).

“Clearly, engaged places are better places to be and, if we are going to work on that stuff, we are going to create better workplaces. But can it actually be linked to success? Does it really drive more successful companies? I think you would struggle to really prove that.”

Williams says people fail to understand that correlation is not causation. A company may put a lot of effort into engagement and also be very successful, but that success may, in fact, stem from other factors such as its place in the market, timing, dynamic leadership or the economy.

“It is a false attribution because we love the idea of certainty and predictability,” he says.

The same desire to codify success drives the use of personality testing in recruitment – which appears to have done little to rid workplaces of bullies, psychopaths and frauds.

“Business just wants something that looks like a shiny tool with a brand name on it that they can assume, or pretend, is efficient,” he says.

“People like [the tests] because they give the appearance of rigour. Very few of those tools have any predictive reliability at all.”

Williams says the only two measures that have any correlation with job success are intelligence and conscientiousness.

Meanwhile, many organisations still ask their employees to undertake an assessment with the Myers-Briggs Type Indicator – a test that divides people into 16 personality types and is based on the work of a mother-daughter team who had no training in psychology or testing, but a devotion to the theories of Swiss psychiatrist Carl Jung.

“Myers Briggs has the same predictive validity as horoscopes. Horoscopes are great for starting a conversation about who you are as a person. Pretending it is scientific, or noble, is obviously stupid,” says Williams.

What can we do better?

Williams is not advocating that organisations stop asking employees what they think. He says, instead, that they should reassess how they regard that information.

“Don’t religiously follow one tool and think that is the source of all knowledge. Use different tools at different times. Then, don’t keep measuring the same thing for the next five years, because you’ve done it [already]. Go to a different tool, use multiple inputs. Just use all of them intelligently as interesting pieces of data.”

Barnett has his own suggestions for increasing the reliability of surveys. Postal surveys may seem rather “old school” these days, but they give researchers room to engage a little more and to establish their bona fides with people who are (justifiably) suspicious of attempts to “pick their brains”.

Acknowledging that the interviewees’ time is valuable can help elicit honest, thoughtful responses. Something as simple as including a voucher for a free coffee or a chocolate can show that you value their effort.

Adding a personal touch, such as using a real stamp on the return envelope, will also encourage participation. “It shows you spent time reaching out to them,” Barnett says.

Citizens’ juries can also help make big decisions, using a panel of people to represent customers or a population.

Infrastructure Victoria used this technique in February when it set up a 38-member community panel to consider changing the way Victorians pay for the transport network.

“We know there are problems with [citizens’ juries], but the reason we have them is that you are being judged by your peers. If you can get a representative bunch of your customers, then I think that is an interesting idea,” he says.

“You really might not like what they say, or you may be surprised by what they say – but those surprising results can sometimes be the best in a way, because it may be something you have been missing for a long time.”

Questions you should ask

The American writer Mark Twain was well aware that numbers can be contrived to back any argument.

“Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: ‘There are three kinds of lies: lies, damned lies, and statistics’,” he wrote.

The observation could be made of many of the survey results found online. Many have been produced as a marketing exercise by companies and reproduced by others who want numbers to add some weight to their arguments.

However, rather than accepting at face value survey claims that 30 per cent of the population thinks this or that, you can run some basic checks to determine whether the research is valid.

A good place to start is the Australian Press Council’s guide for editors, which says reports of previously unpublished opinion poll results should include:

  1. The name of the organisation that carried out the poll
  2. The identity of any sponsor or funder
  3. The exact wording of the questions asked
  4. A definition of the population from which the sample was drawn
  5. The sample size and method of sampling
  6. The dates when the interviews were carried out
  7. How the interviews were carried out (in person, by telephone, by mail, online, or by robocall)
  8. The margin of error

Other useful questions: where were the participants found, and are they typical of the whole population of interest?

If only a small proportion of people responded, you may suspect the survey is biased towards those who have strong feelings about the subject.

If the subjects were paid, this might affect their answers.

In the UK, a public information campaign, Ask For Evidence, encourages people to request for themselves the evidence behind news stories, marketing claims and policies.

This article was originally written for The Ethics Alliance, a corporate membership program of The Ethics Centre.


Courage isn’t about facing our fears, it’s about facing ourselves

What was your first lesson in courage? The first time someone told you to be brave – or explained to you that fear was something to be overcome, not something that should control you?

I can’t remember the very first moment I encountered the idea of courage, but the first I can remember – and the one that has stayed with me the longest – is this: “being brave doesn’t mean you go looking for trouble.”

It is, of course, from Disney’s The Lion King. Mufasa, moments after saving his son, Simba, from ravenous hyenas, chastises his boy for putting himself and his friend in danger.

Simba’s journey in The Lion King is really a treatise on courage. A reckless, thrill-seeking child, Simba loses his father and, believing himself responsible, flees from the social judgement that might follow.

Simba takes his father’s lesson too far: he doesn’t look for trouble, but when it finds him, he runs from it. He lives as an exile in a hedonistic paradise. Comfortable, apathetic, safe. For all his physical prowess, Simba is a coward.

Of course, by the end of the film Simba has found courage. He accepts his identity as the true king, admits his shame publicly, defeats Scar (his true enemy) and takes his place in the circle of life, complete with staggering musical accompaniment.

As well as having a soundtrack that’s jam-packed with bangers, it turns out The Lion King has a strong philosophical pedigree.

Courage as a virtue

The Greek philosopher Aristotle believed that courage was a virtue – a marker of moral excellence. More specifically, it was the virtue that moderated our instincts toward recklessness on one hand and cowardice on the other.

He believed the courageous person feared only things that are worthy of fear. Courage means knowing what to fear and responding appropriately to that fear.

For Aristotle, what matters isn’t just whether you face your fears, but why you face them and what it is that you fear.

There is something important here. Your reasons for overcoming fear matter. They can be the difference between courage, cowardice and recklessness.

For instance, in Homer’s Iliad, the Trojan prince Hector threatens to punish any soldier he sees fleeing from the fight. In World War I, soldiers who deserted were executed.

Were these soldiers courageous? The only reason they risked death at the hands of the enemy is that they were guaranteed to be killed if they didn’t. The fear of death is still what drove them.

More courageous, says Aristotle, is the soldier who freely chooses to fight despite having no personal reason to do so besides honour and nobility. In fact, for Aristotle, this is the highest form of courage – it faces the greatest fear (death) for the most selfless reason (the nation).

Of course, Aristotle was an Ancient Greek bloke, so we should take his prioritising of military virtue with a grain of salt. Is death at war really to be so highly prized?

For one thing, in a culture like Ancient Greece or Troy, the failure to be an excellent warrior would be met with enormous dishonour. How many soldiers went to war for fear of dishonour? Is dishonour something to be rightly feared? And if so, whose dishonour should we fear?

Surely not that of a society whose moral compass prioritises victory over justice – risking your life to support a cause like that is reckless.

If courage means fearing dishonour from those who are morally corrupt, then a courageous enemy is worse than a cowardly one. Courage becomes like a superpower – making some people into heroes and others villains.

But there’s a deeper reason to doubt Aristotle’s idea of the highest courage. Whilst most of us do fear death, it’s not clear that it’s the thing we fear most. Even if we do fear death, we have a range of different reasons for doing so.

Our deepest fears

Perhaps my most visceral fear is of drowning. The thought of it is enough to make me feel short of breath. Perhaps that’s because of an experience when I was younger: while overseas, I learned that my Dad had nearly drowned. It was perhaps the first time I really had to come to grips with the fact that he was mortal – and so was I, and so was everyone I loved.

Today, I fear death because it would mean never seeing my children grow up. Never holding them one last time. Never seeing my son’s first days at school. Never hearing my daughter’s first words.

Worse, I wouldn’t be sure that they were safe and flourishing. If someone could guarantee that, perhaps I’d be less fearful of death. It’s not the death I fear; it’s what it represents: an incomplete life, failed commitments, unending love brought to a close.

Aristotle didn’t consider courage in the face of existential anxieties like these. What does it mean to live courageously in a world where all our loves, passions and projects expose us to pain and loss? To live is to have a nerve constantly exposed to the world – always vulnerable to suffering.

The French psychoanalyst and philosopher Anne Dufourmantelle argues that risk is an inherent part of living fully in the world. Risk-free living, she argues, is not living at all. Courage means living on despite knowing that the exposed nerve of love and passion could trigger chest-tightening pain at any moment.

Yet so often we close ourselves from the world to keep ourselves safe. We self-censor not because we think we might be wrong, but because we fear upsetting the wrong person. We withdraw from relationships because we don’t want to be the one to take the leap. We tell ourselves stories in the shower of all the things we could do – could be – if only the world let us.

Existentialist philosophers have a name for this kind of self-deception: bad faith – a failure to engage with the world as it really is, and to accept ourselves as we are and as we could be.

Too often we think of courage purely as facing up to our fears. What that misses is how deeply our fears are connected to our beliefs about who we are, who we want to be, whom we love and what we wish the world was.

A courageous truth

Maybe the truth of courage is that it’s all about truth. It’s about looking reality in the face and having the force of will not to turn away, despite the pain, the unpleasantness and the risk.

Maybe it’s about looking for long enough to see the joy in the pain; the beauty in the ugliness and the comfort that lies in the little risks we take every day.

Perhaps it’s only then we can know what’s worth dying for – and what’s worth living for. Certainly, Dufourmantelle gives us some hope of this. In 2017, she died in rough seas off the coast of France, having swum out to rescue two children caught in the surf.

The children survived, but how many of us would have dived in? How many would have hoped for a lifeguard? A stronger swimmer? For the children to rescue themselves?

I want to encourage you to think: are there rough waters you’re not jumping into for fear of the waves? Do you tend to dive in without counting the costs?

 

Dr Matt Beard was the host of The Ethics of Courage, alongside Saxon Mullins and Benjamin Law at The Ethics Centre on 21 August. This is a transcript of his opening address.