LGBT...Z? The limits of ‘inclusiveness’ in the alphabet rainbow

A few years ago on Twitter, I found myself mindlessly clicking on a breadcrumb trail of ‘likes’ linked to a random post. It was under these banal circumstances that I came across a user profile with a brief but purposeful bio, one featuring the mysterious acronym ‘LGBTZ’.

The first four letters were obvious enough to me. LGBT, that bite-sized abbreviation for Lesbian, Gay, Bisexual, Transgender, has become a nearly ubiquitous rallying call for members of these historically marginalised groups and their allies. Even Donald Trump spoke this family-friendly shorthand in his convention speech (although his oddly staggered enunciation made him sound like a nervous pre-schooler tip-toeing through an especially tricky part of the alphabet). Trump also tacked on a “Q” for all those ill-defined “Queers” in the Republican audience. (A far less common iteration of this initialism includes an “I” for “Intersex” and an “A” for “Asexual.”)

But Z? The Twitter user’s profile image was a horse, and other language alluding to the fact he (or she) was an animal lover – and not of the platonic kind – brought that curious Z into sharp, squirm-worthy focus: “Zoophile”.

If you’re not familiar with the term, a zoophile is a person who is primarily sexually attracted to animals. The word ‘primarily’ is key. These aren’t just lascivious farmhands shagging goats because they can’t find willing human partners. That’s just plain bestiality.

Rather, these are people who genuinely prefer animals over members of their own species. If you hook a male zoophile’s genitals up to a plethysmograph (an extremely sensitive measure of sexual arousal), these men display stronger erectile responses to, say, images of stallions or Golden Retrievers than they do to naked human models.

I’d written about scientific research into zoophilia, along with other unusual sexualities, in my book Perv, so it wasn’t shocking to learn zoophiles have a social media presence. What’s surprising is that this maligned demographic is apparently becoming emboldened enough to pull its Z up to the acronym table.

Paedophiles have started inching their much-loathed “P” in this direction as well, albeit in veiled form with the contemporary label “MAP” (“Minor-Attracted Person”). This is especially true for the so-called virtuous paedophiles, who are seriously committed to refraining from acting on their sexual desires because they realise the harm they’d cause to children. Similarly, many zoophiles consider themselves gentle animal welfare advocates, denouncing “zoosadists” who sexually abuse animals.

In any event, it’s easy to shun the Zs and Ps and all the other unwanted sexual minorities clawing their way up the acceptance ladder, refusing them entry into our embattled LGBT territory, because we don’t want to be associated with “perverts”. We’ve overcome tremendous obstacles to be where we are today. As an American growing up during the homophobic Reagan era, never in a million years did I imagine I’d legally marry another man one day. Yet I did. At this stage, perhaps we should take a hard look in the mirror and ask whether excluding Zs and Ps and others from the current tolerance roster isn’t doing to them precisely what was once done to us.

I know what you’re thinking. There’s a huge difference, since in these sad cases we’re dealing with the most innocent, most vulnerable members of society, who also can’t give their consent. That’s very much true.

When you actually try to justify our elbowing the Zs and Ps and others of their ilk out from under the rainbow umbrella though, it’s not so straightforward. Any seemingly ironclad rationale for their exclusion is stuffed more with blind emotion than clear-sighted reason.

To begin with, one doesn’t have to be sexually active to be a member of a sexual community. After all, I identified as gay before I had gay sex, just as I imagine most heterosexual people identify as straight before losing their virginity. In principle, at least, the same would apply to morally celibate zoophiles and paedophiles, who are neither criminals nor child molesters. Desires and behaviours are two different things.

Secondly, there’s now strong evidence paraphilias (lust outside of the norm) emerge in early childhood or, in the case of paedophilia, may even be innate. One zoophile, a successful attorney, told researchers that while his friends in middle school were all trying to get their hands on their fathers’ Playboys, he was secretly coveting the latest issue of Equus magazine.

Whether Zs or Ps are “born that way” or become that way early in life, it’s certainly not a choice they’ve made. This isn’t difficult to grasp but it tends to elude popular wisdom. I don’t know about you but I couldn’t become aroused by a Clydesdale or a prepubescent child if my life depended on it. That doesn’t make me morally superior to those unlucky enough to have brains that through no fault of their own respond this way to animals and children.

Not so long ago, remember, the majority of society saw gay men like me as immoral – even evil. Not for anything they’d done but for the simple fact that, neurologically, they fancied other men rather than women. The courts would have declared me mentally ill, not happily married. Just as conversion therapy has failed miserably to turn gay people straight, paraphilias have proven immutable. Every clinical attempt to turn paedophiles into “teleiophiles” (people attracted to reproductive-aged adults) has been a major flop.

Who knows what tortuous inner lives all those closeted Zs and Ps – and other unmentionables bearing today’s cross of scorn – experience, despite being celibate. Clinical psychologists report many of their clients are suicidal because of unwanted sexual desires – and this includes teenagers with a dawning awareness they are attracted to younger people.

I think it’s patently hypocritical for the LGBT community – which has worked so hard to overcome negative stereotypes, ostracism, and unjust laws – to shut out these people for fear they would tarnish us, the more “acceptable” deviants. We’re only paying lip service to the concept of inclusiveness when we so publicly distance ourselves from those who need this communal protection the most.
In fact, LGB people arguably share more in common with the Zs and Ps than they do with the Ts, since being transgender isn’t about who (or what) you’re sexually attracted to, but the gender you identify with. Unlike those representing the other letters in this character soup, trans people say their sexuality plays no role in it at all. Why then are Ts included while other, more unspeakable, sexual minorities aren’t?

Here’s my point then. It’s an uncomfortable conversation to have but there’s no science or logic to why “LGBT” contains the particular letters it does. Instead, it’s an evolving social code. So, is the filter that shapes this code just another moralistic lens that casts some human beings as inherently inferior and worthier of shame than others? And if this is so, who gets to control this filter and why?


5 dangerous ideas: Talking dirty politics, disruptive behaviour and death

The Ethics Centre was the founding partner of the Festival of Dangerous Ideas back in 2009. We’re thrilled that the festival continues with a program full of world-leading thinkers.

Here are five ideas that were pondered, dissected and debated over the big weekend in 2016. We talked dirty politics, disruptive behaviour, disappearing countries and death.

  1. Dirty politics

In 2016, the festival put dirty old politics in the spotlight. Australia’s federal parliament had just resumed session with a bunch of independent and minority party representatives, the United States was still trying to make sense of Donald Trump and across the globe nations were trying to unpack exactly what ‘extremism’ was and how to deal with it.

“If our goal is to seek a deeper understanding of the world, our lack of moral diversity is going to make it harder.”

American psychologist Jonathan Haidt’s TED talk explores the moral values underpinning liberals and conservatives. Instead of looking at politics as a battleground of ‘right vs wrong’, Haidt encourages us to see political differences as being based in different moral values.

  2. Disruptive behaviour

You can’t make an omelette without breaking a few eggs, right? For the disruptors of the world, improvement comes at a price – we need to break eggs, challenge convention, and occasionally hurt people’s feelings.

On the other side of the Pacific, the #BlackLivesMatter movement was upsetting middle-class, white Americans in 2016 by calling attention to continued racial disparities in the US.

Check out philosopher George Yancy’s open letter, ‘Dear White America’ to learn about the intellectual basis for the movement. In the letter, Yancy makes a simple but confronting point to his fellow white Americans – if you’re white, no matter how well intentioned you are, you’re probably racist. He wrote:

“If you are white, and you are reading this letter, I ask that you don’t run to seek shelter from your own racism. Don’t hide from your responsibility. Rather, begin, right now, to practice being vulnerable. Being neither a ‘good’ white person nor a liberal white person will get you off the proverbial hook.”

Yancy’s essay prompted exactly the response he expected – anger. So much so the American Philosophical Association issued a letter of support. You can read Yancy’s thoughts on the backlash he copped here.

Australians reading or hearing about the Black Lives Matter movement might also want to read into the history of Aboriginal deaths in custody.

  3. Disappearing countries

Conflict, politics and geography drive some nations and communities to the brink while others flourish. What are the unseen consequences of major global trends?

The Right to be Cold asks whether the world’s failure to address climate change is a human rights violation against Inuit peoples whose way of life is being eradicated along with the melting ice.

To get a sense of what’s going on for these remote communities, check out photographer Ciril Jazbec’s series documenting climate change and its impact on Greenlanders.

“It was April and the ice was starting to melt, which was highly unusual. Usually the ice would stay out until June.”

  4. Dealing in death

If evolution hardwires in us the drive to survive, how is it humans are able to overcome their biological imperative and take their own lives? There’s still a stigma that suicide is a ‘selfish choice’, but evolutionary biologist Jesse Bering explores the science behind suicide.

“Human suicide is an adaptive behavioural strategy that becomes increasingly likely to occur whenever there is a perfect storm of social, ecological, developmental and biological variables factoring into the evolutionary equation.”

For the scientifically minded, Bering’s essay in Scientific American is a must-read. If you’ve never donned a white lab coat, you might be more inclined to listen to the Freakonomics podcast ‘The Suicide Paradox’.

  5. Dangerous ideas

While every Festival of Dangerous Ideas has specific themes, the main goal has always been to create a safe space for open discussion of ideas some people would consider dangerous.

It’s a skill that seems to be in growing demand, so before you listen, read, think or tweet, check out what festival co-founder Simon Longstaff writes on why conversations matter.


Ethics Explainer: Social Contract

Social contract theories see the relationship of power between state and citizen as a consensual exchange. Power is legitimate only if it is given freely to the state by its citizens, which explains why the state has duties to its citizens and vice versa.

Although the idea of a social contract goes as far back as Epicurus and Socrates, it gained popularity during The Enlightenment thanks to Thomas Hobbes, John Locke and Jean-Jacques Rousseau. Today the most popular example of social contract theory comes from John Rawls.

The social contract begins with the idea of a state of nature – the way human beings would exist in the world if they weren’t part of a society. Philosopher Thomas Hobbes believed that because people are fundamentally selfish, life in the state of nature would be “nasty, brutish and short”. The powerful would impose their will on the weak and nobody could feel certain their natural rights to life and freedom would be respected. 

Hobbes believed no person in the state of nature was so strong they could be free from fear of another person and no person was so weak they could not present a threat. Because of this, he suggested it would make sense for everyone to submit to a common set of rules and surrender some of their rights to create an all-powerful state that could guarantee and protect every person’s rights. Hobbes called it the ‘Leviathan’.

It’s called a contract because it involves an exchange of services. Citizens surrender some of their personal power and liberty. In return the state provides security and the guarantee that civil liberty will be protected. 

Crucially, social contract theorists insist the entire legitimacy of a government is based in this reciprocal social contract. A government is legitimate only because the people willingly hand power to it. Locke called this popular sovereignty.

Unlike Hobbes, Locke thought the focus on consent and individual rights meant if a group of people didn’t agree with significant decisions of a ruling government then they should be allowed to join together to form a different social contract and create a different government. 

Not every social contract theorist agrees on this point. Philosophers have different ideas on whether the social contract is real, or if it’s a fictional way to describe the relationship between citizens and their government. 

If the social contract is a real contract – just like your employment contract – people could be free not to accept the terms. If a person didn’t agree they should give some of their income to the state they should be able to decline to pay tax and as a result, opt out of state-funded hospitals, education, and all the rest. 

Like other contracts, withdrawing comes with penalties – so citizens who decide to stop paying taxes may still be subject to punishment. 

Other problems arise when the social contract is looked at through a feminist perspective. Historically, social contract theories, like the ones proposed by Hobbes and Locke, say that (legitimate) state authority comes from the consent of free and equal citizens. 

Philosophers like Carole Pateman challenge this idea by noting that it fails to deal with the foundation of male domination that these theories rest on.  

For Pateman the myth of autonomous, free and equal individual citizens is just that: a myth. It obscures the reality of the systemic subordination of women and others.  

In Pateman’s words the social contract is first and foremost a ‘sexual contract’ that keeps women in a subordinate role. The structural subordination of women that props up the classic social contract theory is inherently unjust. 

The inherent injustice of social contract theory is further highlighted by those critics that believe individual citizens are forced to opt in to the social contract. Instead of being given a choice, they are just lumped together in a political system which they, as individuals, have little chance to control.  

Of course, the idea of individuals choosing not to opt in or out is pretty impractical – imagine trying to stop someone from using roads or footpaths because they didn’t pay tax.  

To address the inherent inequity in some forms of social contract theory, John Rawls proposes a hypothetical social contract based on fundamental principles of justice. The principles are designed to provide a clear rationale to guide people in choosing to willingly surrender some individual freedoms in exchange for having some rights protected. How should those principles be chosen? Rawls’ answer is a thought experiment he calls the veil of ignorance.

By imagining we are behind a veil of ignorance, with no knowledge of our own personal circumstances, we can better judge what is fair for all. If we do so with a principle in place that strives for liberty for all at no one else’s expense, along with a difference principle that guarantees equal opportunity for all, we would, as a society, have a more just foundation for individuals to agree to a contract in which some liberties are willingly forgone.

Out of Rawls’ focus on fairness within social contract theory comes more feminist approaches, like that of Jean Hampton. In addition to criticising Hobbes’ theory, Hampton offers another feminist perspective that focuses on extending the effects of the social contract to interpersonal relationships. 

In established states, it can be easy to forget the social contract involves the provision of protection in exchange for us surrendering some freedoms. People can grow accustomed to having their rights protected and forget about the liberty they are required to surrender in exchange.  

Whether you think the contract is real or just a useful metaphor, social contract theory offers many unique insights into the way citizens interact with government and each other.


Why hard conversations matter

There are times in the history of a nation when its character is tested and defined. Too often it happens with war, natural disasters or economic collapse. Then the shouting gets our attention.

But there are also our quieter moments – the ones that reveal solid truths about who we are and what we stand for.

How should we recognise Indigenous Australians? Can our economy be repaired in a manner that is even-handed? How will we choose if forced to decide between China and the United States? How do we create safe ways for people seeking asylum? Can we grow our economy and protect our people and environments? These are just some of the questions we face.

And here’s another question. Do we have the capacity to talk about these things without tearing ourselves and each other apart?

There are some safe places for open conversation about difficult questions. Thirty years ago I began work at The Ethics Centre, a not-for-profit dedicated to creating them. The Festival of Dangerous Ideas now enters its 11th year with a new digital format to cater to our current times, bringing leading thinkers from around the world together to discuss important issues.

Sadly, there is a growing fragility across Australian society. The demand for ideological purity (you’re completely ‘with us’ or ‘against us’) puts us at risk of a fractured and stuffy world of absolutes.

Too often, I see conversations shut down before they have even begun. People with a contrary point of view are faced with outrage, shouted down or silenced by others driven by the certainty of righteous indignation. In such a world, there is no nuance, no seeking to understand the grey areas or subtleties of argument.

This phenomenon crosses the political spectrum – embracing conservatives and progressives alike. In my opinion, it is the product of a self-fulfilling fear that our society’s ethical skin is too thin to survive the prick of controversy and debate. This is a poisonous belief that drains the life from a liberal democracy.

Fortunately, the antidote is easily at hand. In essence we need to spend less time trying to change other people’s minds and more time trying to understand their point of view. We do that by taking them entirely seriously.

Why make this change? Because attempts to prove to people that they are wrong just lead to stalemate. Barricades go up and each side lobs verbal grenades. There is another way. We could allow people to work out what the boundaries are for their own beliefs.

Working out the lines we cannot cross is often the first step towards others, but it can only happen when people feel safe. Giving people the space to fall on just the right side of such lines can make a world of difference.

So I wonder, might we pause for a moment, climb down from our battle stations and call a ceasefire in the wars of ideas? Might we recognise the person on the other side of an issue may not be unprincipled? Perhaps they’re just differently principled.

Can we see in the face of our ideological opponent another person of goodwill? What then might we discover about each other; what unites and, yes, what divides? What then might we understand about the issues that will define us as a people?

Let’s rediscover the art of difficult discussions in which success is measured in the combination of passion and respect. Let’s banish the bullies – even those who claim to be well-intentioned. They, alone, have no place in the conversations we now need to have.


Ethics Explainer: Eudaimonia

The closest English word for the Ancient Greek term eudaimonia is probably “flourishing”.

The philosopher Aristotle used it as a broad concept to describe the highest good humans could strive toward – or a life ‘well lived’.

Though scholars translated eudaimonia as ‘happiness’ for many years, there are clear differences. For Aristotle, eudaimonia was achieved through living virtuously – or what you might describe as being good. This doesn’t guarantee ‘happiness’ in the modern sense of the word. In fact, it might mean doing something that makes us unhappy, like telling an upsetting truth to a friend.

Virtue is moral excellence. In practice, it allows something to act in harmony with its purpose. As an example, take a virtuous carpenter: in their trade, virtue would be excellence of artistic eye, steadiness of hand, patience, creativity, and so on.

The eudaimon [yu-day-mon] carpenter is one who possesses and practices the virtues of their trade.

By extension, the eudaimon life is one dedicated to developing the excellences of being human. For Aristotle, this meant practicing virtues like courage, wisdom, good humour, moderation, kindness, and more.

Today, when we think about a flourishing person, virtue doesn’t always spring to mind. Instead, we think about someone who is relatively successful, healthy, and with access to a range of the good things in life. We tend to think flourishing equals good qualities plus good fortune.

This isn’t far from what Aristotle himself thought. Although he did believe the virtuous life was the eudaimon life, he argued our ability to practice the virtues was dependent on other things falling in our favour.

For instance, Aristotle thought philosophical contemplation was an intellectual virtue – but to have the time necessary for contemplation you would need to be wealthy. Wealth (as we all know) is not always a product of virtue.

Some of Aristotle’s conclusions seem distasteful by today’s standards. He believed ugliness was a hindrance to developing practical social virtues like friendship (because nobody would be friends with an ugly person).

However, there is something intuitive in the observation that the same person, transformed into the embodiment of social standards of beauty, would – everything else being equal – have more opportunities available to them.

In recognising our ability to practice virtue might be somewhat outside our control, Aristotle acknowledges our flourishing is vulnerable to misfortune. The things that happen to us can not only hurt us temporarily, but they can put us in a condition where our flourishing – the highest possible good we can achieve – is irrevocably damaged.

For ethics, this is important for three reasons.

First, when we’re thinking about the consequences of an action, we should take into account its impact on the flourishing of others. Second, it suggests we should do our best to eliminate as many barriers to flourishing as we possibly can. And third, it reminds us that living virtuously needs to be its own reward. It is no guarantee of success, happiness or flourishing – but it is still a central part of what gives our lives meaning.


Don’t throw the birth plan out with the birth water!

Just try mentioning ‘birth plans’ at a party and see what happens.

Hannah Dahlen – a midwife’s perspective

Mia Freedman once wrote about a woman who asked what her plan was for her placenta. Freedman felt birth plans were “most useful when you set them on fire and use them to toast marshmallows”. She labelled people who make these plans “birthzillas”, more interested in the birth than in having a baby.

In response, Tara Moss argued:

The majority of Australian women choose to birth in hospital and all hospitals do not have the same protocols. It is easy to imagine they would, but they don’t, not from state to state and not even from hospital to hospital in the same city. Even individual health practitioners in the same facility sometimes do not follow the same protocols.

The debate

Why the controversy over a woman and her partner writing down what they would like to have done or not done during their birth?  The debate seems not to be about the birth plan itself, but the issue of women taking control and ownership of their births and what happens to their bodies.

Some oppose birth plans on the basis that all experts should be trusted to have the best interests of both mother and baby in mind at all times. Others trust the mother as the person most concerned for her baby and believe women have the right to determine what happens to their bodies during this intimate, individual, and significant life event.

As a midwife of some 26 years, I wish we didn’t need birth plans. I wish our maternity system provided women with continuity of care so by the time a woman gave birth her care provider would fully know and support her well-informed wishes. Unfortunately, most women do not have access to continuity of care. They deal with shift changes, colliding philosophical frameworks, busy maternity units, and varying levels of skill and commitment from staff.

There are so many examples of interventions that are routine in maternity care but lack evidence they are necessary or are outright harmful. These include immediate clamping and cutting of the umbilical cord at birth, episiotomy, continuous electronic foetal monitoring, labouring or giving birth lying down and unnecessary caesareans. Other deeply personal choices such as the use of immersion in water for labour and birth or having a natural birth of the placenta are often not presented as options, or are refused when requested.

The birth plan is a chance to raise and discuss your wishes with your healthcare provider. It provides insight into areas of further conversation before labour begins.

I once had a woman make three birth plans when she found out her baby was in a breech presentation at 36 weeks: one for a vaginal breech birth, one for a caesarean, and one for a normal birth if the baby turned. The baby turned and the first two plans were ditched. But she had been through each scenario and carved out what was important for her.

Bashi Hazard – a legal perspective

Birth plans were introduced in the 1980s by childbirth educators to help women shape their preferences in labour and to communicate with their care providers. Women say preparing birth plans increases their knowledge and ability to make informed choices, empowers them, and promotes their sense of safety during childbirth. Yet some women (including in Australia) report that their carefully laid plans are dismissed, overlooked, or ignored.

There appears to be some confusion about the legal status or standing of birth plans – confusion that is reflective of neither international human rights principles nor domestic law. The right to informed consent is a fundamental principle of medical ethics and human rights law and is particularly relevant to the provision of medical treatment. In addition, our common law starts from the premise that every human body is inviolate and cannot be subject to medical treatment without autonomous, informed consent.

Pregnant women are no exception to this human rights principle nor to the common law.

If you start from this legal and human rights premise, the authoritative status of a birth plan is very clear. It is the closest expression of informed consent that a woman can offer her caregiver prior to commencing labour. This is not to say she cannot change her mind but it is the premise from which treatment during labour or birth should begin.

Once you accept that a woman has the right to stipulate the terms of her treatment, the focus turns to any hostility and pushback from care providers to the preferences a woman has the right to assert in relation to her care.

Care providers who understand the significance of the human and legal right to informed consent begin discussing a woman’s options in labour and birth with her as early as the first antenatal visit. These discussions are used to advise, inform, and obtain an understanding of the woman’s preferences in the event of various contingencies. They build the trust needed to allow the care provider to safely and respectfully support the woman through pregnancy and childbirth. Such discussions are the cornerstone of woman-centred maternity healthcare.

Human Rights in Childbirth

Reports received by Human Rights in Childbirth indicate that care provider pushback and hostility towards birth plans occurs most in facilities with fragmented care or where policies are elevated over women’s individual needs. Mothers report their birth plans are criticised or outright rejected on the basis that birth is “unpredictable”. There is no logic in this. If anything, greater planning would facilitate smoother outcomes in the event of unanticipated eventualities.

In truth, it is not the case that these care providers don’t have a birth plan. There is a birth plan – one driven purely by care providers and hospital protocols without discussion with the woman. This offends the legal and human rights of the woman concerned and has been identified as a systemic form of abuse and disrespect in childbirth, and as a subset of violence against women.

It is essential that women discuss and develop a birth plan with their care providers from the very first appointment. This is a precious opportunity to ascertain your care provider’s commitment to recognising and supporting your individual and diverse needs.

Gauge your care provider’s attitude to your questions as well as their responses. Expect to repeat those discussions until you are confident that your preferences will be supported. Be wary of care providers who are dismissive, vague or non-responsive. Most importantly, switch care providers if you have any concerns. The law is on your side. Use it.

Making a birth plan – some practical tips

  1. Talk it through with your lead care provider. They can discuss your plans and make sure you understand the implications of your choices.
  2. Make sure your support network know your plan so they can communicate your wishes.
  3. Attending antenatal classes will help you feel more informed. You’ll discover what is available and the evidence behind your different options.
  4. Talk to other women about what has worked well for them, but remember your needs might be different.
  5. Remember you can change your mind at any point in the labour and birth. What you say is final, regardless of what the plan says.
  6. Try not to be adversarial in your language – you want people working with you, not against you. End the plan with something like “Thank you so much for helping make our birth special”.
  7. Stick to the important stuff.

Some tips on the specific content of your birth plan are available here.


The “good enough” ethical setting for self-driving cars

Plenty of electronic ink has been spilled over the benefits self-driving cars offer. We have good reason to believe they could greatly reduce the number of fatalities from car accidents – studies suggest upwards of 90 percent of road accidents are caused by driver error.

Avoiding a crash altogether is clearly the best option, but even in crash scenarios some believe autonomous cars might be preferable. Facing a “no win” situation, a driverless car may have the opportunity to “optimise” the crash by minimising harm to those involved. However, choices about how to direct or distribute harm in these cases (for example, hit that person instead of the other) are ethically fraught and demand extraordinary scrutiny of a number of distinctly philosophical issues.

Can we be punished for inaction?

It would be unfair to expect car manufacturers to program their products to ‘crash ethically’ when the outcomes might get them into legal trouble. The law typically favours not directly committing harm, even when inaction allows greater harm to occur. This means there might be difficulties in developing algorithms that simply minimise harm.

Given this, the law might condemn an autonomous car that steered away from five people and into one person in order to minimise the harm resulting from an accident. A judge might argue that the car steered into someone and so it did harm. The alternative, merely running over five people, results in more harm, but at least the car did not aim at any one of them.

But is inaction in this case morally justified if it leads to more harm? Philosophers have long disputed this distinction between doing harm and merely allowing harm to occur. It is the basis for perhaps the most famous philosophical thought experiment – the trolley problem.

Some philosophers argue that we can still be held responsible for inaction because not doing something still involves making a decision. For example, a doctor may kill her patient by withholding treatment, or a diplomat may offend a foreign dignitary by not shaking her hand. If algorithms that minimise harm are problematic because of a legal preference for inaction over the active causing of harm, there might be reason to ask the law to change.

Should we always try to minimise harm?

Even if we were to assume autonomous cars should minimise the total amount of harm that comes about from an accident, there are complex issues to resolve. Should cars try to minimise the total number of people harmed? Or minimise the kinds of harms that come about?

For example, if a car must choose between hitting one person head-on (a high risk accident) and steering off the road, endangering several others to a less serious injury, which is preferable? Moral philosophers will disagree about which of these options is better.

Another complication arises when we consider that harm minimisation might require an autonomous car to allow its own passengers to be injured or even killed in cases where inaction wouldn’t have brought them to harm. Few consumers would buy a car they expected to behave this way, even if they would prefer everyone else’s car did.

Are people breaking the law more deserving of harm?

Minimising overall harm might in some cases lead to consequences many would find absurd. Imagine a driver who decided to play ‘chicken’ with an autonomous car – driving on the wrong side of the road and threatening to plough head-long into it. Should the passengers in the autonomous car be put at risk to try to avoid a crash that is only occurring because the other driver is breaking the law?

Perhaps self-driving cars need something like ‘legality-adjusted aggregate harm minimisation’ algorithms. Given the widely held beliefs that people breaking the law are liable to greater harm and deserve a greater share of any harm, and that it would be unjust to require law-abiding citizens to share in the harm equally, self-driving cars will need to reflect these values if they are to be commercially viable.

But this approach also faces problems. Engineers would need a reliable way to predict crash trajectories in a way that provided information about the severity of harms, which they aren’t yet able to do. Philosophers would also need a reliable way to assign weighted values to harms, for example, by assigning values to minor versus major injuries. And as a society we would need to determine how liable to harm someone becomes by breaking the law. For example, someone exceeding the speed limit by a small amount may not be as liable to harm as someone playing ‘chicken’.
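
To make the shape of such an algorithm concrete, here is a minimal sketch in Python of what a ‘legality-adjusted aggregate harm minimisation’ rule could look like. It is purely illustrative: the manoeuvres, expected-harm figures and liability discount are invented assumptions, not anyone’s actual crash model or product.

```python
# Illustrative sketch only: a toy 'legality-adjusted aggregate harm minimisation'
# rule. All manoeuvres, harm estimates and the liability discount are hypothetical.

from dataclasses import dataclass

@dataclass
class AffectedPerson:
    expected_harm: float   # rough 0.0 (unharmed) to 1.0 (fatal), from some crash model
    law_abiding: bool      # was this person obeying the road rules?

# One possible way to encode the 'liability' intuition: harm to someone breaking
# the law counts for less when comparing options. The value is an assumption.
LIABILITY_DISCOUNT = 0.5

def adjusted_harm(person: AffectedPerson) -> float:
    weight = 1.0 if person.law_abiding else LIABILITY_DISCOUNT
    return weight * person.expected_harm

def choose_manoeuvre(options: dict) -> str:
    """Pick the manoeuvre with the lowest total legality-adjusted harm."""
    return min(options, key=lambda name: sum(adjusted_harm(p) for p in options[name]))

# Hypothetical 'chicken' scenario: braking in lane leaves most of the risk with
# the law-breaking oncoming driver; swerving endangers only law-abiding people.
options = {
    "brake_in_lane": [
        AffectedPerson(expected_harm=0.7, law_abiding=False),  # oncoming driver
        AffectedPerson(expected_harm=0.2, law_abiding=True),   # own passenger
    ],
    "swerve_off_road": [
        AffectedPerson(expected_harm=0.6, law_abiding=True),   # own passenger
        AffectedPerson(expected_harm=0.5, law_abiding=True),   # pedestrian
    ],
}

print(choose_manoeuvre(options))  # -> "brake_in_lane" under these toy numbers
```

Even in this toy form, the open problems above are visible: the expected-harm numbers presuppose a crash-trajectory model that does not yet exist, and the liability discount is exactly the value society would have to agree on.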

None of these issues are easy and seeking sure-fire answers every stakeholder agrees to is likely impossible. Instead, perhaps we should seek overlapping consensus – narrowing down the domain of possible algorithms to those that are technically feasible, morally justified and legally defensible. Every proposal for autonomous car ethics is likely to generate some counterintuitive verdicts but ongoing engagement between various parties should continue in the hopes of finding a set of all-around acceptable algorithms.


Ethics Explainer: Just War Theory

Just war theory is an ethical framework used to determine when it is permissible to go to war. It originated with Catholic moral theologians like Augustine of Hippo and Thomas Aquinas, though it has had a variety of different forms over time.

Today, just war theory is divided into three categories, each with its own set of ethical principles. The categories are jus ad bellum, jus in bello, and jus post bellum. These Latin terms translate roughly as ‘justice towards war’, ‘justice in war’, and ‘justice after war’.

Jus ad bellum

When political leaders are trying to decide whether to go to war or not, just war theory requires them to test their decision by applying several principles:

  • Is it for a just cause?

This requires war only be used in response to serious wrongs. The most common example of just cause is self-defence, though coming to the defence of another innocent nation is also seen as a just cause by many (and perhaps the highest cause).

  • Is it with the right intention?

This requires that war-time political leaders be solely motivated, at a personal level, by reasons that make a war just. For example, even if war is waged in defence of another innocent country, leaders cannot resort to war because it will assist their re-election campaign.

  • Is it from a legitimate authority?

This demands war only be declared by leaders of a recognised political community and in accordance with the political requirements of that community.

  • Does it have due proportionality?

This requires us to imagine what the world would look like if we either did or didn’t go to war. For a war to be ‘just’, the quality of the peace resulting from war needs to be superior to what would have happened if no war had been fought. This also requires we have some probability of success in going to war – otherwise people will suffer and die needlessly.

  • Is it the last resort?

This says we should explore all other reasonable options before going to war – negotiation, diplomacy, economic sanctions and so on.

Even if the principles of jus ad bellum are met, there are still ways a war can be unjust.

Jus in bello

These are the ethical principles that govern the way combatants conduct themselves in the ‘theatre of war’.

  • Discrimination requires combatants only to attack legitimate targets. Civilians, medics and aid workers, for example, cannot be the deliberate targets of military attack. However, according to the principle of double effect, military attacks that kill some civilians as a side-effect may be permissible if they are both necessary and proportionate.
  • Proportionality applies to both jus ad bellum and jus in bello. Jus in bello requires that in a particular operation, combatants do not use force or cause harm that exceeds strategic or ethical benefits. The general idea is that you should use the minimum amount of force necessary to achieve legitimate military aims and objectives.
  • No intrinsically unethical means is a debated principle in just war theory. Some theorists believe there are actions which are always unjustified, whether or not they are used against enemy combatants or are proportionate to our goals. Torture, shooting to maim and biological weapons are commonly-used examples.
  • ‘Following orders’ is not a defence as the war crime tribunals after the Second World War clearly established. Military personnel may not be legally or ethically excused for following illegal or unethical orders. Every person bearing arms is responsible for their conduct – not just their commanders.

Jus post bellum

Once a war is completed, steps are necessary to transition from a state of war to a state of peace. Jus post bellum is a new area of just war theory aimed at identifying principles for this period. Some of the principles that have been suggested (though there isn’t much consensus yet) are:

  • Status quo ante bellum, a Latin term meaning ‘the way things were before war’ – basically rights, property and borders should be restored to how they were before war broke out. Some suggest this is a problem because those can be the exact conditions which led to war in the first place.
  • Punishment for war crimes is a crucial step to re-installing a just system of governance. From political leaders down to combatants, any serious offences on either side of the conflict need to be brought to justice.
  • Compensation of victims suggests that, as much as possible, the innocent victims of conflict be compensated for their losses (though some of the harms of war will be almost impossible to adequately compensate, such as the loss of family members).
  • Peace treaties need to be fair and just to all parties, including those who are responsible for the war occurring.

Just war theory provides the basis for exercising ‘ethical restraint’ in war. Without restraint, argues philosopher Michael Ignatieff, there is no way to tell the difference between a ‘warrior’ and a ‘barbarian’.


Only love deserves loyalty, not countries or ideologies

We exist via our social interactions with others. Without these connections embedding us in the social world we have no identity, existence or meaning. These ongoing interactions, made easier through habit and accepted norms, define us.

Central among these habits and norms is the idea of loyalty. Our loyalties provide a guide to what we might expect in an otherwise uncertain world. It is about our expectations of future action – my loyalty is based on the feeling that you will return it in future.

Being loyal is not obligatory, but when you breach someone’s loyalty you lose part of yourself. That’s the inescapable cost of disloyalty, whether minor – a friendship lost – or extreme – the firing squad for treason. That is the challenge with loyalty – there is always a chance to be disloyal. If that opportunity didn’t exist, loyalty wouldn’t make sense at all.

Nobody deserves our unconditional loyalty. However, they do deserve a shared sense of reciprocity based on past actions and hopes for the future. If people are loyal to us, they expect we will return that loyalty at some point in the future. And this is where the problem begins.

The future is unknowable. Our loyalty may be called on in a variety of circumstances, from the mundane to the deeply difficult. A brother asks you to lie – how far are you willing to go to maintain that familial loyalty? What has he done to require you to lie? Is it the socially lubricating ‘white lie’ – “please tell Mum I wasn’t late”, or the far more serious – “please tell the police we were together all evening”.

What’s crucial here is an assessment of the act that generated the need to lie. What is the social expectation regarding the behaviour? Is it acceptable? Is it the sort of behaviour other people would overlook in favour of loyalty?

We tend to over-invest in those loyalties that are key to our social milieu. It can be hard to imagine a self without certain relationships and so we tend to hold to them more strongly. Loyalty to family, friends, sports team, social activity – without them we lose parts of our self.

Because these relationships form part of who we are, there is a cost to disloyalty, even when it is the right thing to do. The experience of whistleblowers reveals the loss of identity that can come from breaching loyalty. While whistleblowing is often the ethical choice, the individuals tend to be shunned, excluded, exposed, attacked and betrayed.

Loyalty can be vexing when it is demanded rather than given freely. Nation-states demand loyalty from their citizens to the point of self-sacrifice, especially in times of war. We also see a demand for loyalty attached to ideologies and beliefs. For example, McCarthyism in the US in the 1950s saw aggressive enforcement of compulsory ideological loyalty to one political system over another. Do we as citizens have an obligation to be loyal to our country of birth?

If our country provides the basics of existence, security of self, food, shelter and the conditions to live a just life, then don’t we owe a debt of loyalty? No, we don’t. Loyalty to abstractions shouldn’t be demanded. In fact I think we should avoid such commitments because they ask us to sacrifice real loyalties to people – family, friends and community.

The nation doesn’t care for you or me as an individual, that’s the job of our interpersonal connections. It’s also what makes them more important to us. However, a defender of nationalism might point to the threat a conqueror poses to the individual and family. They might argue that you have an obligation to be conscripted to defend your nation as a way to protect the people you are loyal to, but this is different to being loyal to the state itself.

Our loyalties foster connection, provide us with a map of social obligations and help alleviate the threat of an unknowable future. But to call upon them is fraught with risk – we might be betrayed, exploited or the future may change in ways we don’t anticipate. But if there were no risk, there would be little value to being loyal at all, would there?


Learning risk management from Harambe

Traditional and social media channels were flooded with the story of Harambe, a 17-year-old western lowland silverback gorilla shot dead at Cincinnati Zoo on 28 May 2016 after a four-year-old boy crawled through a barrier and fell into his enclosure.

With the benefit of hindsight, forming an opinion is easy. There are already plenty being thrown around regarding the incident and who was to blame for the tragedy.

The need to kill Harambe is exceptionally depressing: a gorilla lost his life, the zoo lost a real asset, a mother was at risk of losing her child, and staff tasked by the zoo to shoot Harambe faced emotional trauma based on the bond they likely formed with him.

Overall, it was a bad state of affairs. Though the case gives rise to a number of ethical issues, one way to consider it is as a risk management issue – where it presents us with some important lessons that might prevent similar circumstances from happening in the future, and ensure they are better managed if they do.

In technical risk management terms, the zoo seems to have been in line with best practice.

According to Cincinnati Zoo’s annual report, 1.5 million people visited the park in 2014-15. Included in those numbers are hundreds of parents who visited the zoo with children who didn’t end up in any of the animal enclosures.

According to WLWT-TV, this was the first breach at the zoo since its opening in 1978. There is no doubt the zoo identified this risk and managed it with secure (until now) enclosures. There is also little doubt relevant signage and duty of care reminders would have been placed around the zoo. Staff would assume parents would manage their children and keep them safe. In the eyes of most risk management experts, this would seem to be more than sufficient.

However, as we have seen in several cases (including Cecil the Lion), it is often not the incident itself that brings the massive negative consequences but rather the social media response, because the internet provides a platform for everyone to be judge and jury.

This flags the massive shift required in the way we manage risk. If we look only at the financial losses to the zoo, their decision may seem logical and rational. They were truly put in a no-win position – an immediate tactical decision was required once Harambe began dragging the child around the water.

Imagine if they had decided to tranquillise Harambe and the four-year-old boy had died while they were waiting for the tranquillisers to take effect – what would the impact have been?

The true lesson regarding this issue lies in the need for organisations to put energy and effort into so-called ‘black swan’ events – ones that are unlikely but have immense consequences if they do occur. These events are often overlooked, based only on the fact they are unlikely, leaving organisations unprepared for when they do.

Traditional risk management approaches allocate scores to risks and then direct resources to the highest-ranking issues. In this case, a risk that was deemed managed actually occurred, and the result was very negative.
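
As a rough illustration of that traditional approach (not the zoo’s actual methodology), the sketch below scores some hypothetical risks on a likelihood-times-consequence matrix. Note how a rare-but-catastrophic event like an enclosure breach ends up ranked no higher than routine issues – exactly the ‘black swan’ blind spot described above.

```python
# Toy example of a traditional risk register: score = likelihood x consequence.
# All risks and ratings below are invented for illustration only.

risks = {
    # name: (likelihood 1-5, consequence 1-5)
    "visitor slips on wet path": (4, 2),
    "animal falls ill":          (3, 3),
    "child enters enclosure":    (1, 5),   # the 'black swan': rare but severe
}

# Rank risks by score, highest first -- resources typically follow this ranking.
ranked = sorted(risks.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)

for name, (likelihood, consequence) in ranked:
    print(f"{name}: score {likelihood * consequence}")

# The enclosure breach ranks last (score 5), below routine issues, so a purely
# score-driven plan would direct the least attention and resources to it.
```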

Whether negligent or not, various social media commentators have held the mother accountable. It seems she has been held to account not for what she did, but for the apparently unapologetic and callous way she responded to the killing of Harambe.

This shows us risk management needs to consider the human element in a way we previously haven’t. The ethics of what is right and wrong tend to blur when the masses have a platform to pass judgement. There are many lessons to be taken from this incident, including the following considerations:

  1. Risk management and duty of care should be incorporated in a more cohesive manner, focusing on applying a BDA approach (Before, During and After).
  2. Social media backlash adds a new dimension to the way organisations should make, report and defend their decisions.
  3. Individuals can no longer simply shift blame onto the organisation they believe responsible on the grounds of negligence or breach of duty of care. Even if an individual shifts blame onto the organisation entirely and is not held to account by the law, they will be held to account by the general public.
  4. We have entered an era where system- and process-based risk management needs to integrate human and emotive elements to account for emotional responses.
  5. Lastly – and unrelatedly – the question of why one story attracts massive public outcry and why another doesn’t raises ethical questions regarding the way we consume news, the way the media reports it, and the upsides and downsides of social media.

In short, this is another case of how much work we still have to do – especially in the modern internet age – to proactively and ethically manage risk.