James C. Hathaway on the refugee convention

To mark the 2017 anniversary of the UN’s World Refugee Day and Australia’s Refugee Week, we spoke to Professor James Hathaway, the world’s leading expert on refugee law.

Watch this video for his in-depth analysis of the Refugee Convention, an agreement made in 1951 in response to World War II. Is it still relevant? What needs to change? He doesn’t hold back!

James’ work is used by courts all over the world when interpreting the Refugee Convention and applying it to decisions. His many publications include the seminal book co-authored with Australia’s Michelle Foster, The Law of Refugee Status.

Follow James Hathaway on Twitter here: @JC_Hathaway

Join the conversation

Is there merit in the refugee convention?


Ethics Explainer: Rights and Responsibilities

When you have a right to do or not do something, you are entitled to do it – or to refrain from doing it.

Rights are always about relationships. If you were the only person in existence, rights wouldn’t be relevant at all. This is why rights always correspond to responsibilities. My rights limit the ways you can behave towards me.

Legal philosopher Wesley Hohfeld distinguished between two sets of rights and responsibilities. First, there are claims and duties. Your right to life is attached to everyone else’s duty not to kill you. You can’t have one without the other.

Second, there are liberties and no-claims. If I’m at liberty to raise my children as I see fit, it’s because there’s no duty stopping me – nobody can make a claim to influence my actions here. If we have no claim over other people’s liberties, our only duty is not to interfere with their behaviour.

But your liberty disappears as soon as someone has a claim against you. For example, you’re at liberty to move freely until someone else has a claim to private property. Then you have a duty not to trespass on their land.

It’s useful to add into the mix the distinction between positive and negative rights. If you have a positive right, it creates a duty for someone to give you something – like an education. If you have a negative right, it means others have a duty not to treat you in some way – like assaulting you.

All this might seem like tedious academic stuff but it has real world consequences. If there’s a positive right to free speech, people need to be given opportunities to speak out. For example, they might need access to a radio program so they can be heard.

By contrast, if it’s a negative claim right, nobody can censor anyone else’s speech. And if free speech is a liberty, your right to use it is subject to the claims of others. So if other people claim the right not to be offended, for example, you may not be able to speak up.

There are a few reasons why rights are a useful concept in ethics.

First, they are easy to enforce through legal systems. Once we know what rights and duties people have, we can enshrine them in law.

Second, rights and duties protect what we see as most important when we can’t trust everyone will act well all the time. In our imperfect world, rights provide a strong language to influence people’s behaviour.

Finally, rights capture the central ethical concepts of dignity and respect for persons. As the philosopher Joel Feinberg writes:

Having rights enables us to “stand up like men,” to look others in the eye, and to feel in some fundamental way the equal of anyone. To think of oneself as the holder of rights is not to be unduly but properly proud, to have that minimal self-respect that is necessary to be worthy of the love and esteem of others.

Indeed, respect for persons […] may simply be respect for their rights, so that there cannot be the one without the other; and what is called “human dignity” may simply be the recognizable capacity to assert claims.

Feinberg suggests rights are a manifestation of who we are as human beings. They reflect our dignity, autonomy and our equal ethical value. There are other ways to give voice to these things, but in highly individualistic cultures, what philosophers call “rights talk” resonates for two reasons: individual freedom and equality.

Join the conversation

What is more important: rights or responsibilities?


Big Thinker: Hannah Arendt

Johanna “Hannah” Arendt (1906 – 1975) was a German Jewish political philosopher who fled the Nazi regime, first to neighbouring European countries and later to the United States, where she settled. Informed by the two world wars she lived through, her reflections on totalitarianism, evil, and labour have been influential for decades.

We are still learning from this seminal political theorist. Her book The Origins of Totalitarianism sold out on Amazon in 2017, more than 60 years after it was first published.

Evil doesn’t need malicious intentions

Arendt’s best known idea is “the banality of evil”. She explored this in 1963 in a piece for The New Yorker covering the trial of Adolf Eichmann, a Nazi bureaucrat who shared Hitler’s first name. This later became a book called Eichmann in Jerusalem: A Report on the Banality of Evil.

In Eichmann, Arendt found a man whose greatest crime was a lack of thinking. His role was to transport Jewish people from German-occupied areas to concentration camps in Poland. Eichmann did not kill anyone first-hand. He was not involved in designing Hitler’s final solution. But he oversaw the trains that took millions of people to their deaths. They were gassed in chambers or died along the way or in camps due to starvation, overwork, illness, cold, heat, or brutality. Eichmann’s only defence for his involvement in this atrocity was obedience to the law and fulfilling his duty.

Eichmann was so steadfast in this line of reasoning that he even referenced the philosopher Immanuel Kant and his theory of moral duty. Kant argued morality was acting on your obligations, not your emotions or what will bring you benefit. For Kant, the person who helps the beggar out of empathy, or out of a belief that assisting is a pathway to heaven, is not doing an act of good. Kant felt everyone is morally obliged to help the beggar, and they are especially virtuous if they act on this duty despite feeling repulsed or gaining no rewarding sense of doing good.

This does not really suit Eichmann’s argument because Kant was emphasising our ability to reason above emotion. This is precisely where Eichmann failed. We can only guess but it seems likely he did his job without asking questions while feeling a sense of comfort in the safety of his salary and senior position during volatile times.

Arendt believed it was this lack of true thinking and questioning that paved the way to genocide. Evil on the scale of Nazism required far more Adolf Eichmanns than Adolf Hitlers.

Totalitarianism needs political apathy

In studying the causes of WWII, Arendt came across “the masses”. She believed totalitarian regimes needed them to succeed.

By “the masses”, she meant the enormous group of people who are politically disconnected from other members of society. They don’t identify with a particular class, religion, or group. Their lack of group membership deprives them of common interests to demand from government. These people have no interest in politics because they don’t have political clout. They are an unorganised cohort with different, often conflicting desires whose needs are easily disregarded by politicians.

But although these people take no active interest in politics they still hold expectations for the state. If politicians fail those expectations they face “the loss of the silent consent and support of the unorganised masses”. In response, they “shed their apathy” and look for an outlet “to voice their new violent opposition”.

The totalitarian leader emerges from this “structureless mass of furious individuals”. With the political apathy of the masses turned to hostility, leaders will rise by breaking established norms and ignoring the way politics is usually done. Arendt dramatically says they prefer “methods which end in death rather than persuasion”. In short, they’re less likely to build politics up than they are to tear it down because that’s what the masses want.

If this all sounds depressing, there is a solution embedded in Arendt’s writing: political engagement. The masses arise when individuals are lonely and politically disconnected. They are defined by a lack of solidarity or responsibility with other citizens.

By revitalising our political community we can recreate what Arendt sees as good politics. This is when people feel a sense of personal and political responsibility for the nation and are able to band together with other citizens who have common interests. When citizens are connected in solidarity with one another, the masses never form and totalitarianism is held at bay.

When work defines you, unemployment is a curse

Not all Arendt’s work was concerned with war and totalitarianism. In The Human Condition, she also offers a general critique of modernity. Drawing on Karl Marx, Arendt thought the industrial age transformed humanity from thinkers into “working animals”.

She thought most people had come to define themselves by their work – reducing themselves to economic robots. Although it’s not a central point of Arendt’s analysis, this reduction is a product of the same forces she sees in the banality of evil. It’s a triumph of ‘doing’ over ‘thinking’ and of humans finding easy ways to define themselves.

Arendt wasn’t just concerned because people were reducing themselves to working drones. She also worried that the industrial age which had just redefined them was about to rob them of their new identities. She believed within a few decades, technology would replace factory jobs. Many people’s work would vanish.

“What we are confronted with is the prospect of a society of labourers without labour, that is, without the only activity left to them. Surely, nothing could be worse.”

In a time when automation now threatens over half of jobs on average in OECD countries, Arendt’s predictions seem timely. Can we shift our identity away from work in time to survive the massive job reduction to come?

Published February 2017. Updated August 2018.

Join the conversation

What leads people to support genocide?


Ethics Explainer: Dignity

When we say someone or something has dignity, we mean they have worth beyond their usefulness and abilities. To possess dignity is to have absolute, intrinsic and unconditional value.

The concept of dignity became prominent in the work of Immanuel Kant. He argued objects can be valuable in two different ways. They can have a price or dignity. If something has a price, it is valuable only because it is useful to us. By contrast, things with dignity are valued for their own sake. They can’t be used as tools for our own goals. Instead, we are required to show them respect. For Kant, dignity was what made something a person.

Dignity through the ages

Beliefs about where dignity comes from vary between different philosophical and religious systems. Christians believe humans have dignity because they’re made in the image of God. This is called imago dei (pronounced ee-marg-oh day). Kant believed humans possessed dignity because they’re rational. Others believe dignity is a way of recognising our common humanity. Some say it’s a social construct we created because it’s useful. Whatever its origin, the concept has become influential in political and ethical discourse today.

A question of human rights

Dignity is often seen as a central notion for human rights. The preamble to the Universal Declaration of Human Rights recognises the “inherent dignity” of “all members of the human family”. By recognising dignity, the Declaration acknowledges ethical limits to the ways we can treat other people.

Kant captured these ethical limits in his idea of respect for persons. In every interaction with another person we are required to treat them as ends in themselves rather than tools to achieve our own goals. We fail to respect people when we treat them as tools for our own convenience or don’t give adequate attention to their needs and wishes.

When it comes to practical matters, it’s not always clear what ‘dignity and respect for persons’ require us to do. For example, in debates around assisted dying (also called assisted suicide or euthanasia) both sides use dignity to argue for opposing conclusions.

Advocates believe the best way to respect dignity is by sparing people from unnecessary or unbearable suffering, while opponents believe dignity requires us never to intentionally kill someone. They claim dignity means a person’s value isn’t diminished by pain or suffering and we are ethically required to remind the patient of this, even if the patient disagrees.

Who makes the rules?

There are also disputes about exactly who is worthy of dignity. Should it be exclusive to humans or extended to animals? And do all animals possess intrinsic value and dignity or just specific species? If animals do have dignity, we’re required to treat them with the same respect we afford our fellow human beings.

Join the conversation

How do you measure your worth?


Ethics Explainer: The Harm Principle

The harm principle says people should be free to act however they wish unless their actions cause harm to somebody else.

The principle is a central tenet of the political philosophy known as liberalism and was first proposed by English philosopher John Stuart Mill.

The harm principle is not designed to guide the actions of individuals but to restrict the scope of criminal law and government restrictions of personal liberty.

For Mill – and the many politicians, philosophers and legal theorists who have agreed with him – social disapproval or dislike for a person’s actions isn’t enough to justify intervention by government unless they actually harm someone.

The phrase “your freedom to swing your fist ends where my nose begins” captures the general sentiment of the principle. The approach is usually linked to the idea of ‘negative rights’, which are demands that someone not do something to you.

The limitation of state power

By contrast, ‘positive rights’ demand that things be done for you, like the provision of healthcare or welfare payments. Because the harm principle is concerned with what may not be done to people rather than what must be done for them, it is often used in political debates to discuss the limitations of state power.

Under the harm principle, there’s no issue with activities that harm only the individual themselves. If you want to smoke, drink, or use drugs to excess, you should be free to do so. But if you get behind the wheel of a car whilst under the influence, pass second-hand smoke onto other people, or become unpredictably violent on certain drugs, then there’s good reason for the government to get involved.

Attempting to define harm

The sticking point comes in trying to define what counts as harmful. Although it might seem obvious, it’s actually not that easy. For example, if you benefit by winning a promotion at work while other applicants lose out, does this count as being harmful to them?

Philosopher Joel Feinberg would argue no: he defines harms as “wrongful setbacks to interests”. He would argue you wouldn’t be harming anyone by winning a promotion because although their interests are set back, it wasn’t done in a wrongful manner.

A more difficult category concerns harmful speech. For Mill, you do not have the right to incite violence – this is obviously harmful as it physically hurts and injures. However, he says you do have the right to offend other people – having your feelings hurt doesn’t count as harm.

Recent debates have questioned this, arguing that certain kinds of speech can be as psychologically damaging as a physical attack – either because they’re personally insulting or because they entrench established power dynamics and oppress minorities.

Importantly, Mill believed the harm principle only applied to people who are able to exercise their freedom responsibly. For instance, paternalism was still justifiable for children (which makes sense, seeing as paternalism means ‘governing as though a parent over children’).

Unfortunately, he also thought these measures were appropriate to use against “barbarians”, by which he meant non-Europeans in British colonies like India.

Although some might see this as a side note, it does highlight an important point about the harm principle: the basis for determining who is worthy or capable of exercising their freedom can be subject to personal, cultural or political bias. And that is not good.

Join the conversation

How do you determine what is harmful to another?


Ethics Explainer: Social Contract

Social contract theories see the relationship of power between state and citizen as a consensual exchange. Power is legitimate only if it is given freely to the state by its citizens. The contract also explains why the state has duties to its citizens and vice versa.

Although the idea of a social contract goes as far back as Socrates, it gained popularity during The Enlightenment thanks to Thomas Hobbes, John Locke and Jean-Jacques Rousseau. Today the most popular example of social contract theory comes from John Rawls.

The social contract begins with the idea of a state of nature – the way human beings would exist in the world if they weren’t part of a society. Philosopher Thomas Hobbes believed that because people are fundamentally selfish, life in the state of nature would be “nasty, brutish and short”. The powerful would impose their will on the weak and nobody could feel certain their natural rights to life and freedom would be respected.

Hobbes believed no person in the state of nature was so strong they could be free from fear of another person and no person was so weak they could not present a threat. Because of this, he suggested it would make sense for everyone to submit to a common set of rules and surrender some of their rights to create an all-powerful state that could guarantee and protect every person’s rights. Hobbes called it the ‘Leviathan’.

It’s called a contract because it involves an exchange of services. Citizens surrender some of their personal power and liberty. In return the state provides security and the guarantee that civil liberty will be protected.

Crucially, social contract theorists insist the entire legitimacy of a government is based in the reciprocal social contract. They are legitimate because they are the only ones the people willingly hand power to. Locke called this popular sovereignty.

Crucially – and unlike Hobbes – Locke thought the focus on consent and individual rights meant if a group of people didn’t agree with significant decisions of a ruling government then they should be allowed to join together to form a different social contract and create a different government.

Not every social contract theorist agrees on this point. Philosophers have different ideas on whether the social contract is real, or if it’s a fictional way to describe the relationship between citizens and their government.

If the social contract is a real contract – just like your employment contract – people could be free not to accept the terms. If a person didn’t agree they should give some of their income to the state, they should be able to decline to pay tax and, as a result, opt out of state-funded hospitals, education, and all the rest.

Like other contracts, withdrawing comes with penalties – so citizens who decide to stop paying taxes may still be subject to punishment.

Critics of social contract theory believe individual citizens are forced to opt in to the social contract. Instead of being given a choice, they are just lumped together in a political system which they, as individuals, have little chance to control.

Of course, the idea of individuals choosing not to opt in or out is pretty impractical – imagine trying to stop someone from using roads or footpaths because they didn’t pay tax. This is why Rawls proposes a hypothetical social contract.

This would be a contract all reasonable people would consent to. Which freedoms would they be willing to surrender in exchange for having their rights protected? By answering this question and structuring government around the answer to this hypothetical, the government holds an implied social contract with its citizens.

In established states it can be easy to forget the social contract involves the provision of protection in exchange for us surrendering some freedoms. People can grow accustomed to having their rights protected and forget about the liberty they are required to surrender in exchange. But according to social contract theory, to insist on unrestricted liberty is to accept limited involvement by the state in our lives – even when that involvement might be helpful.

Join the conversation

What do you have a social contract for?


Don’t throw the birth plan out with the birth water!


Just try mentioning ‘birth plans’ at a party and see what happens.

Hannah Dahlen – a midwife’s perspective

Mia Freedman once wrote about a woman who asked what her plan was for her placenta. Freedman felt birth plans were “most useful when you set them on fire and use them to toast marshmallows”. She labelled people who make these plans “birthzillas”, more interested in the birth than in having a baby.

In response, Tara Moss argued:

The majority of Australian women choose to birth in hospital and all hospitals do not have the same protocols. It is easy to imagine they would, but they don’t, not from state to state and not even from hospital to hospital in the same city. Even individual health practitioners in the same facility sometimes do not follow the same protocols.

The debate

Why the controversy over a woman and her partner writing down what they would like to have done or not done during their birth?  The debate seems not to be about the birth plan itself, but the issue of women taking control and ownership of their births and what happens to their bodies.

Some oppose birth plans on the basis that all experts should be trusted to have the best interests of both mother and baby in mind at all times. Others trust the mother as the person most concerned for her baby and believe women have the right to determine what happens to their bodies during this intimate, individual, and significant life event.

As a midwife of some 26 years, I wish we didn’t need birth plans. I wish our maternity system provided women with continuity of care so by the time a woman gave birth her care provider would fully know and support her well-informed wishes. Unfortunately, most women do not have access to continuity of care. They deal with shift changes, colliding philosophical frameworks, busy maternity units, and varying levels of skill and commitment from staff.

There are so many examples of interventions that are routine in maternity care but lack evidence they are necessary or are outright harmful. These include immediate clamping and cutting of the umbilical cord at birth, episiotomy, continuous electronic foetal monitoring, labouring or giving birth lying down and unnecessary caesareans. Other deeply personal choices such as the use of immersion in water for labour and birth or having a natural birth of the placenta are often not presented as options, or are refused when requested.

The birth plan is a chance to raise and discuss your wishes with your healthcare provider. It provides insight into areas of further conversation before labour begins.

I once had a woman make three birth plans when she found out her baby was in a breech presentation at 36 weeks. One for a vaginal breech birth, one for a caesarean, and one for a normal birth if the baby turned. The baby turned and the first two plans were ditched. But she had been through each scenario and carved out what was important for her.

Bashi Hazard – a legal perspective

Birth plans were introduced in the 1980s by childbirth educators to help women shape their preferences in labour and to communicate with their care providers. Women say preparing birth plans increases their knowledge and ability to make informed choices, empowers them, and promotes their sense of safety during childbirth. Some (including in Australia) report that their carefully laid plans are dismissed, overlooked, or ignored.

There appears to be some confusion about the legal status and standing of birth plans. Dismissing or ignoring them is not reflective of international human rights principles or domestic law. The right to informed consent is a fundamental principle of medical ethics and human rights law and is particularly relevant to the provision of medical treatment. In addition, our common law starts from the premise that every human body is inviolate and cannot be subject to medical treatment without autonomous, informed consent.

Pregnant women are no exception to this human rights principle nor to the common law.

If you start from this legal and human rights premise, the authoritative status of a birth plan is very clear. It is the closest expression of informed consent that a woman can offer her caregiver prior to commencing labour. This is not to say she cannot change her mind but it is the premise from which treatment during labour or birth should begin.

Once you accept that a woman has the right to stipulate the terms of her treatment, the focus turns to any hostility and pushback from care providers to the preferences a woman has the right to assert in relation to her care.

Care providers who understand the significance of the human and legal right to informed consent begin discussing a woman’s options in labour and birth with her as early as the first antenatal visit. These discussions are used to advise, inform, and obtain an understanding of the woman’s preferences in the event of various contingencies. They build the trust needed to allow the care provider to safely and respectfully support the woman through pregnancy and childbirth. Such discussions are the cornerstone of woman-centred maternity healthcare.

Human Rights in Childbirth

Reports received by Human Rights in Childbirth indicate that care provider pushback and hostility towards birth plans occurs most in facilities with fragmented care or where policies are elevated over women’s individual needs. Mothers report their birth plans are criticised or outright rejected on the basis that birth is “unpredictable”. There is no logic in this. If anything, greater planning would facilitate smoother outcomes in the event of unanticipated eventualities.

In truth, it is not the case that these care providers don’t have a birth plan. There is a birth plan – one driven purely by care providers and hospital protocols without discussion with the woman. This offends the legal and human rights of the woman concerned and has been identified as a systemic form of abuse and disrespect in childbirth, and as a subset of violence against women.

It is essential that women discuss and develop a birth plan with their care providers from the very first appointment. This is a precious opportunity to ascertain your care provider’s commitment to recognising and supporting your individual and diverse needs.

Gauge your care provider’s attitude to your questions as well as their responses. Expect to repeat those discussions until you are confident that your preferences will be supported. Be wary of care providers who are dismissive, vague or non-responsive. Most importantly, switch care providers if you have any concerns. The law is on your side. Use it.

Making a birth plan – some practical tips

  1. Talk it through with your lead care provider. They can discuss your plans and make sure you understand the implications of your choices.
  2. Make sure your support network know your plan so they can communicate your wishes.
  3. Attending antenatal classes will help you feel more informed. You’ll discover what is available and the evidence behind your different options.
  4. Talk to other women about what has worked well for them, but remember your needs might be different.
  5. Remember you can change your mind at any point in the labour and birth. What you say is final, regardless of what the plan says.
  6. Try not to be adversarial in your language – you want people working with you, not against you. End the plan with something like “Thank you so much for helping make our birth special”.
  7. Stick to the important stuff.

Some tips on the specific content of your birth plan are available here.

Join the conversation

How protected are women by the law?


Ethics Explainer: Just War Theory

Just war theory is an ethical framework used to determine when it is permissible to go to war. It originated with Catholic moral theologians like Augustine of Hippo and Thomas Aquinas, though it has had a variety of different forms over time.

Today, just war theory is divided into three categories, each with its own set of ethical principles. The categories are jus ad bellum, jus in bello, and jus post bellum. These Latin terms translate roughly as ‘justice towards war’, ‘justice in war’, and ‘justice after war’.

Jus ad bellum

When political leaders are trying to decide whether to go to war or not, just war theory requires them to test their decision by applying several principles:

  • Is it for a just cause?

This requires war only be used in response to serious wrongs. The most common example of just cause is self-defence, though coming to the defence of another innocent nation is also seen as a just cause by many (and perhaps the highest cause).

  • Is it with the right intention?

This requires that war-time political leaders be solely motivated, at a personal level, by reasons that make a war just. For example, even if war is waged in defence of another innocent country, leaders cannot resort to war because it will assist their re-election campaign.

  • Is it from a legitimate authority?

This demands war only be declared by the leaders of a recognised political community, in accordance with the political requirements of that community.

  • Does it have due proportionality?

This requires us to imagine what the world would look like if we either did or didn’t go to war. For a war to be ‘just’ the quality of the peace resulting from war needs to be superior to what would have happened if no war had been fought. This also requires we have some probability of success in going to war – otherwise people will suffer and die needlessly.

  • Is it the last resort?

This says we should explore all other reasonable options before going to war – negotiation, diplomacy, economic sanctions and so on.

Even if the principles of jus ad bellum are met, there are still ways a war can be unjust.

Jus in bello

These are the ethical principles that govern the way combatants conduct themselves in the ‘theatre of war’.

  • Discrimination requires combatants only to attack legitimate targets. Civilians, medics and aid workers, for example, cannot be the deliberate targets of military attack. However, according to the principle of double effect, military attacks that kill some civilians as a side-effect may be permissible if they are both necessary and proportionate.
  • Proportionality applies to both jus ad bellum and jus in bello. Jus in bello requires that in a particular operation, combatants do not use force or cause harm that exceeds strategic or ethical benefits. The general idea is that you should use the minimum amount of force necessary to achieve legitimate military aims and objectives.
  • No intrinsically unethical means is a debated principle in just war theory. Some theorists believe there are actions which are always unjustified, whether or not they are used against enemy combatants or are proportionate to our goals. Torture, shooting to maim and biological weapons are commonly-used examples.
  • ‘Following orders’ is not a defence, as the war crimes tribunals after the Second World War clearly established. Military personnel may not be legally or ethically excused for following illegal or unethical orders. Every person bearing arms is responsible for their conduct – not just their commanders.

Jus post bellum

Once a war is completed, steps are necessary to transition from a state of war to a state of peace. Jus post bellum is a new area of just war theory aimed at identifying principles for this period. Some of the principles that have been suggested (though there isn’t much consensus yet) are:

  • Status quo ante bellum, a Latin term meaning ‘the way things were before war’ – basically rights, property and borders should be restored to how they were before war broke out. Some suggest this is a problem because those can be the exact conditions which led to war in the first place.
  • Punishment for war crimes is a crucial step to re-installing a just system of governance. From political leaders down to combatants, any serious offences on either side of the conflict need to be brought to justice.
  • Compensation of victims suggests that, as much as possible, the innocent victims of conflict be compensated for their losses (though some of the harms of war will be almost impossible to adequately compensate, such as the loss of family members).
  • Peace treaties need to be fair and just to all parties, including those who are responsible for the war occurring.

Just war theory provides the basis for exercising ‘ethical restraint’ in war. Without restraint, philosopher Michael Ignatieff argues, there is no way to tell the difference between a ‘warrior’ and a ‘barbarian’.

Join the conversation

When are you obliged to go to war?


Learning risk management from Harambe

Traditional and social media channels were flooded with the story of Harambe, a 17-year-old western lowland silverback gorilla shot dead at Cincinnati Zoo on 28 May 2016 after a four-year-old boy crawled through a barrier and fell into his enclosure.

With the benefit of hindsight, forming an opinion is easy. Plenty of opinions are already being thrown around regarding the incident and who was to blame for the tragedy.

The need to kill Harambe is exceptionally depressing: a gorilla lost his life, the zoo lost a real asset, a mother was at risk of losing her child, and the staff tasked with shooting Harambe faced emotional trauma, given the bond they had likely formed with him.

Overall, it was a bad state of affairs. Though the case gives rise to a number of ethical issues, one way to consider it is as a risk management issue – one that presents us with some important lessons that might prevent similar circumstances from happening in the future, and ensure they are better managed if they do.

In technical risk management terms, the zoo seems to have been in line with best practice.

According to Cincinnati Zoo’s annual report, 1.5 million people visited the park in 2014–15. Included in those numbers are countless parents who visited the zoo with children who didn’t end up in any of the animal enclosures.

According to WLWT-TV, this was the first breach at the zoo since its opening in 1978. There is no doubt the zoo identified this risk and managed it with secure (until now) enclosures. There is also little doubt relevant signage and duty of care reminders would have been placed around the zoo. Staff would have assumed parents would manage their children and keep them safe. In the eyes of most risk management experts, this would seem to be more than sufficient.

However, as we have seen in several cases (including Cecil the Lion), it does not seem to be the incident itself that brings the massive negative consequences but rather social media, because the internet provides a platform for everyone to be judge and jury.

This flags the massive shift required in the way we manage risk. If we look only at the financial losses to the zoo, their decision may seem logical and rational. They were truly put in a no-win position – an immediate tactical decision was required once Harambe began dragging the child around the water.

Imagine if they had decided to tranquillise Harambe and the four-year-old boy had died while they were waiting for the tranquillisers to take effect – what would the impact have been?

The true lesson regarding this issue lies in the need for organisations to put energy and effort into so-called ‘black swan’ events – ones that are unlikely but have immense consequences if they do occur. These events are often overlooked, based only on the fact they are unlikely, leaving organisations unprepared for when they do.

Traditional risk management approaches try to allocate scores to risks and then direct resources to the highest-ranking issues. In this case, a risk that was deemed managed actually occurred and the result was very negative.
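To make the contrast concrete, here is a minimal, purely illustrative sketch in Python of the likelihood-times-consequence scoring that traditional risk matrices rely on. The risks, ratings and scales below are hypothetical, not drawn from the zoo’s actual risk register.

```python
# Illustrative sketch of a traditional risk matrix (hypothetical risks and ratings).
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    consequence: int  # 1 (insignificant) to 5 (catastrophic)

    @property
    def score(self) -> int:
        # Classic approach: multiply the two ratings and rank by the product.
        return self.likelihood * self.consequence

risks = [
    Risk("Visitor slips on wet path", likelihood=4, consequence=2),
    Risk("Child breaches enclosure barrier", likelihood=1, consequence=5),  # the 'black swan'
    Risk("Food-handling incident at the cafe", likelihood=3, consequence=3),
]

# Resources are directed to the highest-ranking issues first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score}")
```

Ranked this way, the rare-but-catastrophic breach scores lowest and attracts the least attention – exactly the blind spot the ‘black swan’ lesson above points to.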

Whether negligent or not, various social media commentators have held the mother accountable. It seems she has been held to account based not on what she did, but on the apparently unapologetic and callous way she responded to the killing of Harambe.

This shows us risk management needs to consider the human element in a way we previously haven’t. The ethics of what is right and wrong tend to blur when the masses have a platform to pass judgement. There are many lessons to be taken from this incident, including the following considerations:

  1. Risk management and duty of care should be incorporated in a more cohesive manner, focusing on applying a BDA approach (Before, During and After).
  2. Social media backlash adds a new dimension to the way organisations should make, report and defend their decisions.
  3. Individuals can no longer simply blame the organisation they believe is responsible on the basis of negligence or breach of duty of care. Even if an individual shifts blame onto the organisation entirely, and is not held to account by the law, they will be held to account by the general public.
  4. We have entered an era where system- and process-based risk management needs to integrate human and emotive elements to account for emotional responses.
  5. Lastly – and separately – the fact that one story attracts massive public outcry while another doesn’t raises ethical questions regarding the way we consume news, the way the media reports it, and the upsides and downsides of social media.

In short, this is another reminder of how much work we still have to do – especially in the modern internet age – to proactively and ethically manage risk.

Join the conversation

Was Harambe just a risk management disaster?


‘Eye in the Sky’ and drone warfare

Warning – general plot spoilers to follow.

Collateral damage

Eye in the Sky begins as a joint British and US surveillance operation against known terrorists in Nairobi. During the operation, it becomes clear a terrorist attack is imminent, so the goals shift from surveillance to seek and destroy.

Moments before firing on the compound, drone pilots Steve Watts (Aaron Paul) and Carrie Gershon (Phoebe Fox) see a young girl setting up a bread stand near the target. Is her life acceptable collateral damage if her death saves many more people?

In military ethics, the question of collateral damage is a central point of discussion. The principle of ‘non-combatant immunity’ requires no civilian be intentionally targeted, but it doesn’t follow from this that all civilian casualties are unethical.

Most scholars and some Eye in the Sky characters, such as Colonel Katherine Powell (Helen Mirren), accept even foreseeable casualties can be justified under certain conditions – for instance, if the attack is necessary, the military benefits outweigh the negative side effects and all reasonable measures have been taken to avoid civilian casualties.

Risk-free warfare

The military and ethical advantages of drone strikes are obvious. By operating remotely, we prevent the risk of our military men and women being physically harmed. Drone strikes are also becoming increasingly precise and surveillance resources mean collateral damage can be minimised.

However, the damage radius of a missile strike drastically exceeds that of most infantry weapons – meaning the tools used by drones are often less discriminate than soldiers on the ground carrying rifles. If collateral damage is only justified when reasonable measures have been taken to reduce the risk to civilians, is drone warfare morally justified, or does it simply shift the risk away from our war fighters and onto the civilian population? The key question here is what counts as a reasonable measure – how far are we permitted to go in reducing the risk to our own troops at the expense of civilians?

Eye in the Sky forces us to confront the ethical complexity of war.

Reducing risk can also have consequences for the morale of soldiers. Christian Enemark, for example, suggests that drone warfare marks “the end of courage”. He wonders in what sense we can call drone pilots ‘warriors’ at all.

The risk-free nature of a drone strike means that he or she requires none of the courage that for millennia has distinguished the warrior from all other kinds of killers.

How then should drone operators be regarded? Are these grounded aviators merely technicians of death, at best deserving only admiration for their competent application of technical skills? If not, by what measure can they be reasonably compared to warriors?

Moral costs of killing

Throughout the film, military commanders Katherine Powell and Frank Benson (Alan Rickman) make a compelling consequentialist argument for killing the terrorists despite the fact it will kill the innocent girl. The suicide bombers, if allowed to escape, are likely to kill dozens of innocent people. If the cost of stopping them is one life, the ‘moral maths’ seems to check out.

Ultimately it is the pilot, Steve Watts, who has to take the shot. If he fires, it is by his hand a girl will die. This knowledge carries a serious ethical and psychological toll, even if he thinks it was the right thing to do.

There is evidence suggesting drone pilots suffer from Post Traumatic Stress Disorder (PTSD) and other forms of trauma at the same rates as pilots of manned aircraft. This can arise even if they haven’t killed any civilians. Drone pilots not only kill their targets, they observe them for weeks beforehand, coming to know their targets’ habits, families and communities. This means they humanise their targets in a way many manned pilots do not – and this too has psychological implications.

Who is responsible?

Modern military ethics insist all warriors have a moral obligation to refuse illegal or unethical orders. This sits in contrast to older approaches, by which soldiers had an absolute duty to obey. St Augustine, an early writer on the ethics of war, called soldiers “swords in the hand” of their commanders.

In a sense, drone pilots are treated in the same way. In Eye in the Sky, a huge number of senior decision-makers debate whether or not to take the shot. However, as Powell laments, “no one wants to take responsibility for pulling the trigger”. Who is responsible? The pilot who has to press the button? The highest authority in the ‘kill chain’? Or the terrorists for putting everyone in this position to begin with?

Join the conversation

Can drone warfare be morally justified?