Ethics Explainer: Ethics

What is ethics?

If you’re struggling to answer this, you’re not alone. Although we consider ethics a crucial part of our lives in a variety of contexts, a common definition of the word can elude us.

Most of us are comfortable labelling products, people, and businesses ‘ethical’ and ‘unethical’. So, let’s get a clear understanding of what these labels mean.

Here’s an easy way of breaking ethics down into four areas.


The question

Ethics is a process of reflection. We ‘do ethics’ every time we try to answer the question, “What should I do?”

Ethics doesn’t discount emotional responses, but it does require us to be thoughtful when weighing up a decision. Rather than acting on instinct alone, ethics asks us to reasonably consider our options. An assessment of what we know, what we assume, and what we believe helps us choose a course of action most consistent with what we think is good and right.

While ethics is a branch of philosophy concerned with what’s right and wrong, it doesn’t seek to produce a list of rules to apply to all people at all times. Two people can both think ‘ethically’ about a situation and come up with very different decisions about what they should do.

Turning to an ethicist to get a definite answer on what’s right and wrong misses the point. Reflecting on the question “What should I do?” helps us discover and live by our values, principles, and purpose.

Values – ‘What’s good’

When faced with a decision, every person is going to choose the option they believe is best. It could be self-destructive, mean, or foolish – but the decision maker will always see more good in the option they settle on.

When you decide what you want to eat for lunch, you’ll consider a range of possibilities and choose one you think is good. Sometimes you might define good as ‘healthy’, other times as ‘tasty’, sometimes as ‘cheap’ and occasionally as a combination of all of them. Once you’ve got your definition down, you’re going to pick the option you think is most good.

Values are what help us define what’s good. Some of these will be unique to the individual but many values are held in common by cultures all around the world because they speak to the basic needs of human beings.

Freedom, safety, community, education, and health are all valued by people from very different walks of life. Each culture may express its values differently – norms of friendship will differ between cultures – but the basic value is still the same.

We tend to value lots of different things and prioritise them differently depending on our circumstances. In our youth we might rank excitement and fun over safety but later in life those values could shift in the other direction. This reflects changing beliefs about how much good is preserved by each value and how much they matter to us.

Principles – ‘What’s right’

Knowing what’s ‘good’ is an important step in ethical decision-making, but most of us believe there are better and worse ways of getting the things we value. We value honesty but are still careful with how we give criticism to colleagues – even if it would be more honest to be blunt.

This is the role of principles – they help us identify the right or wrong way to achieve the things we value. Some common examples are:

The Golden Rule: Treat other people the way you’d like to be treated.

Universality: Don’t ask other people to act in a way you wouldn’t be willing to act in the same situation.

Machiavellian: I’ll do what works and gets me what I want, no matter how it affects other people.

Notice how these principles are value-neutral? This means you can use them no matter what your values are – some may even seem unethical to you. Different people want to be treated in different ways – some gently and others with ‘tough love’ – but everyone can use the Golden Rule as a way to guide their decisions.

Purpose – picking your values and principles

There are a huge number of values and principles to choose between. Many of us don’t choose at all, sticking with the systems we inherited from family, culture, or religion.

If we were to choose, which ones would we decide to act on? Which ones would we care about most? This is where understanding our defining purpose is important.

Some philosophers believe every person has the same purpose – like flourishing, maximising wellbeing for others, or fulfilling their obligations. Others think people should be able to find or choose their own purpose.

What our purpose should be is hard to determine. Organisations have an easier run of it – they’re usually designed with a purpose in mind and can choose principles and values accordingly.

For example, news organisations exist to inform the public. From this they can find values like truth and integrity as well as principles like impartiality and rigorous checking of sources.

Some individuals have thought about purpose in terms of ‘vocations’ – the types of activities we commit our lives to. These can include professional roles but can also include things like parenting, volunteer work, or self-improvement.

The initial question, values, principles and purpose form the building blocks of our ethical thinking. They don’t provide us with easy answers to the question ‘What should I do?’, but they help us to understand what a good answer might look like.


Ethics Explainer: The Sunlight Test

You can use the sunlight test by asking yourself: would I do the same thing if I knew my actions would end up on the front page of the newspaper tomorrow?

It’s an easy way to test an ethical decision before you act on it.

This test is most useful as a guard against moral temptations – where we stand to gain a great deal for doing something unethical. Moral temptations are strongest when the likelihood of punishment is low and what you stand to gain outweighs the ethical costs of doing the wrong thing.

Here’s a quick example

Say you have the chance to lie to your employer about a lunch you just took. It was meant to be with a client, but they cancelled at the last minute. You were already at the restaurant and ran into some friends and spent a couple of hours together.

If there’s not much chance of getting caught, do you tell your boss you were at a work lunch and charge the bill back to the company, or be honest and accept getting in a bit of trouble for taking an extended break?

This is a situation where the sunlight test can be really helpful. By taking the belief that we won’t get caught out of the equation, we’re able to determine whether our actions would stand up to public scrutiny.

It’s a really good way of ensuring we’re being motivated by what we think is good or right, and not by self-interest.

But what if I definitely won’t get caught?

The sunlight test is not actually about whether or not you’ll get busted. Often, it’s best used when it’s unlikely we’ll get caught doing the wrong thing. What we need to examine is whether a well-informed but impartial third party would believe what we were doing is okay.

Although the sunlight test can be used by any person, it’s especially important for people whose professional roles put them in positions of public scrutiny – politicians, police, judges, journalists, and so on. For these people there is a real possibility their actions will end up on the front page of the newspaper.

This means the sunlight test should be a daily part of their decision making.

However, even though there is a chance they’ll end up in the news, it’s still crucial public figures do what they believe is right. The sunlight test doesn’t ask us to imagine what the most popular course of action would be, but how our actions would be perceived by a reasonable and fair-minded third party.


Ethics Explainer: Social Contract

Social contract theories see the relationship of power between state and citizen as a consensual exchange: power is legitimate only if it is given freely to the state by its citizens. This exchange explains why the state has duties to its citizens and vice versa.

Although the idea of a social contract goes as far back as Epicurus and Socrates, it gained popularity during The Enlightenment thanks to Thomas Hobbes, John Locke and Jean-Jacques Rousseau. Today the most popular example of social contract theory comes from John Rawls.

The social contract begins with the idea of a state of nature – the way human beings would exist in the world if they weren’t part of a society. Philosopher Thomas Hobbes believed that because people are fundamentally selfish, life in the state of nature would be “nasty, brutish and short”. The powerful would impose their will on the weak and nobody could feel certain their natural rights to life and freedom would be respected. 

Hobbes believed no person in the state of nature was so strong they could be free from fear of another person, and no person was so weak they could not present a threat. Because of this, he suggested it would make sense for everyone to submit to a common set of rules and surrender some of their rights to create an all-powerful state that could guarantee and protect every person’s rights. Hobbes called this state the ‘Leviathan’.

It’s called a contract because it involves an exchange of services. Citizens surrender some of their personal power and liberty. In return the state provides security and the guarantee that civil liberty will be protected. 

Crucially, social contract theorists insist the entire legitimacy of a government rests on this reciprocal social contract: a government is legitimate because the people willingly hand power to it. Locke called this popular sovereignty.

Unlike Hobbes, Locke thought the focus on consent and individual rights meant if a group of people didn’t agree with significant decisions of a ruling government then they should be allowed to join together to form a different social contract and create a different government. 

Not every social contract theorist agrees on this point. Philosophers have different ideas on whether the social contract is real, or if it’s a fictional way to describe the relationship between citizens and their government. 

If the social contract is a real contract – just like your employment contract – people could be free not to accept the terms. If a person didn’t agree they should give some of their income to the state they should be able to decline to pay tax and as a result, opt out of state-funded hospitals, education, and all the rest. 

Like other contracts, withdrawing comes with penalties – so citizens who decide to stop paying taxes may still be subject to punishment. 

Other problems arise when the social contract is looked at through a feminist perspective. Historically, social contract theories, like the ones proposed by Hobbes and Locke, say that (legitimate) state authority comes from the consent of free and equal citizens. 

Philosophers like Carole Pateman challenge this idea by noting that it fails to deal with the foundation of male domination that these theories rest on.  

For Pateman the myth of autonomous, free and equal individual citizens is just that: a myth. It obscures the reality of the systemic subordination of women and others.  

In Pateman’s words the social contract is first and foremost a ‘sexual contract’ that keeps women in a subordinate role. The structural subordination of women that props up the classic social contract theory is inherently unjust. 

The inherent injustice of social contract theory is further highlighted by those critics who believe individual citizens are forced to opt in to the social contract. Instead of being given a choice, they are simply lumped together in a political system which they, as individuals, have little chance to control.

Of course, the idea of individuals choosing not to opt in or out is pretty impractical – imagine trying to stop someone from using roads or footpaths because they didn’t pay tax.  

To address the inherent inequity in some forms of social contract theory, John Rawls proposes a hypothetical social contract based on fundamental principles of justice. The principles are designed to give people a clear rationale for willingly agreeing to surrender some individual freedoms in exchange for having their rights protected. To work out what those principles should be, Rawls offers a thought experiment he calls the veil of ignorance.

By imagining we are behind a veil of ignorance, with no knowledge of our own personal circumstances, we can better judge what is fair for all. With a principle in place that secures liberty for all at no one else’s expense, along with the difference principle, which permits social and economic inequalities only where they benefit the least advantaged, society would have a more just foundation on which individuals could willingly agree to a contract in which some liberties are foregone.

Out of Rawls’ focus on fairness within social contract theory come more feminist approaches, like that of Jean Hampton. In addition to criticising Hobbes’ theory, Hampton offers another feminist perspective, one that extends the effects of the social contract to interpersonal relationships.

In established states, it can be easy to forget the social contract involves the provision of protection in exchange for us surrendering some freedoms. People can grow accustomed to having their rights protected and forget about the liberty they are required to surrender in exchange.  

Whether you think the contract is real or just a useful metaphor, social contract theory offers many unique insights into the way citizens interact with government and each other.


Ethics Explainer: Eudaimonia

The closest English word for the Ancient Greek term eudaimonia is probably “flourishing”.

The philosopher Aristotle used it as a broad concept to describe the highest good humans could strive toward – or a life ‘well lived’.

Though scholars translated eudaimonia as ‘happiness’ for many years, there are clear differences. For Aristotle, eudaimonia was achieved through living virtuously – or what you might describe as being good. This doesn’t guarantee ‘happiness’ in the modern sense of the word. In fact, it might mean doing something that makes us unhappy, like telling an upsetting truth to a friend.

Virtue is moral excellence. In practice, it is what allows something to act in harmony with its purpose. As an example, let’s take a virtuous carpenter. In their trade, the virtues would be excellences like an artistic eye, a steady hand, patience, creativity, and so on.

The eudaimon [yu-day-mon] carpenter is one who possesses and practices the virtues of their trade.

By extension, the eudaimon life is one dedicated to developing the excellences of being human. For Aristotle, this meant practicing virtues like courage, wisdom, good humour, moderation, kindness, and more.

Today, when we think about a flourishing person, virtue doesn’t always spring to mind. Instead, we think about someone who is relatively successful, healthy, and with access to a range of the good things in life. We tend to think flourishing equals good qualities plus good fortune.

This isn’t far from what Aristotle himself thought. Although he did believe the virtuous life was the eudaimon life, he argued our ability to practice the virtues was dependent on other things falling in our favour.

For instance, Aristotle thought philosophical contemplation was an intellectual virtue – but to have the time necessary for contemplation you would need to be wealthy. Wealth (as we all know) is not always a product of virtue.

Some of Aristotle’s conclusions seem distasteful by today’s standards. He believed ugliness was a hindrance to developing practical social virtues like friendship (because nobody would be friends with an ugly person).


However, there is something intuitive in the observation that the same person, transformed into the embodiment of social standards of beauty, would – everything else being equal – have more opportunities available to them.

In recognising that our ability to practice virtue might be somewhat outside our control, Aristotle acknowledges our flourishing is vulnerable to misfortune. The things that happen to us can not only hurt us temporarily but can also put us in a condition where our flourishing – the highest possible good we can achieve – is irrevocably damaged.

For ethics, this is important for three reasons.

First, when we’re thinking about the consequences of an action, we should take into account their impact on the flourishing of others. Second, it suggests we should do our best to remove as many barriers to flourishing as we can. Third, it reminds us that living virtuously needs to be its own reward: it is no guarantee of success, happiness, or flourishing, but it is still a central part of what gives our lives meaning.


Ethics Explainer: Just War Theory

Just war theory is an ethical framework used to determine when it is permissible to go to war. It originated with Catholic moral theologians like Augustine of Hippo and Thomas Aquinas, though it has had a variety of different forms over time.

Today, just war theory is divided into three categories, each with its own set of ethical principles. The categories are jus ad bellum, jus in bello, and jus post bellum. These Latin terms translate roughly as ‘justice towards war’, ‘justice in war’, and ‘justice after war’.

Jus ad bellum

When political leaders are trying to decide whether to go to war or not, just war theory requires them to test their decision by applying several principles:

  • Is it for a just cause?

This requires war only be used in response to serious wrongs. The most common example of just cause is self-defence, though coming to the defence of another innocent nation is also seen as a just cause by many (and perhaps the highest cause).

  • Is it with the right intention?

This requires that war-time political leaders be solely motivated, at a personal level, by reasons that make a war just. For example, even if war is waged in defence of another innocent country, leaders cannot resort to war because it will assist their re-election campaign.

  • Is it from a legitimate authority?

This demands war only be declared by leaders of a recognised political community, and in accordance with the political requirements of that community.

  • Does it have due proportionality?

This requires us to imagine what the world would look like if we either did or didn’t go to war. For a war to be ‘just’, the quality of the peace resulting from war needs to be superior to what would have happened if no war had been fought. This also requires us to have some probability of success in going to war – otherwise people will suffer and die needlessly.

  • Is it the last resort?

This says we should explore all other reasonable options before going to war – negotiation, diplomacy, economic sanctions and so on.

Even if the principles of jus ad bellum are met, there are still ways a war can be unjust.

Jus in bello

These are the ethical principles that govern the way combatants conduct themselves in the ‘theatre of war’.

  • Discrimination requires combatants only to attack legitimate targets. Civilians, medics and aid workers, for example, cannot be the deliberate targets of military attack. However, according to the principle of double-effect, military attacks that kill some civilians as a side-effect may be permissible if they are both necessary and proportionate.
  • Proportionality applies to both jus ad bellum and jus in bello. Jus in bello requires that in a particular operation, combatants do not use force or cause harm that exceeds strategic or ethical benefits. The general idea is that you should use the minimum amount of force necessary to achieve legitimate military aims and objectives.
  • No intrinsically unethical means is a debated principle in just war theory. Some theorists believe there are actions which are always unjustified, regardless of whether they are used against enemy combatants or are proportionate to our goals. Torture, shooting to maim, and biological weapons are commonly used examples.
  • ‘Following orders’ is not a defence, as the war crimes tribunals after the Second World War clearly established. Military personnel may not be legally or ethically excused for following illegal or unethical orders. Every person bearing arms is responsible for their conduct – not just their commanders.

Jus post bellum

Once a war is completed, steps are necessary to transition from a state of war to a state of peace. Jus post bello is a new area of just war theory aimed at identifying principles for this period. Some of the principles that have been suggested (though there isn’t much consensus yet) are:

  • Status quo ante bellum, a Latin term meaning ‘the way things were before war’ – basically rights, property and borders should be restored to how they were before war broke out. Some suggest this is a problem because those can be the exact conditions which led to war in the first place.
  • Punishment for war crimes is a crucial step to re-installing a just system of governance. From political leaders down to combatants, any serious offences on either side of the conflict need to be brought to justice.
  • Compensation of victims suggests that, as much as possible, the innocent victims of conflict be compensated for their losses (though some of the harms of war will be almost impossible to adequately compensate, such as the loss of family members).
  • Peace treaties need to be fair and just to all parties, including those who bear guilt for the war occurring.

Just war theory provides the basis for exercising ‘ethical restraint’ in war. Without restraint, argues philosopher Michael Ignatieff, there is no way to tell the difference between a ‘warrior’ and a ‘barbarian’.


Ethics Explainer: Begging the question

Begging the question is when you use the point you’re trying to prove as an argument to prove that very same point. Rather than proving the conclusion is true, it assumes it.

It’s also called circular reasoning and is a logical fallacy.

Should I trust Steve?

Premise: Steve is a trustworthy person because I trust him.

Conclusion: Therefore, you should trust Steve.

Let’s replace the word ‘trust’ with ‘love’.

Premise: Steve is a lovable person because I love him.

Conclusion: Therefore, you should love Steve.

Are you any closer to figuring out why Steve is trustworthy or loved?

As a logical argument, here’s how it looks.

Premise: Steve is X because X.

Conclusion: Therefore, you should X because X.

The conclusion and the premise make the same claim. That is, they say the same thing.

Though the logical structure is valid, there’s no argument to move on from. We’re no closer to understanding why Steve is trustworthy or lovable.  
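
For readers who like to see this formally, here’s a minimal sketch in Lean (the proposition name is ours, purely illustrative). A circular argument is ‘valid’ in the emptiest possible way: assuming X lets you conclude X without the proof doing any work.

```lean
-- Begging the question, formalised: assume X, then "conclude" X.
-- The proof is valid but uninformative: it simply hands back the
-- assumption, which is why a circular argument establishes nothing new.
example (X : Prop) (h : X) : X := h
```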

Rather than assuming the conclusion is true from the beginning, let’s prove it.

Premise 1: Reliable people can be trusted.

Premise 2: Steve is a reliable person.

Conclusion 1: Therefore, Steve can be trusted by any person.

Premise 3: You are a person.

Conclusion 2: Therefore, you should trust Steve.

This way, even if Steve is clearly trustworthy, you’re using a good argument to help his case.

A leading question

Here’s another related example:

‘If we accept your argument that people who download movies should be put in jail, who should provide the education they receive while they’re in prison?’

This is called a leading question. It sneaks in a claim that needs to be argued for in the form of a question.

In this example, the claim is that people who are put in jail should receive education programs. That might be true or it might not, but because it forces the answer to go in a certain direction, it is an example of begging the question.

Radio interviews and talk show conversations can beg the question by asking leading questions that try to box you into a certain answer. Being able to spot a leading question is useful – it allows us to be critical not only of other people’s arguments, but of the way they frame the question.

Note: when people say ‘this begs the question that’ they actually mean ‘this raises the question that’. In this context, they’re usually not referring to the logical fallacy.


Ethics Explainer: Ad Hominem Fallacy

Ad hominem, Latin for “to the man”, is when an argument is rebutted by attacking the person making it rather than the argument itself. It is another informal logical fallacy.

The logical structure of an ad hominem is as follows:

  1. Person A makes a claim X.
  2. Person B attacks person A.
  3. Therefore, X is wrong.

When you see the logical structure of the argument it becomes clear why it’s a fallacy. The truth or falsehood of X has nothing to do with the person arguing in support of it. Imagine if X had been written down and you didn’t know who was arguing the case. If you couldn’t prove it wrong with arguments, then you can’t prove it wrong at all.
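
The gap can be made formal. Here is a minimal Lean sketch (our own illustration, with made-up proposition names): the rule ‘the speaker is discredited, therefore the claim is false’ is refuted by a single countermodel in which the claim happens to be true anyway.

```lean
-- The ad hominem form: from "the speaker is discredited",
-- infer "the claim is false". A countermodel refutes the rule:
-- let the claim X be True while the premise about the speaker also holds.
example : ¬ ∀ (X SpeakerDiscredited : Prop), SpeakerDiscredited → ¬X := by
  intro h
  exact h True True True.intro True.intro
```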


Here are some common ad hominem arguments:

Argument from abuse

Steve: “I don’t think we should catch a taxi to dinner. It’s just a short walk and the environment doesn’t need the extra pollution.”

Jaime: “You would say that – you’re so cheap!”

Jaime’s rebuttal doesn’t address Steve’s argument. Instead it abuses him as a person. Not only is this unpleasant, it’s fallacious because Steve’s character doesn’t impact on the truth or falsehood of what he said.

‘Tu quoque’ fallacy

Jaime: “Now that online streaming services are affordable and available in Australia, there’s no justification for pirating films anymore. People shouldn’t do it.”

Steve: “But earlier you said you were about to download a torrent for the new Game of Thrones!”

‘Tu quoque’ is Latin for “you also”. The ‘tu quoque’ fallacy occurs when an argument is rebutted because the arguer’s own behaviour is contrary to what they’re arguing. While this is a good way of highlighting hypocrisy, it isn’t a refutation. Just because a person doesn’t ‘walk the walk’ doesn’t mean what they say is false.

Appeal to authority

Steve: “I don’t believe in God. Richard Dawkins is an atheist and he’s really smart.”

The appeal to authority is actually a reverse ad hominem in which the credentials of another person are used to strengthen an argument. Rather than relying on arguments against God’s existence, Steve relies on the authority of other people who don’t believe in God.

Although it isn’t criticising the person making the argument, it still doesn’t deal with the argument itself. The appeal to authority is another kind of ad hominem fallacy.

Note that the ad hominem fallacy only applies to attempts to discredit (or strengthen) an argument by reference to the person making the argument. In court cases, lawyers will often use a person’s character to prove or undermine their credibility.

This is not necessarily a case of ad hominem – credibility is about whether or not we should believe a person is telling the truth, not whether the arguments they make are reasonable.


Ethics Explainer: Dirty Hands

The problem of dirty hands refers to situations where a person is asked to violate their deepest held ethical principles for the sake of some greater good.

The problem of dirty hands is usually seen only as an issue for political leaders. Ordinary people are typically not responsible for serious enough decisions to justify getting their hands ‘dirty’. Imagine a political leader who refuses to do what is necessary to advance the common good – what would we think of them?

Michael Walzer steps in

This was the question philosopher Michael Walzer asked when he discussed dirty hands – a question another philosopher, Max Weber, had asked before him.


Walzer asks us to imagine a politician who is elected in a country that has been devastated by civil war, and who campaigned on policies of peace, reconciliation and an opposition to torture. Immediately after this politician is elected, he is asked to authorise the torture of a terrorist. The terrorist has hidden bombs throughout apartments in the city which will explode in the next 24 hours. Should the politician authorise the torture in the hope the information provided by the terrorist might save lives?

Finding common ground

This is a pretty common ethical dilemma, and different ethical theories will give different answers. Deontologists will mostly refuse, taking the ‘absolutist’ position that torture is an attack on human dignity and therefore always wrong. Utilitarians will probably see the torture as the action leading to the best outcomes and argue it is the right course of action.

What makes dirty hands different is that it treats each of these arguments seriously. It accepts torture might always be wrong, but also that the stakes are so high it might also be the right thing to do. So, the political leader might have a duty to do the wrong thing – but what they are required to do is still wrong. As Walzer says, “The notion of dirty hands derives from an effort to refuse ‘absolutism’ without denying the reality of the moral dilemma”.

The paradox of dirty hands – that the right thing to do is also wrong – poses a challenge to political leaders. Are they willing to accept the possibility they might have to get their hands dirty and be held responsible for it? Walzer believes the moral politician is one who has dirty hands, acknowledges it, and is destroyed by it (because of feelings of guilt, punishment and so on): “it is by his dirty hands that we know him”.

Note that we’re not talking about corruption here, where politicians get their hands dirty for their own selfish reasons, like fraudulently securing re-election or personal profit. What we’re talking about is when a politician might be obliged to violate their deepest personal values or the ethical creeds of their community in order to achieve some higher good, and how the politician should feel about having done so.

A remorseful politician?

Walzer believes politicians should feel wracked with guilt and seek forgiveness (and even demand punishment) in response to having dirtied their hands. Other thinkers disagree, notably Niccolo Machiavelli. He was also aware political leaders would sometimes have to do ‘what’s necessary’ for the public good. But even if those actions would be rejected by private ethics, he didn’t think decision makers should feel guilty about it.

Machiavelli felt indecision, hesitation, or squeamishness in the face of doing what’s necessary wasn’t a sign of a good or virtuous political leader – it was a sign they weren’t cut out for the job. Under this notion, the good political leader won’t just accept getting their hands dirty, they’ll do it whenever necessary without batting an eyelid.


Ethics Explainer: Double-Effect Theory

Double-effect theory recognises that a course of action might have a variety of ethical effects, some ‘good’ and some ‘bad’.

It can be seen as a way of balancing consequentialist and deontological approaches to ethics.

According to the theory, an action with both good and bad effects may be ethical as long as:

  • Only the good consequences are intended (we don’t want the bad effects to occur; they’re just inescapable, even if they can be foreseen).
  • The good done by the action outweighs the harm it inflicts.
  • The bad effect is not the means by which the good effect occurs (we can’t do evil to bring about good – the good and bad consequences have to occur simultaneously).
  • The act we are performing is not unethical for some other reason (for example, an attack on human dignity).
  • We seek to minimise, if possible, the unintended and inadvertent harm that we cause.

Double-effect is best explained through a classic thought experiment: the trolley problem.

Imagine a runaway train carriage is hurtling down the tracks toward five railroad workers. The workers are wearing earmuffs and unable to hear the carriage approaching. You have no way of warning them. However, you do have access to a lever which will divert the train onto a side-track on which only one person is working. Should you pull the lever and kill the one man to save five lives?

Take a moment to think about what you would do and your reasons for doing it. Now, consider this alternative.

The train is still hurtling toward the five workers, but this time there’s no lever. Instead, you’re a lightweight person standing on a bridge above the railroad. Next to you is a very large man who would be heavy enough to stop the train. You could push him onto the tracks to stop the train, but doing so would kill him. Should you push him off the bridge?

Again, think about what you would do and why you would do it.

Did you say ‘yes’ in the first scenario and ‘no’ in the second? That’s the most common response, but why? After all, in each case you’re killing one person to save five. According to many consequentialists that would be the right thing to do. By refusing to push the man off the bridge, are we being inconsistent?

Double-effect theory provides a way of consistently explaining the difference in our responses.

In the first case, our intention is to save five lives. An unintended but foreseeable consequence of pulling the lever is the death of one worker. Our intended act (pulling a lever to redirect a train) isn’t intrinsically wrong, and because the stakes are sufficiently high, the good consequences outweigh the bad. The negative outcome is a side-effect of our good action, and so we are permitted to pull the lever.

In the second case, the death of the heavy man is not a side-effect. Rather, it is the means (pushing the man off the bridge to stop the train) by which we achieve our goal (saving the five men). His death is not an unavoidable side-effect occurring at the same time as the good deed; it is causally prior to, and directly linked to, the good outcome.

This fact has ethical significance because it changes the structure of the action.

Instead of ‘saving lives whilst unavoidably causing someone to die’, it is a case of ‘killing one person deliberately in order to save five’. In the lever scenario, we don’t need the one worker to die in order to save the five. In the bridge scenario, we do need the heavy man to die. Which means that when we push him, we are intentionally killing him.
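
As a rough illustration, the five double-effect conditions can be read as a checklist. The Python sketch below is our own simplification – the field names and the encodings of the two trolley scenarios are illustrative assumptions, not a standard formalisation – but it shows why the lever case passes the test while the bridge case fails.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """An action with good and bad effects, scored against the five conditions."""
    bad_effect_intended: bool    # do we want the bad effect to occur?
    good_outweighs_harm: bool    # does the good done outweigh the harm?
    bad_is_means_to_good: bool   # is the harm the mechanism of the good?
    otherwise_unethical: bool    # is the act wrong for some independent reason?
    harm_minimised: bool         # have we minimised unintended harm?

def permissible_by_double_effect(a: Action) -> bool:
    """True only if all five double-effect conditions are satisfied."""
    return (not a.bad_effect_intended
            and a.good_outweighs_harm
            and not a.bad_is_means_to_good
            and not a.otherwise_unethical
            and a.harm_minimised)

# Lever case: the worker's death is foreseen but not intended,
# and is not the means by which the five are saved.
lever = Action(bad_effect_intended=False, good_outweighs_harm=True,
               bad_is_means_to_good=False, otherwise_unethical=False,
               harm_minimised=True)

# Bridge case: the heavy man's death is the very means of stopping the train.
bridge = Action(bad_effect_intended=False, good_outweighs_harm=True,
                bad_is_means_to_good=True, otherwise_unethical=False,
                harm_minimised=True)

print(permissible_by_double_effect(lever))   # True: pulling the lever is permitted
print(permissible_by_double_effect(bridge))  # False: pushing the man is not
```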

Double-effect is used in a range of different contexts. In medical ethics it can be used to explain why it would be ethical for a pro-life pregnant woman to take life-saving medicine even if it would likely kill her unborn child (unintended side-effect). It also explains the actions of doctors who increase the dose of opiates to end pain – even though they know that the dosage will end the patient’s life.

In military ethics it explains how an air strike which causes some unavoidable ‘collateral damage’ (the death or injury of non-combatants) might still be permissible – assuming it meets the criteria described above and involves the proportionate and discriminate use of force.


Ethics Explainer: Logical Fallacies

A logical fallacy occurs when an argument contains flawed reasoning. These arguments cannot be relied on to make truth claims. There are two general kinds of logical fallacies: formal and informal.

First off, let’s define some terms.

  • Argument: a group of statements made up of one or more premises and one conclusion.
  • Premise: a statement that provides reason or support for the conclusion
  • Truth: a property of statements, i.e. that they are the case
  • Validity: a property of arguments, i.e. that they are logically well structured
  • Soundness: a property of arguments, i.e. that they are valid and their premises are true
  • Conclusion: the final statement in an argument that indicates the idea the arguer is trying to prove

Formal logical fallacies

These are arguments with true premises but a flaw in their logical structure. Here’s an example:

  • Premise 1: In summer, the weather is hot.
  • Premise 2: The weather is hot.
  • Conclusion: Therefore, it is summer.

Even though premises 1 and 2 are true, the argument is invalid: it affirms the consequent, using an effect (hot weather) to infer a cause (summer), when hot weather can also occur outside summer. Therefore, statement 3 (the conclusion) can’t be trusted.
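
The diagnosis can be checked formally. A minimal Lean sketch (proposition names are illustrative): the pattern ‘Summer implies Hot; it is Hot; therefore it is Summer’ is refuted by a countermodel where it is hot but not summer.

```lean
-- Affirming the consequent: from (Summer → Hot) and Hot, infer Summer.
-- Countermodel: take Summer := False and Hot := True. Both premises hold
-- (anything follows from False, and Hot is true), but the conclusion fails.
example : ¬ ∀ (Summer Hot : Prop), (Summer → Hot) → Hot → Summer := by
  intro h
  exact h False True (fun f => f.elim) True.intro
```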

Informal logical fallacies 

These are arguments with false premises – they are based on claims that are not true. Even if the logical structure is valid, the argument becomes unsound. For example:

  • Premise 1: All men have hairy beards.
  • Premise 2: Tim is a man.
  • Conclusion: Therefore, Tim has a hairy beard.

Statement 1 is false – there are plenty of men without hairy beards. Statement 2 is true. Though the logical structure is valid, the argument is still unsound, so the conclusion can’t be trusted: Tim may or may not have a hairy beard.

A famous example of an argument that is valid and sound – its premises are true and its logical structure is watertight – is as follows, and is formalised below.

  • Premise 1: All men are mortal.
  • Premise 2: Socrates is a man.
  • Conclusion: Socrates is mortal.
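
This argument does check out. As a minimal Lean sketch (the predicate names are ours), the conclusion follows by applying the universal first premise to Socrates – the structure is valid, and with true premises the argument is sound.

```lean
-- The classic syllogism, formalised with illustrative names.
variable (Person : Type) (Man Mortal : Person → Prop)

example (socrates : Person)
    (h1 : ∀ p, Man p → Mortal p)  -- Premise 1: all men are mortal
    (h2 : Man socrates) :         -- Premise 2: Socrates is a man
    Mortal socrates :=            -- Conclusion: Socrates is mortal
  h1 socrates h2
```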

It’s important to look out for logical fallacies in the arguments people make. Bad arguments can sometimes lead to true conclusions, but they give us no reason to trust those conclusions – we might have missed something, or the conclusion might not hold in every case.