Philosophy must (and can) thrive outside universities

A recent article in ABC Religion by Steve Fuller described philosophy as being “at the crossroads”. The article explores philosophy’s relationship to universities and what living a “philosophical life” really looks like.

Reading it, my mind whisked me back to some of my earliest days at The Ethics Centre. Before returning to Sydney, I had the good fortune to complete my doctorate at Cambridge – one of the great universities of the world. While there, I mastered the disciplines of academic philosophy. I also learned the one lesson my supervisor offered me at our first meeting – that I should always “go for the jugular”. As it happens, I was quite good at drawing blood.

Perhaps this was a young philosopher’s sport because, as I grew older and read more deeply, I came to realise what I’d learned to do was not really consistent with the purpose and traditions of philosophy at all. Rather, I had become something of an intellectual bully – more concerned with wounding my opponents than with finding the ‘truth’ in the matter being discussed.

This realisation was linked to my re-reading of Plato – and his account of the figure of Socrates who, to this day, remains my personal exemplar of a great philosopher.

The key to my new understanding of Socrates lay in my realisation that, contrary to what I had once believed, he was not a philosophical gymnast deliberately trying to tie his interlocutors in knots (going for the jugular). Rather, he was a man sincerely wrestling, alongside others, with some of the toughest questions faced by humanity in order to better understand them. What is justice? What is a good life? How are we to live?

The route to any kind of answer worth holding is incredibly difficult – and I finally understood (I was a slow learner) that Socrates subjected his own ideas to the same critical scrutiny he required of others.

In short, he was totally sincere when he said that he really did not know anything. All of his questioning was a genuine exploration involving others who, in fact, did claim to ‘know’. That is why he would bail up people in the agora (the town square) who were heading off to administer ‘justice’ in the Athenian courts.

Surely, Socrates would say, if you are to administer justice – then you must know what it is. As it turned out, they did not.

The significance of Socrates’ work in the agora was not lost on me. Here was a philosopher working in the public space. The more I looked, the more it seemed that this had been so for most of the great thinkers.

So that is what I set out to do.

One of my earliest initiatives was to head down to Martin Place, in the centre of Sydney, where I would set up a circle of 10 plastic chairs and two cardboard signs that said something like, “If you want to talk to a philosopher about ideas, then take a seat”. And there I would sit – waiting for others.

Without fail they would come – young, old, rich, poor – wanting to talk about large, looming matters in their lives. I remember cyclists discussing their place on our roads, and school children discussing their willingness to cheat in exams (because they thought society’s message was ‘do whatever it takes’).

Occasionally, people would come from overseas – having heard of this odd phenomenon. A memorable occasion involved a discussion with a very senior and learned rabbi from Amsterdam – the then global head (I think) of Progressive Judaism. On another occasion, a woman brought her mother (visiting from England) to discuss her guilt at no longer believing in God. I remember we discussed what it might mean to feel guilt in relation to a being whose existence you denied. There were few answers – but some useful insights.

Anyway, I came to imagine a whole series of philosophers’ circles dotted around Martin Place and other parts of Sydney (and perhaps Australia). After all, why should I be the only philosopher pursuing this aspect of the philosophical life? So I reached out to the philosophy faculty at Sydney University – thinking (naively, as it turned out) I would have a rush of colleagues wishing to join me.

Alas – not one was interested. The essence of their message was that they doubted the public would be able to engage with ‘real philosophy’ – that the techniques and language needed for philosophy would be bewildering to non-philosophers. I suspect there was also an undeclared fear of being exposed to their fellow citizens in such a vulnerable position.

Actually, I still don’t really know what led to such a wholesale rejection of the idea.

However, I think it was a great pity that other philosophers felt more comfortable within the walls of their universities than out in the wider world.

I doubt that anything I write or say will be quoted in the centuries to come. However, I would not, for a moment, change the choice I made to step outside of the university and work within the agora. Life then becomes messy and marvellous in equal measure. Everything needs to be translated into language anyone can understand (and I have found that this is possible without sacrificing an iota of philosophical nuance).


You constantly need to challenge the unthinking custom and practice that most people simply take for granted. This does not make you popular. You are constantly accused of being ‘unethical’ because you entertain ideas one group or another opposes. You please almost nobody. You cannot aim to be liked. And you have to deal with the rawness of people’s lives – discovering just how much the issues philosophers consider (especially in the field of ethics) really matter.

This is not to say that ‘academic’ philosophy should be abandoned. However, I can see no good reason why philosophers should think this is the only (or best) way to be a philosopher. Surely, there is room (and a need) for philosophers to live larger, more public lives.


I have scant academic publications to my name. However, at the height of the controversy surrounding the introduction of ethics classes for children not attending scripture in NSW, I enjoyed the privilege of being accused of “impiety” and “corrupting the youth” by the Anglican and Catholic Archbishops of Sydney. Why a ‘privilege’? Because these were precisely the charges levelled against Socrates. So far, I have avoided the hemlock. For a philosopher, what could be better than that?


‘Eye in the Sky’ and drone warfare

Warning – general plot spoilers to follow.

Collateral damage

Eye in the Sky begins as a joint British and US surveillance operation against known terrorists in Nairobi. During the operation, it becomes clear a terrorist attack is imminent, so the goal shifts from surveillance to ‘seek and destroy’.

Moments before firing on the compound, drone pilots Steve Watts (Aaron Paul) and Carrie Gershon (Phoebe Fox) see a young girl setting up a bread stand near the target. Is her life acceptable collateral damage if her death saves many more people?

In military ethics, the question of collateral damage is a central point of discussion. The principle of ‘non-combatant immunity’ requires that no civilian be intentionally targeted, but it doesn’t follow that all civilian casualties are unethical.

Most scholars – and some Eye in the Sky characters, such as Colonel Katherine Powell (Helen Mirren) – accept that even foreseeable casualties can be justified under certain conditions: for instance, if the attack is necessary, the military benefits outweigh the negative side effects, and all reasonable measures have been taken to avoid civilian casualties.

Risk-free warfare

The military and ethical advantages of drone strikes are obvious. By operating remotely, we remove the risk of our military men and women being physically harmed. Drone strikes are also becoming increasingly precise, and surveillance resources mean collateral damage can be minimised.

However, the damage radius of a missile strike drastically exceeds that of most infantry weapons – meaning the tools used by drones are often less discriminate than soldiers on the ground carrying rifles. If collateral damage is only justified when reasonable measures have been taken to reduce the risk to civilians, is drone warfare morally justified, or does it simply shift the risk away from our war fighters to the civilian population? The key question is what counts as a reasonable measure – how far may we go in protecting our own troops when doing so shifts risk onto civilians?

Eye in the Sky forces us to confront the ethical complexity of war.

Reducing risk can also have consequences for the morale of soldiers. Christian Enemark, for example, suggests that drone warfare marks “the end of courage”. He wonders in what sense we can call drone pilots ‘warriors’ at all.

The risk-free nature of a drone strike, he argues, means the pilot requires none of the courage that for millennia has distinguished the warrior from all other kinds of killers.

How then should drone operators be regarded? Are these grounded aviators merely technicians of death, at best deserving only admiration for their competent application of technical skills? If not, by what measure can they be reasonably compared to warriors?

Moral costs of killing

Throughout the film, military commanders Katherine Powell and Frank Benson (Alan Rickman) make a compelling consequentialist argument for killing the terrorists despite the fact it will kill the innocent girl. The suicide bombers, if allowed to escape, are likely to kill dozens of innocent people. If the cost of stopping them is one life, the ‘moral maths’ seems to check out.

Ultimately it is the pilot, Steve Watts, who has to take the shot. If he fires, it is by his hand a girl will die. This knowledge carries a serious ethical and psychological toll, even if he thinks it was the right thing to do.

There is evidence suggesting drone pilots suffer from Post Traumatic Stress Disorder (PTSD) and other forms of trauma at the same rates as pilots of manned aircraft. This can arise even if they haven’t killed any civilians. Drone pilots not only kill their targets, they observe them for weeks beforehand, coming to know their targets’ habits, families and communities. This means they humanise their targets in a way many manned pilots do not – and this too has psychological implications.

Who is responsible?

Modern military ethics insists that all warriors have a moral obligation to refuse illegal or unethical orders. This sits in contrast to older approaches, under which soldiers had an absolute duty to obey. St Augustine, an early writer on the ethics of war, called soldiers “swords in the hand” of their commanders.

In a sense, drone pilots are treated in the same way. In Eye in the Sky, a huge number of senior decision-makers debate whether or not to take the shot. However, as Powell laments, “no one wants to take responsibility for pulling the trigger”. Who is responsible? The pilot who has to press the button? The highest authority in the ‘kill chain’? Or the terrorists for putting everyone in this position to begin with?



Ethics Explainer: Naturalistic Fallacy


The naturalistic fallacy is an informal logical fallacy: the assumption that because something is ‘natural’ it must be good. It is closely related to the is/ought fallacy – trying to infer what ‘ought’ to be done from what ‘is’.

The is/ought fallacy occurs when statements of fact (or ‘is’) jump to statements of value (or ‘ought’) without explanation. The Scottish philosopher David Hume first identified it, observing a range of arguments in which writers would be using the terms ‘is’ and ‘is not’ and then suddenly start saying ‘ought’ and ‘ought not’.

For Hume, it was inconceivable that philosophers could jump from ‘is’ to ‘ought’ without showing how the two concepts were connected. What were their justifications?

If this seems weird, consider the following example where someone might say:

  1. It is true that smoking is harmful to your health.
  2. Therefore, you ought not to smoke.

The claim that you ‘ought’ not to smoke is not just saying it would be unhealthy for you to smoke. It says it would be unethical. Why? Lots of ‘unhealthy’ things are perfectly ethical. The assumption that facts lead us directly to value claims is what makes the is/ought argument a fallacy.

As it is, the argument above is unsound – much more is needed. Hume thought no matter what you add to the argument, it would be impossible to make the leap from ‘is’ to ‘ought’ because ‘is’ is based on evidence (facts) and ‘ought’ is always a matter of reason (at best) and opinion or prejudice (at worst).

Later, another philosopher named G.E. Moore coined the term naturalistic fallacy. He said arguments that used nature, or natural terms like ‘pleasant’, ‘satisfying’ or ‘healthy’ to make ethical claims, were unsound.

The naturalistic fallacy looks like this:

  1. Breastfeeding is the natural way to feed children.
  2. Therefore, mothers ought to breastfeed their children and ought not to use baby formula (because it is unnatural).

This is a fallacy. We act against nature all the time – with vaccinations, electricity, medicine – and much of this is perfectly ethical. Many natural things are good, but it does not follow that everything unnatural is bad. Assuming that it does is precisely what makes the naturalistic fallacy a fallacy.

Philosophers still debate this issue. For example, G. E. Moore believed in moral realism – that some things are objectively ‘good’ or ‘bad’, ‘right’ or ‘wrong’. This suggests there might be ‘ethical facts’ from which we can make value claims and which are different from ordinary facts. But that’s a whole new topic of discussion.


Male suicide is a global health issue in need of understanding

“This is a worldwide phenomenon in which men die at four times the rate of women. The four to one ratio gets closer to six or seven to one as people get older.”

That’s Professor Brian Draper, describing one of the most common causes of death among men: suicide.

Suicide is the cause of death with the highest gender disparity in Australia – an experience replicated in most places around the world, according to Draper.

So what is going on? Draper is keen to avoid baseless speculation – there are a lot of theories but not much we can say with certainty. “We can describe it, but we can’t understand it,” he says. One thing that seems clear to Draper is that it’s not a coincidence so many more men die by suicide than women.


“It comes back to masculinity – it seems to be something about being male,” he says.

“I think every country has its own way of expressing masculinity. In the Australian context not talking about feelings and emotions, not connecting with intimate partners are factors…”

The issue of social connection is also thought to be connected in some way. “There is broad reluctance by many men to connect emotionally or build relationships outside their intimate partners – women have several intimate relationships, men have a handful at most,” Draper says.

You hear this reflection fairly often. Peter Munro’s feature in the most recent edition of Good Weekend on suicide deaths among trade workers tells a similar story.

Mark, an interviewee, describes writing a suicide note and feeling completely alone until a Facebook conversation with his girlfriend “took the weight off his shoulders”. What would have happened if Mark had lost Alex? Did he have anyone else?

None of this, Draper cautions, means we can reduce the problem to idiosyncrasies of Aussie masculinity – toughness, ‘sucking it up’, alcohol… It’s a global issue.

“I’m a strong believer in looking at things globally and not in isolation. Every country will do it differently, but you’ll see these issues in the way men interact – I think it’s more about masculinity and the way men interact.”

Another piece of the puzzle might – Draper suggests – be early childhood. If your parents have suffered severe trauma, it’s likely to have an effect.

“If you are raised by damaged parents, it could be damaging to you. Children of survivors of concentration camps, horrendous experiences like the killing fields in Cambodia or in Australia the Stolen Generations…”


There is research backing this up. For instance, between 1988 and 1996 the children of Vietnam War veterans died by suicide at over three times the national average.

Draper is careful not to overstate it – there’s still so much we don’t know, but he does believe there’s something to early childhood experiences. “Sexual abuse in childhood still conveys suicide risk in your 70s and 80s … but there’s also emotional trauma from living with a person who’s not coping with their own demons.”

“I’m not sure we fully understand these processes.” The amount we still need to understand is becoming a theme.

What we don’t know is a source of optimism for Draper. “We’ve talked a lot about factors that might increase your risk but there’s a reverse side to that story.”

“A lot of our research is focussed predominantly on risk rather than protection. We don’t look at why things have changed for the better … For example, there’s been a massive reduction in suicides in men between 45-70 in the last 50 years.”

“Understanding what’s happened in those age groups would help.”

It’s pretty clear we need to keep talking – researchers, family, friends, support workers and those in need of support can’t act on what they don’t know.

If you or someone you know needs support, contact:

  • Lifeline 13 11 14

  • Men’s Line 1300 78 99 78

  • beyondblue 1300 224 636

  • Kids Helpline 1800 551 800



Ethics Explainer: Logical Fallacies


A logical fallacy occurs when an argument contains flawed reasoning. These arguments cannot be relied on to make truth claims. There are two general kinds of logical fallacies: formal and informal.

First off, let’s define some terms.

  • Argument: a group of statements made up of one or more premises and one conclusion.
  • Premise: a statement that provides reason or support for the conclusion
  • Truth: a property of statements, i.e. that they are the case
  • Validity: a property of arguments, i.e. that the conclusion follows logically from the premises
  • Soundness: a property of arguments, i.e. that they are valid and their premises are true
  • Conclusion: the final statement in an argument that indicates the idea the arguer is trying to prove

Formal logical fallacies

These are arguments that may have true premises but contain a flaw in their logical structure. Here’s an example:

  • Premise 1: In summer, the weather is hot.
  • Premise 2: The weather is hot.
  • Conclusion: Therefore, it is summer.

Even though premises 1 and 2 are true, the structure of the argument is flawed. It affirms the consequent – inferring the cause (summer) from the effect (hot weather) – even though the weather can be hot outside summer. The argument is invalid, so statement 3 (the conclusion) can’t be trusted.
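To see why this form fails, it helps to check every possible combination of truth values. Below is a minimal sketch in Python (our illustration, not part of the original explainer) that brute-forces the truth table: the flawed form has a counterexample (it can be hot without being summer), while the valid form, modus ponens, has none.

```python
# A minimal sketch: brute-force truth tables to test whether an argument
# form is valid, i.e. whether the conclusion is true in every case where
# all the premises are true.
from itertools import product

def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion):
    """Return True if no truth assignment makes all premises true
    while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample: true premises, false conclusion
    return True

# Affirming the consequent: "if summer then hot; it is hot; so it is summer."
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))   # False: p=False, q=True is a counterexample

# Modus ponens: "if summer then hot; it is summer; so it is hot."
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))   # True: the form is valid
```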

Informal logical fallacies 

These are arguments with false premises – they are based on claims that simply aren’t true. Even if the logical structure is valid, the argument is unsound. For example:

  • Premise 1: All men have hairy beards.
  • Premise 2: Tim is a man.
  • Conclusion: Therefore, Tim has a hairy beard.

Statement 1 is false – there are plenty of men without hairy beards. Statement 2 is true. Though the logical structure is valid (the conclusion follows from the premises), the argument is still unsound, so the conclusion can’t be relied on.

A famous example of an argument that is valid, has true premises, and is therefore sound is as follows.

  • Premise 1: All men are mortal.
  • Premise 2: Socrates is a man.
  • Conclusion: Socrates is mortal.

It’s important to look out for logical fallacies in the arguments people make. Bad arguments can sometimes lead to true conclusions, but they give us no reason to trust the reasoning that got us there. We might have missed something, or the conclusion might not hold in every case.


The myths of modern motherhood

It seems as if three successive waves of feminism haven’t resolved the chronic mismatch between the ideal of the ‘good’ and ‘happy’ mother and the realities of women’s lives.

Even if you consciously reject them, ideas about what a mother ought to be and ought to feel are probably there from the minute you wake up until you go to bed at night. Even in our age of increased gender equality it seems as if the culture loves nothing more than to dish out the myths about how to be a better mother (or a thinner, more fashionable, or better-looking one).

It’s not just the celebrity mums pushing their prams on magazine covers, or the continuing dearth of mothers on TV who are less than exceptionally good-looking, or that mothers in advertising remain ubiquitously obsessed with cleaning products and alpine-fresh scents. While TV dramas have pleasingly increased the handful of roles that feature working mothers, most are unduly punished in the twists of the melodramatic plot. They have wimpy husbands or damaged children, and of course TV’s bad characters are inevitably bad due to the shortcomings of their mothers (serial killers, for example, invariably have overbearing mothers or alcoholic mothers, or have never really separated from their mothers).

It seems we are living in an age of overzealous motherhood. Indeed, in a world in which the demands of the workplace have increased, so too the ideals of motherhood have become paradoxically more – not less – demanding. In recent years, commonly accepted ideas about what constitutes a barely adequate level of mothering have dramatically expanded to include extraordinary sacrifices of time, money, feelings, brains, social relationships, and indeed sleep.

In Australia, most mothers work. But recent studies show that working mothers now spend more time with their children than non-working mothers did in 1975. Working mothers achieve this extraordinary feat by sacrificing leisure, mental health, and even personal hygiene.

This is coupled with a new kind of anxious sermonising that is having a profound impact on mothers, especially among the middle class. In her book The Conflict, Elisabeth Badinter argues that an ideology of ‘Naturalism’ has given rise to an industry of experts advocating increasingly pristine forms of natural birth and natural pregnancy, as well as an ever-expanding list of increasingly time-intensive child rearing duties that are deemed to fall to the mother alone. These duties include most of the classic practices of 21st century child rearing, including such nostrums as co-sleeping, babywearing and breastfeeding-on-demand until the age of two.


Whether it is called Intensive Mothering or Natural Parenting, these new credos of motherhood are wholly taken up with the idea that there is a narrowly prescribed way of doing things. In the West, 21st century child rearing is becoming increasingly time-consuming, expert-guided, emotionally draining, and incredibly expensive. In historical terms, I would be willing to hazard a guess that never before has motherhood been so heavily scrutinised. It is no longer just a question of whether you should or should not eat strawberries or prawns or soft cheese, or, heaven forbid, junk food, while you are pregnant, but so too, the issue of what you should or should not feel has come under intense scrutiny.

Never before has there been such a microscopic investigation of a pregnant woman’s emotional state, before, during and after birth. Indeed, the construction of new psychological disorders for mothers appears to have become something of a psychological pastime, with the old list of mental disorders expanding beyond prenatal anxiety, postnatal depression, postpartum psychosis and the baby blues, to include the baby pinks (a label for a woman who is illogically and inappropriately happy to be a mother), as well as Prenatal and Postnatal Stress Disorder, Maternal Anxiety and Mood Imbalance and Tokophobia—the latter being coined at the start of this millennium as a diagnosis for an unreasonable fear of giving birth.

The problem with the way in which this pop psychology is played out in the media is that it performs an endless re-inscription of the ideologies of mothering. These ideologies are often illogical, contradictory and – one suspects – more often dictated by what is convenient for society and not what is actually good for the children and parents involved. Above all else, mothers should be ecstatically happy mothers, because sad mothers are failed mothers. Indeed, according to the prevailing wisdom, unhappy mothers are downright unnatural, if not certifiably insane.


Little wonder there has been an outcry against such miserable standards of perfection. The same decade that saw the seeming triumph of the ideologies of Intensive and Natural mothering, also saw the rise of what has been called the ‘Parenting Hate Read’ — a popular outpouring of books and blogs written by mothers (and even a few fathers) who frankly confess that they are depressed about having children for no better reason than it is often mind-numbing, exhausting and dreadful. Mothers love their children, say the ‘Parenting Hate Reads’, but they do not like what is happening to their lives.

The problem is perhaps only partly about the disparity between media images of ecstatically happy mummies and the reality of women’s lives. It is also because our ideas about happiness have grown impoverished. Happiness, as it is commonly understood in the western world, is made up of continuous moments of pleasure and the absence of pain.

These popular assumptions about happiness are of comparatively recent origin, emerging in the works of philosophers such as Jeremy Bentham, who argued in the 18th century that people act purely in their self-interest and the goal to which self-interest aspires is happiness. Ethical conduct, according to Bentham and James Mill (father to John Stuart), should therefore aspire to maximise pleasure and minimise pain.


This ready equation of goodness, pleasure and happiness flew in the face of ideas that had been of concern to philosophers since Aristotle argued that a person is not made happy by fleeting pleasures, but by fulfilment stemming from meaning and purpose. Or, as Nietzsche, the whirling dervish of 19th century philosophy, put it, “Man does not strive for happiness – only the Englishman does”.

Nevertheless, Western assumptions about happiness have remained broadly utilitarian, giving rise to the culturally constructed notion of happiness we see in TV commercials showing families becoming happier with every purchase, or in life coaches peddling the dubious hypothesis that self-belief can overcome the odds, whatever your social or economic circumstances.

Unless you are Mother Teresa, you have probably been spending your life up until the time you have children in a reasonably independent and even self-indulgent way. You work hard through the week but sleep in on the weekend. You go to parties. You come home drunk. You see your friends when you want. Babies have different ideas. They stick forks in electric sockets, go berserk in the car seat, and throw up on your work clothes. They want to be carried around in the day and wake in the night.

If society can solve its social problems then maybe parenting will cease to be a misery competition. Mothers might not be happy in a utilitarian or hedonistic sense but will lead rich and satisfying lives. Then maybe a stay-at-home dad can change a nappy without a choir of angels descending from heaven singing ‘Hallelujah’.

This is an edited extract from “On Happiness: New Ideas for The 21st Century” UWA Publishing.


Ending workplace bullying demands courage

Despite increasing measures to combat workplace harassment, bullies remain entrenched in organisations. Changes made to laws and regulations in order to stamp out bullying have instead transformed it into an underground set of behaviours. Now hidden, these behaviours often remain unaddressed.

In other cases, anti-bullying policies can actually work to support perpetrators. Where regulations specify what bullying is, some people will cleverly use those rules as a guide to work around. Although these people are no longer bullying in the narrow sense outlined by policies or regulations, their acts of shunning, scapegoating and ostracism have the same effect. Rules that explicitly define bullying create exemptions, or even permissions, for behaviours that do not meet the formal standard.

Because they are more difficult to notice or prove, these insidious behaviours can remain undetected for long periods. As Kipling Williams and Steve Nida argued in a 2011 research paper, “being excluded or ostracized is an invisible form of bullying that doesn’t leave bruises, and therefore we often underestimate its impact”.

The bruises, cuts and blows are less evident but the internal bleeding is real. This new psychological violence can have severe, long-term effects. According to Williams, “Ostracism or exclusion may not leave external scars, but it can cause pain that often is deeper and lasts longer than a physical injury”.


This is a costly issue for both individuals and organisations. No-one wins. Individuals can suffer symptoms akin to Post-Traumatic Stress Disorder. Organisations in which harassment occurs must endure lost time, absences, workers’ compensation claims, employee turnover, lack of productivity, the risk of costly and lengthy lawsuits, as well as a poor reputation.

So why does it continue?

First, bullies tend to be very good at office politics and working upwards, and attack those they consider rivals through innuendo and social networks. Bullies are often socially savvy, even charming. Because of this, they are able to strategically abuse co-workers while receiving positive work evaluations from managers.

In addition, anti-bullying policies aren’t the panacea they are sometimes painted to be. If they exist at all they are often ignored or ineffective. A 2014 report by corporate training company VitalSmarts showed that 96 percent of the 2283 people it surveyed had experienced workplace bullying. But only 7 percent knew someone who had used a workplace anti-bullying policy – the majority didn’t see it as an option. Plus, we now know some bullies use such policies as a base to craft new means of enacting their power – ones that aren’t yet defined as bullying behaviour by these policies.

Finally, cases often go unreported, undetected and unchallenged. This inaction rewards perpetrators and empowers them to continue behaving in the same way. This is confusing for the victim, who is stressed, unsure, and can feel isolated in the workplace. This undermines the confidence they need to report the bullying. Because of this, many opt for a less confrontational path – hoping it will go away in time. It usually doesn’t.


What can you do if a colleague is being shunned or ostracised by peers or managers? The first step is not to participate – though most people probably know that already. More important is not to become complicit by remaining silent. As 2016 Australian of the Year David Morrison famously said, “The standard you walk by is the standard you accept.”

The onus is on you to take positive steps against harassment where you witness it. By doing nothing you allow psychological attacks to continue. In this way, silent witnesses bear partial responsibility for the consequences of bullying. Moreover, unless the toxic culture that enables bullying is undone, logic says you could be the next victim.

However, merely standing up to harassment isn’t likely to be a cure-all. Tackling workplace bullying is a shared responsibility. It takes regulators, managers and individuals in cooperation with law, policy and healthy organisational culture.


Organisational leaders in particular need to express public and ongoing support for clearly worded policies. When they do, those policies begin to shape and inform the culture of an organisation rather than sit as standalone documents. It is critical that managers understand the impacts of bullying on culture, employee wellbeing, and their own personal liability.

When regulation fails – the dilemma most frequently seen today – we need to depend on individual moral character. Herein lies the ethical challenge. ‘Character’ is an underappreciated ethical trait in many executive education programs, but the moral virtues that form a person’s character are the foundation of ethical leadership.

A return to character might diminish the need for articles like this. In the meantime, workplace bullying provides us all with the opportunity to practise courage.


What your email signature says about you

Getting too many unethical business requests? Sreedhari Desai’s research suggests a quote in your email signature may be the answer to your woes.

In a recent study, Desai recruited subjects to participate in a virtual game to earn money. The subjects were told they’d earn more money if they could convince their fellow players to spread a lie without those players realising it. Basically, subjects had to trick their fellow players into believing a lie, and then get those players to spread it around the game.

What the subjects didn’t know is that all their fellow ‘players’ were in fact researchers studying how they would go about their deception. Subjects communicated with the researchers by email. Some of the researchers had a virtuous quote in their email signature – “Success without honor is worse than fraud”. Others had a neutral quote – “Success and luck go hand in hand”. Others had no quote at all.

And wouldn’t you know it? Subjects were less likely to try to recruit people with a virtuous quote in their email. The quote served as a “moral symbol”, shielding the person from receiving unethical requests from other players. In an interview with Harvard Business Review, Desai outlines what’s happening in these situations:

When someone is in a position to request an unethical thing, they may not consciously be thinking, “I won’t ask that person.” Instead, they may perceive a person as morally “pure” and feel that asking them to get “dirty” makes an ethical transgression even worse. Or they may be concerned that someone with moral character will just refuse the request.

So, if you want to keep your hands clean it may be as simple as updating your email signature. It won’t guarantee you’ll do the right thing when you’re tempted (there’s more to ethics than pretty words!) but it will ensure you’re tempted less.


And in case you’re looking for a virtuous quote for your email signature, we surveyed some of our staff for their favourite virtuous quotes. Here’s a sample:

  • “The unexamined life is not worth living” – Socrates
  • “No man wishes to possess the whole world if he must first become someone else” – Aristotle
  • “Protect me from what I want” – Jenny Holzer
  • “A true man goes on to the end of his endurance and then goes twice as far again” – Norwegian proverb
  • “Knowledge is no guarantee of good behaviour, but ignorance is a virtual guarantee of bad behaviour” – Martha Nussbaum

A small disclaimer to all of this – it might not work if you work with Australians. Apparently our propensity to cut down tall poppies and our discomfort with authority extend to moral postulations in email signatures. Instead of sanctimony, it seems to be fun or playful quotes that shield Australians from unethical requests. Desai explains:

“We’re studying how people react to moral symbols in Australia. Our preliminary study showed that people there were sceptical of moral displays. They seemed to think the bloke with the quote was being ‘holier than thou’ and probably had something to hide.”

So, as well as your favourite virtuous quote, you might want to bung a joke on the bottom of your emails to please your sceptical Antipodean colleagues, lest they lead you into temptation.


How should vegans live?

Ethical vegans make a concerted lifestyle choice based on ethical – rather than, say, dietary – concerns. But what are the ethical concerns that lead them to practise veganism?

In this essay, I focus exclusively on that significant portion of vegans who believe consuming foods that contain animal products to be wrong because they care about harm to animals, perhaps insofar as they have rights, perhaps just because they are sentient beings who can suffer, or perhaps for some other reason.

Throughout the essay, I take this conviction as a given, that is, I do not evaluate it, but instead investigate what lifestyle is in fact consistent with caring about harm to animals, which I will begin by calling consistent veganism. I argue that the lifestyle that consistently follows from this underlying conviction behind many people’s veganism is in fact distinct from a vegan lifestyle.

Let us also begin by interpreting veganism in the way that many vegans – and most who are aware of veganism – would. A vegan consumes a diet containing no animal products. In conceiving of veganism in terms of what a diet contains, there seems to be an intuition about the moral relevance of directness, according to which it matters how directly the harm is related to the consumption of the food.

On this intuition, eating a piece of meat is worse than eating a certain amount of apples grown with pesticides that causes the same amount of harm, because the harm in the first case seems to be more directly related to the consumption of the food than in the second case. Harm from the pesticides seems to be a side-effect of eating the food, whereas the death of the animal for meat seems to be a means to eating the food.

Even if we grant that this intuition is a good one in this case, it is not a good one in the case where the harm from the apples is greater than the harm from the meat. To eat the apples in that case is to fail to put one’s care about harm to animals first, which means going against the only thing that should motivate a consistent vegan.

Here, our intuition about the amount of harm caused is what seems to matter; if what we care about is harm to animals, then we should cause less rather than more harm to animals, and therefore, from the moral point of view, it seems that it is better to eat the meat than the apples. Let the conviction in this intuition be called the ‘less-is-best’ thesis.

Therefore, the intuition about the directness of the harm is only potentially relevant in situations where one has to choose between alternatives that cause the same amount of harm, or in situations where one does not know which causes more harm. The rest of the time, it seems that consistent vegans should not care about the directness of the harm, but instead care only about causing less rather than more harm to animals.


This requires an awareness of harm that extends further than relatively common considerations noted by vegans regarding animal products being used in the production process for—but not being contained in—foodstuffs like alcoholic drinks. Caring about harm to animals means caring about, less directly, accidental harm to (usually very small) animals from the harvesting process, and from products that have a significant carbon footprint, and thereby contribute to (and worsen) climate change, which is already starting to lead to countless deaths and harm to animals worldwide.

However, caring about harm to animals cannot plausibly require consistent vegans to cause no harm at all to animals. If it did, then in light of the last two examples given above, it seems it would require consistent veganism to be a particularly ascetic kind of prehistoric or Robinson Crusoe-type lifestyle, which would clearly be far too demanding.

In fact, it is probably the case that one cannot live without causing harm to animals, due to the trade-off in welfare between the other animals who are harmed by one’s own consumption and oneself (an animal), who is harmed if one cannot consume what one needs to survive. And it is certainly the case that not all humans could survive if no harm to other animals could be caused; this means that either human animals or non-human animals will be harmed regardless of how we live.

We could not all be morally obligated to live in a way in which we could not in fact all live (‘ought implies can’). This argument, together with the over-demandingness of such a lifestyle, gives two sufficient reasons why causing some harm to animals is morally permissible.

If it is the case that causing some harm to animals is morally permissible, then there is no clear reason why there should be a categorical difference in the moral status of acts – such as impermissibility, permissibility, and obligation – with regards to how they harm animals, apart from when these categorical differences arise only from vast differences in the amount of harm caused by different acts.


So, for example, shooting a vast number of animals merely for the pleasure of sport may well be impermissible, but only insofar as it causes a much greater amount of harm than alternative acts that one could reasonably do instead of hunting. It seems that the most reasonable position, then, which is in line with the less-is-best thesis, is that the morality of harm to animals is best viewed on a continuum on which causing less harm to animals is morally better and causing more harm to animals is morally worse, where the difference in morality is linked only to the difference in the amount of harm to animals.

Hitherto, I have said that it seems to be the case that consistent vegans care about causing less rather than more harm to animals. However, I claim that the less-is-best thesis should in fact be interpreted as having a wider application than merely harm caused by our actions or life lived. One’s care for animals should be further-reaching: rather than merely caring about harm one causes, a consistent vegan should care about acting or living in a way that leads to less rather than more harm to animals. The latter includes a concern about harm caused by others that one can prevent, which the former excludes as it is not harm caused by oneself.

The impact of social interaction on people’s lifestyles is an important way in which consistent vegans can act or live in a way that leads to less rather than more harm to animals. That nearly all vegans are in fact vegans because they were previously introduced to vegan ideas by others – rather than coming by them and becoming vegan through sheer introspection – is testimony to the impact of social interaction on people’s lifestyles, which in turn can be more or less harmful to animals.


Consistent vegans have the potential to build a broad social movement that encourages many others to lead lives that cause less harm to animals. But in order to do this, consistent vegans will have to persuade those who do not care about harm to animals (or let care about harm to animals impact their lifestyle) to lead a different kind of lifestyle, and if this recommended lifestyle is too demanding, many will reject it or simply not change, meaning that these people will continue to harm animals.


If these people are more likely to make lifestyle changes if the lifestyle suggested to them is less demanding, which for many – and probably a vast majority – will be the case, then consistent vegans could bring about less harm to animals if they try to persuade these people to live lifestyles that optimally satisfy the trade-off between demandingness and personal harm to animals. This lifestyle that consistent vegans should attempt to persuade others to follow I shall call “environmentarianism”.

Why ‘environmentarianism’? And what is the content of environmentarianism? Care about harm to animals can be framed in terms of care for the environment, as the environment is partially – and in a morally important way—constituted by animals. This can be easily – and I believe quite intuitively – communicated to those who do care about harm to animals, and those who do not are likely to be more swayed by arguments that are framed in terms of concern for the environment than for animals. Concern for oneself, one’s loved ones, and one’s species – things that most people care greatly about – may be more easily read into the former than the latter, especially in light of impending climate change.

Environmentarianism, then, is the set of lifestyles that seek to reduce harm done to the environment (which is conceived in terms of harm to animals for consistent vegans) – as this matters morally for environmentarians – regardless of which sphere of life this reduction of harm comes from. Be it rational or not, ascribing the title and social institution of ‘environmentarian’ to one’s life will, for many, make them more likely to lead a life that is more in line with caring about harm to animals; people often attach themselves to these titles, as the dogmatic behaviour of many vegans shows.


Moreover, environmentarianism can be practised to a more or less radical – and thus moral – extent. Some may prefer to reduce total harm to animals by a given amount by making the sacrifice of having a vegan diet, but not compromising on their regular car journey to work, or perhaps by opting out of what for them may be uncomfortable proselytising, whilst others may find taking on the latter two easier than maintaining the strict vegan diet (that they perhaps used to have). Some may reduce total harm by an even greater amount – and hence lead a morally better lifestyle – by having a vegan diet and by refraining from harmful transport and by actively suggesting environmentarianism to others.

As an environmentarian may begin by making very small changes, one can be welcomed into a social movement and be eased in to making further lifestyle changes over time, rather than being put off by the strictness of veganism or the antagonism typical of some vegans. Environmentarianism has the great advantage of making it easier for the many who cannot face the idea of never eating animal products again to live more ethically-driven lives.

It follows from all this, then, that consistent vegans should be (especially stringent) environmentarians. For a given impact on the total harm to animals, it does not matter whether that impact comes from a totally vegan diet. In fact, to be fixated on dietary purity to the neglect of other spheres of one’s life – in the way that many vegans are – is to contradict a care about harm to animals. Given this care, what matters is lowering the level of harm to animals, regardless of how this harm is done.

This article was republished with permission from the Journal of Practical Ethics.


Your donation is only as good as the charity you give it to

It’s admirable, perhaps even required, for those of us living comfortable lives in the developed world to give some time or money to ‘good causes’. We recognise fortune has smiled upon us and desire to ‘give back’ in some way – because we have so much and others have so little, we seek to redress the imbalance.

But while there is strong agreement the fortunate have an opportunity – some would say an obligation – to use their resources to make life better for the less well-off, the discussion often ends here. That is, we think people should do something but we’re often not concerned with exactly what that something is. Should you donate to a local homeless shelter, a national medical research charity or a big international NGO? For many people, it’s unclear there’s any difference between these actions – surely they all make the world a better place?

Let’s think about that.

Say there were only two charities in the world, The Cupcake Foundation and the Real Actual Medicines Trust. The Cupcake Foundation distributes delicious cupcakes to people in hospital. The Real Actual Medicines Trust distributes medicines that will completely cure a patient’s disease. Both charities are undeniably making the world a better place but it’s pretty clear that one is doing much more good than the other. I think most people would choose to donate their money to The Real Actual Medicines Trust.


Take a different example. Let’s say that a third charity, Medicines ’R Us, is also distributing medicines, but they use generic medicines that cost half as much to produce as the on-brand medicines distributed by The Real Actual Medicines Trust. This means a $20 donation to Medicines ’R Us will cure twice as many people as a $20 donation to The Real Actual Medicines Trust. Surely – given we can do twice as much good donating to the former than the latter – we should give our money to Medicines ’R Us, thus doubling our impact.
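To make the arithmetic concrete, here is a minimal sketch in Python, with hypothetical cost figures chosen to match the example above, comparing the two charities by cost per cure:

```python
# A minimal sketch with hypothetical figures: comparing charities by
# cost-effectiveness (cost per cure) rather than by good intentions alone.
charities = {
    "Real Actual Medicines Trust": 20.0,  # assumed dollars per cure (on-brand)
    "Medicines 'R Us": 10.0,              # assumed dollars per cure (generic)
}

donation = 20.0
for name, cost_per_cure in charities.items():
    cures = donation / cost_per_cure
    print(f"${donation:.0f} to {name}: {cures:.0f} cure(s)")
# $20 to Real Actual Medicines Trust: 1 cure(s)
# $20 to Medicines 'R Us: 2 cure(s)
```

The same comparison works for real charities once the assumed costs are replaced with measured cost-effectiveness data.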

These seem like contrived examples, but in reality the differences between charities are astounding – not just a factor of two or three, as in the example above. Some charities are ten, or even 100, times more effective than others!

Asking these questions gets to the heart of why it is we help people. Are we altruistic because we think that making others better off is good in and of itself? Or do we just do it to stave off our middle-class guilt? To impress other people? To show off?

Undoubtedly many good works are motivated by these latter factors. But are they really good reasons? Are they the reasons we’d choose? I’d certainly like to think that when I donate to charity I’m doing it for the benefit of other people. The sense of wellbeing that I get afterwards is nice but ultimately not morally important.

If it’s really other people’s benefit we care about, we need to think hard about how we choose where we spend our time and money on good causes. We don’t have infinite time and money. Every choice to donate something to one cause is an implicit choice not to donate it to another.

It might seem harsh to judge any charity less effective than another. Doesn’t that mean the people served by the less effective charity lose out? It does. But not making comparisons doesn’t mean everyone gets the help they need. It means the people who would be helped by the more effective charities lose out. More people are getting sick, even dying, because donors are choosing not to give more effectively.

In the long run, hopefully we’ll be in a position to eradicate extreme poverty and disease from the world and we’ll have enough resources to fund all good causes. In the meantime, we should surely help as many people as we can.

Effective Altruism, a new social and philosophical movement, is emerging to try to answer a fundamental question: “what is the most good that we can do?”

Effective altruism is about using evidence and reason to find ways to make the world as good a place as it can be. It tries to view all people – wherever they are in the world, even those not yet born – as being equally deserving of living happy, healthy, dignified, flourishing lives. Its proponents try to focus on the best ways of doing good. It’s just like regular altruism in that it seeks to do good for others. However, by focusing on effectiveness, it seeks to do the most good possible.

This manifests in different ways. Some people try to find the most effective charities to donate to, others try to work out which career you should choose if you want to have the biggest impact. Others think about the long-run future of humanity, reasoning that if there’s even a small chance that humans could wipe ourselves out (say, in a nuclear winter, or a deadly engineered disease escaping a laboratory), avoiding such an outcome would be a huge benefit.


Many also take a pledge to give a fixed portion of their income to effective charities – often 10 percent – because it increases the chance that they’ll actually donate, rather than putting it off for another day. In all these cases people are motivated by their sense of empathy but guided by scientific evidence and reason.

It may seem strange to think we might have an obligation to donate to one set of charities rather than others. After all, surely if someone wants to donate their money to a charity that focuses on people in their local community or on a cause that’s particularly important to them, they have that right. Of course, donation – unlike taxes – is an individual choice, not a legal obligation. But when faced with a choice of whether to help 100 people or just one, is it really a difficult decision?