Ethics Explainer: Eudaimonia

The closest English word for the Ancient Greek term eudaimonia is probably “flourishing”.

The philosopher Aristotle used it as a broad concept to describe the highest good humans could strive toward – or a life ‘well lived’.

Though scholars translated eudaimonia as ‘happiness’ for many years, there are clear differences. For Aristotle, eudaimonia was achieved through living virtuously – or what you might describe as being good. This doesn’t guarantee ‘happiness’ in the modern sense of the word. In fact, it might mean doing something that makes us unhappy, like telling an upsetting truth to a friend.

Virtue is moral excellence. In practice, it means allowing something to act in harmony with its purpose. As an example, let’s take a virtuous carpenter. In their trade, the virtues would be excellences like an artistic eye, a steady hand, patience, creativity, and so on.

The eudaimon [yu-day-mon] carpenter is one who possesses and practices the virtues of their trade.

By extension, the eudaimon life is one dedicated to developing the excellences of being human. For Aristotle, this meant practicing virtues like courage, wisdom, good humour, moderation, kindness, and more.

Today, when we think about a flourishing person, virtue doesn’t always spring to mind. Instead, we think about someone who is relatively successful, healthy, and with access to a range of the good things in life. We tend to think flourishing equals good qualities plus good fortune.

This isn’t far from what Aristotle himself thought. Although he did believe the virtuous life was the eudaimon life, he argued our ability to practice the virtues was dependent on other things falling in our favour.

For instance, Aristotle thought philosophical contemplation was an intellectual virtue – but to have the time necessary for contemplation you would need to be wealthy. Wealth (as we all know) is not always a product of virtue.

Some of Aristotle’s conclusions seem distasteful by today’s standards. He believed ugliness was a hindrance to developing practical social virtues like friendship (because nobody would be friends with an ugly person).

However, there is something intuitive in the observation that the same person, transformed into the embodiment of social standards of beauty, would – everything else being equal – have more opportunities available to them.

In recognising our ability to practice virtue might be somewhat outside our control, Aristotle acknowledges our flourishing is vulnerable to misfortune. The things that happen to us can not only hurt us temporarily, but they can put us in a condition where our flourishing – the highest possible good we can achieve – is irrevocably damaged.

For ethics, this is important for three reasons.

First, when we’re thinking about the consequences of an action, we should take into account its impact on the flourishing of others. Second, it suggests we should do our best to eliminate as many barriers to flourishing as we possibly can. And third, it reminds us that living virtuously needs to be its own reward. It is no guarantee of success, happiness or flourishing – but it is still a central part of what gives our lives meaning.


Don’t throw the birth plan out with the birth water!

Just try mentioning ‘birth plans’ at a party and see what happens.

Hannah Dahlen – a midwife’s perspective

Mia Freedman once wrote about a woman who asked what her plan was for her placenta. Freedman felt birth plans were “most useful when you set them on fire and use them to toast marshmallows”. She labelled people who make these plans “birthzillas”, more interested in the birth than in having a baby.

In response, Tara Moss argued:

The majority of Australian women choose to birth in hospital and all hospitals do not have the same protocols. It is easy to imagine they would, but they don’t, not from state to state and not even from hospital to hospital in the same city. Even individual health practitioners in the same facility sometimes do not follow the same protocols.

The debate

Why the controversy over a woman and her partner writing down what they would like to have done or not done during their birth? The debate seems not to be about the birth plan itself, but about women taking control and ownership of their births and what happens to their bodies.

Some oppose birth plans on the basis that all experts should be trusted to have the best interests of both mother and baby in mind at all times. Others trust the mother as the person most concerned for her baby and believe women have the right to determine what happens to their bodies during this intimate, individual, and significant life event.

As a midwife of some 26 years, I wish we didn’t need birth plans. I wish our maternity system provided women with continuity of care so by the time a woman gave birth her care provider would fully know and support her well-informed wishes. Unfortunately, most women do not have access to continuity of care. They deal with shift changes, colliding philosophical frameworks, busy maternity units, and varying levels of skill and commitment from staff.

There are so many examples of interventions that are routine in maternity care but lack evidence they are necessary or are outright harmful. These include immediate clamping and cutting of the umbilical cord at birth, episiotomy, continuous electronic foetal monitoring, labouring or giving birth lying down, and unnecessary caesareans. Other deeply personal choices such as the use of immersion in water for labour and birth or having a natural birth of the placenta are often not presented as options, or are refused when requested.

The birth plan is a chance to raise and discuss your wishes with your healthcare provider. It provides insight into areas of further conversation before labour begins.

I once had a woman make three birth plans when she found out her baby was in a breech presentation at 36 weeks. One for a vaginal breech birth, one for a caesarean, and one for a normal birth if the baby turned. The baby turned and the first two plans were ditched. But she had been through each scenario and worked out what was important to her.

Bashi Hazard – a legal perspective

Birth plans were introduced in the 1980s by childbirth educators to help women shape their preferences in labour and to communicate with their care providers. Women say preparing birth plans increases their knowledge and ability to make informed choices, empowers them, and promotes their sense of safety during childbirth. Yet some women (including in Australia) report that their carefully laid plans are dismissed, overlooked, or ignored.

There appears to be some confusion about the legal status or standing of birth plans – confusion that is not reflective of international human rights principles or domestic law. The right to informed consent is a fundamental principle of medical ethics and human rights law and is particularly relevant to the provision of medical treatment. In addition, our common law starts from the premise that every human body is inviolate and cannot be subject to medical treatment without autonomous, informed consent.

Pregnant women are no exception to this human rights principle nor to the common law.

If you start from this legal and human rights premise, the authoritative status of a birth plan is very clear. It is the closest expression of informed consent that a woman can offer her caregiver prior to commencing labour. This is not to say she cannot change her mind but it is the premise from which treatment during labour or birth should begin.

Once you accept that a woman has the right to stipulate the terms of her treatment, the focus turns to any hostility and pushback from care providers to the preferences a woman has the right to assert in relation to her care.

Care providers who understand the significance of the human and legal right to informed consent begin discussing a woman’s options in labour and birth with her as early as the first antenatal visit. These discussions are used to advise, inform, and obtain an understanding of the woman’s preferences in the event of various contingencies. They build the trust needed to allow the care provider to safely and respectfully support the woman through pregnancy and childbirth. Such discussions are the cornerstone of woman-centred maternity healthcare.

Human Rights in Childbirth

Reports received by Human Rights in Childbirth indicate that care provider pushback and hostility towards birth plans occurs most in facilities with fragmented care or where policies are elevated over women’s individual needs. Mothers report their birth plans are criticised or outright rejected on the basis that birth is “unpredictable”. There is no logic in this. If anything, greater planning would facilitate smoother outcomes in the event of unanticipated eventualities.

In truth, it is not the case that these care providers don’t have a birth plan. There is a birth plan – one driven purely by care providers and hospital protocols without discussion with the woman. This offends the legal and human rights of the woman concerned and has been identified as a systemic form of abuse and disrespect in childbirth, and as a subset of violence against women.

It is essential that women discuss and develop a birth plan with their care providers from the very first appointment. This is a precious opportunity to ascertain your care provider’s commitment to recognising and supporting your individual and diverse needs.

Gauge your care provider’s attitude to your questions as well as their responses. Expect to repeat those discussions until you are confident that your preferences will be supported. Be wary of care providers who are dismissive, vague or non-responsive. Most importantly, switch care providers if you have any concerns. The law is on your side. Use it.

Making a birth plan – some practical tips

  1. Talk it through with your lead care provider. They can discuss your plans and make sure you understand the implications of your choices.
  2. Make sure your support network know your plan so they can communicate your wishes.
  3. Attending antenatal classes will help you feel more informed. You’ll discover what is available and the evidence behind your different options.
  4. Talk to other women about what has worked well for them, but remember your needs might be different.
  5. Remember you can change your mind at any point during labour and birth. What you say then is final, regardless of what the plan says.
  6. Try not to be adversarial in your language – you want people working with you, not against you. End the plan with something like “Thank you so much for helping make our birth special”.
  7. Stick to the important stuff.

Some tips on the specific content of your birth plan are available here.


Does ethical porn exist?

It’s hard to separate violence and sex in much of today’s internet pornography. Easily accessible content includes simulated rape, and women being slapped, punched, and subjected to slews of misogynistic insults.

It’s also harder than ever to deny that pornography use, given its addictive, misogynistic, and violent nature, has a range of negative impacts on consumers. First exposure to internet porn in Western countries now takes place before puberty for a significant fraction of children. A disturbingly high proportion of teenage boys and young men believe rape myths as a result of porn exposure. There is also evidence suggesting exposure to violent, X-rated material leads to a dramatic increase in the perpetration of sexual violence.

It is also difficult to deny that the practices of the porn industry are exploitative of performers themselves. Accounts such as the Netflix documentary Hot Girls Wanted depict cases of female performers agreeing to shoot a scene involving a particular act, only to be coerced on the spot by producers into a more hardcore scene not previously agreed to. Anecdotes suggest this isn’t uncommon.

While these facts about disturbing content and exploitative practices lead some people to believe consumption of internet porn is unethical or anti-feminist, they prompt others to ask whether there could be such a thing as ethical porn. Are the only objections to pornography circumstantial – based on violent content, exploitation or particular types of pornography? Or is there some deeper fact about porn – any porn – that renders it ethically objectionable?

Suppose the kind of porn commonly found online:

  • Depicted realistic, consensual, non-misogynistic, and safe sex – condoms and all.
  • Was free of exploitation (a pipe-dream, but let’s imagine).
  • Featured performers who fully and properly consented to everything filmed.
  • Was regulated so that only people who were educated and had other employment options were allowed to perform.
  • Involved no performers with a history of sexual abuse or underage porn exposure.
  • Required pristine sexual health as a prerequisite for becoming a porn performer.
  • Was made by an industry that had cut any ties it is alleged to have with sex trafficking and similarly exploitative activity.

If all this came true, would any plausible ethical objections to the production and consumption of pornography remain?

Before we can answer questions about the ethics of porn, we need to address fundamental questions about the ethics of sex.

One question is this: is sex simply another bodily pleasure, like getting a massage, or do sex acts have deeper significance? Philosopher Anne Barnhill describes sexual intercourse as a type of body language. She thinks that when you have sex with a person, you are not just going through physically pleasurable motions – you are expressing something to another person.

If you have sex with someone you care for deeply, this loving attitude is expressed through the body language of sex. But using the expressive act of sex for mere pleasure with a person you care little about can express a range of callous or hurtful attitudes. It can send the message that the other person is simply an object to be used.

Even if it doesn’t, the messages can be confusing. The body language of tender kissing, close bodily contact and caresses says one thing to a sexual partner, while the fact that one has few emotional strings attached – especially if this is stated beforehand – says another.

We know that such mixed messages are often painful. The human brain is flooded with oxytocin – the same bonding chemical responsible for attaching mothers to their children – when humans have sex.  There is a biological basis to the claim that ‘casual sex’ is a contradiction in terms. Sex bonds people to each other, whether we want this to happen or not. It is a profound and relationally significant act.

Let’s bring these ideas about the specialness of sex back to the discussion about porn. If the above ideas about sex are correct, there is cause to doubt that sex is the sort of thing people in a casual or even non-existent relationship should be paid to perform. So long as there are ethical problems with casual sex itself, there will be ethical problems with consuming filmed casual sex.

So what should we say about porn made by adults in a loving relationship, as much ‘amateur’ (unpaid) pornography is?  Suppose we have a film made by a happily married couple who love each other deeply and simply want to film and show realistic, affectionate, loving sex.  Could consumption of such material pass as ethical?

Maybe it could, but many doubts remain. Porn consumption can become a refuge that stops people otherwise capable of the daunting but character-building work of seeking a meaningful sexual relationship with a real person from doing so. Porn (even of the relatively wholesome kind described above) carries no risk of rejection, requires no relational effort and doesn’t demand consideration of another person’s sexual wishes or preferences.

Because it promises high reward for little effort, porn has the potential to prolong adolescence – that phase of life dominated by lone sexual fantasies – and be a disincentive to grow into the complicated, sexual relationship building of adulthood.

Based on this line of thinking, there may still be something unvirtuous about the consumption of porn, even porn that was produced ethically. Perhaps the only truly ethical sexually explicit film would be one made by people in a loving relationship and seen only by them.


Anzac Day: militarism and masculinity don’t mix well in modern Australia

While Prime Minister, Tony Abbott penned a passionate column on the relevance of Anzac Day to modern Australia. For Abbott, the Anzacs serve as the moral role models that Australians should seek to emulate. He wrote, “We hope that in striving to emulate their values, we might rise to the challenges of our time as they did to theirs”.

The notion that the Anzacs embody a quintessentially Australian spirit is a century old. Australia’s official World War I correspondent C.E.W. Bean wrote that Gallipoli was the crucible in which the rugged resilience and camaraderie of (white) Australian masculinity, forged in the bush, was decisively tested and proven on the world stage.

At the time, this was a potent way of making sense out of the staggering loss of 8000 Australian lives in a single military campaign. Since then, it has been common for politicians and journalists to claim that Australia was ‘baptised’ in the ‘blood and fire’ of Gallipoli.

However, public interest in Anzac Day has fluctuated over the course of the 20th century. Ambivalence over Australia’s role in the Vietnam War had a major role in dampening enthusiasm from the 1970s.

The election of John Howard in 1996 signalled a new era for the Anzac myth. The ‘digger’ was, for Prime Minister Howard, the embodiment of Australian mateship, loyalty and toughness. Since then, government funding has flowed to Anzac-related school curricula as well as related books, films and research projects. Old war memorials have been refurbished and new ones built. Attendance at Anzac events in Australia and overseas has swelled.

On Anzac Day, we are reminded how crucial it is for individuals to be willing to forgo self-interest in exchange for the common national good. Theoretically, Anzac Day teaches us not to be selfish and reminds us of our duties to others. But it does so at a cost, because military role models bring with them militarism – which sees the horror and tragedy of war as not only a justifiable but a desirable way to solve problems.

The dark side to the Anzac myth is a view of violence as powerful and creative. Violence is glorified as the forge of masculinity, nationhood and history. In this process, the acceptance and normalisation of violence culminates in celebration.

The renewed focus on the Anzac legend in Australian consciousness has brought with it a pronounced militarisation of Australian history, in which our collective past is reframed around idealised incidents of conflict and sacrifice. This effectively takes the politics out of war, justifying ongoing military deployment in conflict overseas, and stultifying debate about the violence of invasion and colonisation at home.

In the drama of militarism, the white, male and presumptively heterosexual soldier is the hero. The Anzac myth makes him the archetypical Australian, consigning the alternative histories of women, Aboriginal and Torres Strait Islander peoples, and sexual and ethnic minorities to the margins. I’d argue that for right-wing nationalist groups, the Anzacs have come to represent their nostalgia for a racially purer past. They have aggressively protested against attempts to critically analyse Anzac history.

Militarism took on a new visibility during Abbott’s time as Prime Minister. Current and former military personnel have been appointed to major civilian policy and governance roles. Police, immigration, customs, and border security staff have adopted military-style uniforms and arms. The number of former military personnel entering state and federal politics has risen significantly in the last 15 years.

The notion that war and conflict are the ultimate test of Australian masculinity and nationhood has become the dominant understanding not only of Anzac Day but, arguably, of Australian identity. Is it any wonder that a study compiled by McCrindle Research found 34% of males, and 42% of Gen Y males, would enlist in a war that mirrored WWI if it occurred today?

This exaltation of violence sits uncomfortably alongside efforts to reduce and ultimately eradicate the use of violence in civil and intimate life. Across the country we are grappling with an epidemic of violence against women and between men. But when war is positioned as the fulcrum of Australian history, when our leaders privilege force in policy making, and when military service is seen as the ultimate form of public service, is it any wonder that boys and men turn to violence to solve problems and create a sense of identity?

In his writings on the dangers of militarism, psychologist and philosopher William James called for a “moral equivalent of war” – a form of moral education less predisposed to militarism and its shortcomings.

Turning away from militarism does not mean devaluing the military or forgetting about Australia’s military history. It means turning away from conflict as the dominant lens through which we understand our heritage and shared community. It means abjuring force as a means of solving problems and seeking respect. However, it also requires us to articulate an alternative ethos weighty enough to act as a substitute for militarism.

At a recent domestic violence conference in Sydney, Professor Bob Pease called for the rejection of the “militarisation of masculinity”, arguing that men’s violence in war was linked to men’s violence against women. At the same time, however, he called on us to foster “a critical ethic of care in men”, recognising that men who value others and care for them are less prone to violence.

For as long as militarism and masculinity are fused in the Australian imagination, it’s hard to see how this ethos of care can take root. It seems that the glorification of violence in our past is at odds with our aspirations for a violence-free future. The question is whether we value this potential future more than an idealised past.


Academia’s wicked problem

What do you do when a crucial knowledge system is under-resourced, highly valued, and is having its integrity undermined? That’s the question facing those working in academic research and publishing. There’s a risk Australians might lose trust in one of the central systems on which we rely for knowledge and innovation.

It’s one of those problems that defies easy solutions, like obesity, terrorism or a tax system to suit an entire population. Academics call these “wicked problems” – meaning they’re resistant to easy solutions, not that they are ‘evil’.

Charles West Churchman, who first coined the term, described them as:

That class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values and where the ramifications in the whole system are thoroughly confusing.

The wicked problem I face day-to-day is that of research and publication ethics. Though most academics do their best within a highly pressured system, I see many issues, which span a continuum starting with cutting corners in the preparation of manuscripts and ending with outright fraud.

It’s helpful to know whether the problem we are facing is a wicked one or not. It can help us to rethink the problem, understand why conventional problem-solving approaches have failed and encourage novel approaches, even if solutions are not readily available.

Though publication ethics, which considers academic work submitted for publication, has traditionally been considered a problem solely for academic journal editors and publishers, it is necessarily entwined with research ethics – issues related to the actual performance of the academic work. For example, unethical human experimentation may only come to light at the time of publication though it clearly originates much earlier.

Consider the ethical issues surrounding peer review, the process by which academic experts (peers) assess the work of others.

Though imperfect, formalisation of peer review has become an important mark of quality for a journal. Peer review by experts, mediated by journal editors, usually determines whether a paper is published. Though seemingly simple, there are many points where the system can be gamed or even completely subverted – a major one being in the selection of reviewers.

As the number of academics and submissions to journals increases, editors face a logistical challenge in keeping track of an ever-increasing number of submissions needing review. But the rise in submissions has not been matched by a rise in editors – many of whom are volunteers – so editors are overworked and often don’t have a big enough circle of reviewers to call on for the papers being submitted.

A simple approach to increase the pool of reviewers adopted by a number of journals is to allow authors to suggest reviewers for their paper via the online peer review system. These suggestions can be valuable if overseen by editors who can assess reviewers’ credentials. But they are already overworked and often handling work at the edge of their area of expertise, meaning time is at a premium.

Given the pressure editors are under, the system is vulnerable to subversion. It’s always been a temptation for authors to submit the name of reviewers who they believed would view their work favourably. Recently, a small number took it a step further, suggesting fake reviewer names for their papers.

These fake reviewers (usually organised via a third party) promptly submitted favourable reviews, which led to papers being inappropriately accepted for publication. The consequences were severe – papers had to be retracted, with consequent potential reputational damage to the journal, editors, authors and their institutions. Note how a ‘simple’ view of a ‘wicked’ problem – that under-resourced editors can be helped by authors suggesting their reviewers – led to new and worse problems than before.

But why would some authors go to such extreme ends as submitting fake reviews? The answer takes us into a related problem – the way authors are rewarded for publications.

Manipulating peer review gives authors a higher chance of publication – and academic publications are crucial for being promoted at universities. Promotion often provides higher salary, prestige, perhaps less teaching allocation and other fringe benefits. So for those at the extreme, who lack the necessary skills to publish (or even firm command of academic English), it’s logical to turn to those who understand how to manipulate the system.

We could easily fix the problem of fake reviews by removing an author’s ability to suggest reviewers, but this would be treating a symptom rather than the cause: a perverse reward system for authors.

Removing author suggestions does nothing to help overworked editors deal more easily with the huge number of submissions they receive. Nor do editors have the power to address the underlying problem – an inappropriate system of academic incentives.

There are no easy solutions but accepting the complexity may at least help to understand what it is that needs to be solved. Could we change the incentive structure to reward authors for more than merely being published in a journal?

There are moves to understand these intertwined problems but all solutions will fail unless we come back to the first requirement for approaching a wicked problem – agreement it’s a problem shared by many. So while the issues are most notable in academic journals, we won’t find all the solutions there.


Philosophy must (and can) thrive outside universities

A recent article in ABC Religion by Steve Fuller described philosophy as being “at the crossroads”. The article explores philosophy’s relationship to universities and what living a “philosophical life” really looks like.

Reading it, my mind whisked me back to some of my earliest days at The Ethics Centre. Before returning to Sydney, I had the good fortune to complete my doctorate at Cambridge – one of the great universities of the world. While there, I mastered the disciplines of academic philosophy. I also learned the one lesson my supervisor offered me at our first meeting – that I should always “go for the jugular”. As it happens, I was quite good at drawing blood.

Perhaps this was a young philosopher’s sport because, as I grew older and read more deeply, I came to realise what I’d learned to do was not really consistent with the purpose and traditions of philosophy at all. Rather, I had become something of an intellectual bully – more concerned with wounding my opponents than with finding the ‘truth’ in the matter being discussed.

This realisation was linked to my re-reading of Plato – and his account of the figure of Socrates who, to this day, remains my personal exemplar of a great philosopher.

The key to my new understanding of Socrates lay in my realisation that, contrary to what I had once believed, he was not a philosophical gymnast deliberately trying to tie his interlocutors in knots (going for the jugular). Rather, he was a man sincerely wrestling, with others, with some of the toughest questions faced by humanity in order to better understand them. What is justice? What is a good life? How are we to live?

The route to any kind of answer worth holding is incredibly difficult – and I finally understood (I was a slow learner) that Socrates subjected his own ideas to the same critical scrutiny he required of others.

In short, he was totally sincere when he said that he really did not know anything. All of his questioning was a genuine exploration involving others who, in fact, did claim to ‘know’. That is why he would bail up people in the agora (the town square) who were heading off to administer ‘justice’ in the Athenian courts.

Surely, Socrates would say, if you are to administer justice – then you must know what it is. As it turned out, they did not.

The significance of Socrates’ work in the agora was not lost on me. Here was a philosopher working in the public space. The more I looked, the more it seemed that this had been so for most of the great thinkers.

So that is what I set out to do.

One of my earliest initiatives was to head down to Martin Place, in the centre of Sydney, where I would set up a circle of 10 plastic chairs and two cardboard signs that said something like, “If you want to talk to a philosopher about ideas, then take a seat”. And there I would sit – waiting for others.

Without fail they would come – young, old, rich, poor – wanting to talk about large, looming matters in their lives. I remember cyclists discussing their place on our roads, school children discussing their willingness to cheat in exams (because they thought the message of society is ‘do whatever it takes’).

Occasionally, people would come from overseas – having heard of this odd phenomenon. A memorable occasion involved a discussion with a very senior and learned rabbi from Amsterdam – the then global head (I think) of Progressive Judaism. On another occasion, a woman brought her mother (visiting from England) to discuss her guilt at no longer believing in God. I remember we discussed what it might mean to feel guilt in relation to a being you claimed did not exist. There were few answers – but some useful insights.

Anyway, I came to imagine a whole series of philosophers’ circles being dotted around Martin Place and other parts of Sydney (and perhaps Australia). After all, why should I be the only philosopher pursuing this aspect of the philosophical life? So I reached out to the philosophy faculty at Sydney University – thinking (naively, as it turned out) I would have a rush of colleagues wishing to join me.

Alas – not one was interested. The essence of their message was that they doubted the public would be able to engage with ‘real philosophy’ – that the techniques and language needed for philosophy would be bewildering to non-philosophers. I suspect there was also an undeclared fear of being exposed to their fellow citizens in such a vulnerable position.

Actually, I still don’t really know what led to such a wholesale rejection of the idea.

However, I think it a great pity that other philosophers felt more comfortable within the walls of their universities than out in the wider world.

I doubt that anything I write or say will be quoted in the centuries to come. However, I would not, for a moment, change the choice I made to step outside of the university and work within the agora. Life then becomes messy and marvellous in equal measure. Everything needs to be translated into language anyone can understand (and I have found that this is possible without sacrificing an iota of philosophical nuance).

You constantly need to challenge unthinking custom and practice most people simply take for granted. This does not make you popular. You are constantly accused of being ‘unethical’ because you entertain ideas one group or another opposes. You please almost nobody. You cannot aim to be liked. And you have to deal with the rawness of people’s lives – discovering just how much the issues philosophers consider (especially in the field of ethics) really matter.

This is not to say that ‘academic’ philosophy should be abandoned. However, I can see no good reason why philosophers should think this is the only (or best) way to be a philosopher. Surely, there is room (and a need) for philosophers to live larger, more public lives.

I have scant academic publications to my name. However, at the height of the controversy surrounding the introduction of ethics classes for children not attending scripture in NSW, I enjoyed the privilege of being accused of “impiety” and “corrupting the youth” by the Anglican and Catholic Archbishops of Sydney. Why a ‘privilege’? Because these were precisely the same charges alleged against Socrates. So far, I have avoided the hemlock. For a philosopher, what could be better than that?


‘Eye in the Sky’ and drone warfare

Warning – general plot spoilers to follow.

Collateral damage

Eye in the Sky begins as a joint British and US surveillance operation against known terrorists in Nairobi. During the operation, it becomes clear a terrorist attack is imminent, so the goals shift from surveillance to seek and destroy.

Moments before firing on the compound, drone pilots Steve Watts (Aaron Paul) and Carrie Gershon (Phoebe Fox) see a young girl setting up a bread stand near the target. Is her life acceptable collateral damage if her death saves many more people?

In military ethics, the question of collateral damage is a central point of discussion. The principle of ‘non-combatant immunity’ requires no civilian be intentionally targeted, but it doesn’t follow from this that all civilian casualties are unethical.

Most scholars and some Eye in the Sky characters, such as Colonel Katherine Powell (Helen Mirren), accept even foreseeable casualties can be justified under certain conditions – for instance, if the attack is necessary, the military benefits outweigh the negative side effects and all reasonable measures have been taken to avoid civilian casualties.

Risk-free warfare

The military and ethical advantages of drone strikes are obvious. By operating remotely, we remove the risk of our military men and women being physically harmed. Drone strikes are also becoming increasingly precise, and surveillance resources mean collateral damage can be minimised.

However, the damage radius of a missile strike drastically exceeds most infantry weapons – meaning the tools used by drones are often less discriminate than soldiers on the ground carrying rifles. If collateral damage is only justified when reasonable measures have been taken to reduce the risk to civilians, is drone warfare morally justified, or does it simply shift the risk away from our war fighters to the civilian population? The key question here is what counts as a reasonable measure – how much are we permitted to reduce the risk to our own troops?

Eye in the Sky forces us to confront the ethical complexity of war.

Reducing risk can also have consequences for the morale of soldiers. Christian Enemark, for example, suggests that drone warfare marks “the end of courage”. He wonders in what sense we can call drone pilots ‘warriors’ at all.

The risk-free nature of a drone strike means that he or she requires none of the courage that for millennia has distinguished the warrior from all other kinds of killers.

How then should drone operators be regarded? Are these grounded aviators merely technicians of death, at best deserving only admiration for their competent application of technical skills? If not, by what measure can they be reasonably compared to warriors?

Moral costs of killing

Throughout the film, military commanders Katherine Powell and Frank Benson (Alan Rickman) make a compelling consequentialist argument for killing the terrorists despite the fact it will kill the innocent girl. The suicide bombers, if allowed to escape, are likely to kill dozens of innocent people. If the cost of stopping them is one life, the ‘moral maths’ seems to check out.

Ultimately it is the pilot, Steve Watts, who has to take the shot. If he fires, it is by his hand a girl will die. This knowledge carries a serious ethical and psychological toll, even if he thinks it was the right thing to do.

There is evidence suggesting drone pilots suffer from Post Traumatic Stress Disorder (PTSD) and other forms of trauma at the same rates as pilots of manned aircraft. This can arise even if they haven’t killed any civilians. Drone pilots not only kill their targets, they observe them for weeks beforehand, coming to know their targets’ habits, families and communities. This means they humanise their targets in a way many manned pilots do not – and this too has psychological implications.

Who is responsible?

Modern military ethics insist all warriors have a moral obligation to refuse illegal or unethical orders. This sits in contrast to older approaches, by which soldiers had an absolute duty to obey. St Augustine, an early writer on the ethics of war, called soldiers “swords in the hand” of their commanders.

In a sense, drone pilots are treated in the same way. In Eye in the Sky, a huge number of senior decision-makers debate whether or not to take the shot. However, as Powell laments, “no one wants to take responsibility for pulling the trigger”. Who is responsible? The pilot who has to press the button? The highest authority in the ‘kill chain’? Or the terrorists for putting everyone in this position to begin with?


Ethics Explainer: Naturalistic Fallacy

The naturalistic fallacy is an informal logical fallacy which argues that if something is ‘natural’ it must be good. It is closely related to the is/ought fallacy – when someone tries to infer what ‘ought’ to be done from what ‘is’.

The is/ought fallacy occurs when statements of fact (or ‘is’) jump to statements of value (or ‘ought’) without explanation. It was first discussed by the Scottish philosopher David Hume, who observed a range of arguments in which writers would use the terms ‘is’ and ‘is not’ and then suddenly start saying ‘ought’ and ‘ought not’.

For Hume, it was inconceivable that philosophers could jump from ‘is’ to ‘ought’ without showing how the two concepts were connected. What were their justifications?

If this seems weird, consider the following example where someone might say:

  1. It is true that smoking is harmful to your health.
  2. Therefore, you ought not to smoke.

The claim that you ‘ought’ not to smoke is not just saying it would be unhealthy for you to smoke. It says it would be unethical. Why? Lots of ‘unhealthy’ things are perfectly ethical. The assumption that facts lead us directly to value claims is what makes the is/ought argument a fallacy.

As it is, the argument above is unsound – much more is needed. Hume thought no matter what you add to the argument, it would be impossible to make the leap from ‘is’ to ‘ought’ because ‘is’ is based on evidence (facts) and ‘ought’ is always a matter of reason (at best) and opinion or prejudice (at worst).

Later, another philosopher named G.E. Moore coined the term naturalistic fallacy. He said arguments that used nature, or natural terms like ‘pleasant’, ‘satisfying’ or ‘healthy’ to make ethical claims, were unsound.

The naturalistic fallacy looks like this:

  1. Breastfeeding is the natural way to feed children.
  2. Therefore, mothers ought to breastfeed their children and ought not to use baby formula (because it is unnatural).

This is a fallacy. We act against nature all the time – with vaccinations, electricity, medicine – and much of this is ethical. Many natural things are good, but it doesn’t follow that everything natural is good or that everything unnatural is bad. Assuming that it does is precisely what makes the naturalistic fallacy a fallacy.

Philosophers still debate this issue. For example, G. E. Moore believed in moral realism – that some things are objectively ‘good’ or ‘bad’, ‘right’ or ‘wrong’. This suggests there might be ‘ethical facts’ from which we can make value claims and which are different from ordinary facts. But that’s a whole new topic of discussion.


Male suicide is a global health issue in need of understanding

“This is a worldwide phenomenon in which men die at four times the rate of women. The four to one ratio gets closer to six or seven to one as people get older.”

That’s Professor Brian Draper, describing one of the most common causes of death among men: Suicide.

Suicide is the cause of death with the highest gender disparity in Australia – an experience replicated in most places around the world, according to Draper.

So what is going on? Draper is keen to avoid baseless speculation – there are a lot of theories but not much we can say with certainty. “We can describe it, but we can’t understand it,” he says. One thing that seems clear to Draper is that it’s not a coincidence so many more men die by suicide than women.

“It comes back to masculinity – it seems to be something about being male,” he says.

“I think every country has its own way of expressing masculinity. In the Australian context not talking about feelings and emotions, not connecting with intimate partners are factors…”

The issue of social connection is also thought to be connected in some way. “There is broad reluctance by many men to connect emotionally or build relationships outside their intimate partners – women have several intimate relationships, men have a handful at most,” Draper says.

You hear this reflection fairly often. Peter Munro’s feature in the most recent edition of Good Weekend on suicide deaths among trade workers tells a similar story.

Mark, an interviewee, describes writing a suicide note and feeling completely alone until a Facebook conversation with his girlfriend “took the weight off his shoulders”. What would have happened if Mark had lost Alex? Did he have anyone else?

None of this, Draper cautions, means we can reduce the problem to idiosyncrasies of Aussie masculinity – toughness, ‘sucking it up’, alcohol… It’s a global issue.

“I’m a strong believer in looking at things globally and not in isolation. Every country will do it differently, but you’ll see these issues in the way men interact – I think it’s more about masculinity and the way men interact.”

Another piece of the puzzle might – Draper suggests – be early childhood. If your parents have suffered severe trauma, it’s likely to have an effect.

“If you are raised by damaged parents, it could be damaging to you. Children of survivors of concentration camps, horrendous experiences like the killing fields in Cambodia or in Australia the Stolen Generations…”

There is research backing this up. For instance, between 1988 and 1996 the children of Vietnam War veterans died by suicide at over three times the national average.

Draper is careful not to overstate it – there’s still so much we don’t know, but he does believe there’s something to early childhood experiences. “Sexual abuse in childhood still conveys suicide risk in your 70s and 80s … but there’s also emotional trauma from living with a person who’s not coping with their own demons.”

“I’m not sure we fully understand these processes.” The amount we still need to understand is becoming a theme.

What we don’t know is a source of optimism for Draper. “We’ve talked a lot about factors that might increase your risk but there’s a reverse side to that story.”

“A lot of our research is focussed predominantly on risk rather than protection. We don’t look at why things have changed for the better … For example, there’s been a massive reduction in suicides in men between 45-70 in the last 50 years.”

“Understanding what’s happened in those age groups would help.”

It’s pretty clear we need to keep talking – researchers, family, friends, support workers and those in need of support can’t act on what they don’t know.

If you or someone you know needs support, contact:

  • Lifeline 13 11 14

  • Men’s Line 1300 78 99 78

  • beyondblue 1300 224 636

  • Kids Helpline 1800 551 800


Ethics Explainer: Logical Fallacies

A logical fallacy occurs when an argument contains flawed reasoning. These arguments cannot be relied on to make truth claims. There are two general kinds of logical fallacies: formal and informal.

First off, let’s define some terms.

  • Argument: a group of statements made up of one or more premises and one conclusion.
  • Premise: a statement that provides reason or support for the conclusion
  • Truth: a property of statements, i.e. that they are the case
  • Validity: a property of arguments, i.e. that the conclusion follows logically from the premises
  • Soundness: a property of arguments, i.e. that they are valid and their premises are true
  • Conclusion: the final statement in an argument that indicates the idea the arguer is trying to prove

Formal logical fallacies

These are arguments whose premises may be true but whose logical structure is flawed. Here’s an example:

  • Premise 1: In summer, the weather is hot.
  • Premise 2: The weather is hot.
  • Conclusion: Therefore, it is summer.

Even though premises 1 and 2 may be true, the structure of the argument is flawed. It reasons backwards from an effect (hot weather) to a presumed cause (summer), when the weather can be hot outside summer too. Because the conclusion doesn’t follow from the premises, the argument is invalid, so statement 3 (the conclusion) can’t be trusted.

Informal logical fallacies 

These are arguments with false premises – they rest on claims that are not true. Even if the logical structure is valid, the argument is unsound. For example:

  • Premise 1: All men have hairy beards.
  • Premise 2: Tim is a man.
  • Conclusion: Therefore, Tim has a hairy beard.

Statement 1 is false – there are plenty of men without hairy beards. Statement 2 is true. Though the logical structure is valid (the conclusion follows from the premises), the argument is still unsound because it relies on a false premise, so its conclusion can’t be trusted.

A famous example of an argument that is valid, has true premises, and is therefore sound is as follows.

  • Premise 1: All men are mortal.
  • Premise 2: Socrates is a man.
  • Conclusion: Socrates is mortal.

It’s important to look out for logical fallacies in the arguments people make. Bad arguments can sometimes lead to true conclusions, but they give us no reason to trust those conclusions – we might have missed something, or the conclusion might not hold in every case.