Anzac Day: militarism and masculinity don’t mix well in modern Australia

In 2016, the then Prime Minister Tony Abbott penned a passionate column on the relevance of Anzac Day to modern Australia. For Abbott, the Anzacs serve as the moral role models that Australians should seek to emulate. He wrote, “We hope that in striving to emulate their values, we might rise to the challenges of our time as they did to theirs”.

The notion that the Anzacs embody a quintessentially Australian spirit is a century old. Australia’s official World War I correspondent, C.E.W. Bean, wrote that Gallipoli was the crucible in which the rugged resilience and camaraderie of (white) Australian masculinity, forged in the bush, was decisively tested and proven on the world stage.

At the time, this was a potent way of making sense out of the staggering loss of 8000 Australian lives in a single military campaign. Since then, it has been common for politicians and journalists to claim that Australia was ‘baptised’ in the ‘blood and fire’ of Gallipoli.

However, public interest in Anzac Day has fluctuated over the course of the 20th century. Ambivalence over Australia’s role in the Vietnam War played a major part in dampening enthusiasm from the 1970s.

The election of John Howard in 1996 signalled a new era for the Anzac myth. The ‘digger’ was, for Prime Minister Howard, the embodiment of Australian mateship, loyalty and toughness. Since then, government funding has flowed to Anzac-related school curricula as well as related books, films and research projects. Old war memorials have been refurbished and new ones built. Attendance at Anzac events in Australia and overseas has swelled.

On Anzac Day, we are reminded how crucial it is for individuals to be willing to forgo self-interest in exchange for the common national good. Theoretically, Anzac Day teaches us not to be selfish and reminds us of our duties to others. But it does so at a cost, because military role models bring with them militarism – which treats the horror and tragedy of war as not only a justifiable but a desirable way to solve problems.

The dark side to the Anzac myth is a view of violence as powerful and creative. Violence is glorified as the forge of masculinity, nationhood and history. In this process, the acceptance and normalisation of violence culminates in celebration.

The renewed focus on the Anzac legend in Australian consciousness has brought with it a pronounced militarisation of Australian history, in which our collective past is reframed around idealised incidents of conflict and sacrifice. This effectively takes the politics out of war, justifying ongoing military deployment in conflict overseas, and stultifying debate about the violence of invasion and colonisation at home.

In the drama of militarism, the white, male and presumptively heterosexual soldier is the hero. The Anzac myth makes him the archetypal Australian, consigning the alternative histories of women, Aboriginal and Torres Strait Islander peoples, and sexual and ethnic minorities to the margins. I’d argue that for right-wing nationalist groups, the Anzacs have come to represent their nostalgia for a racially purer past. These groups have aggressively protested against attempts to critically analyse Anzac history.

Militarism took on a new visibility during Abbott’s time as Prime Minister. Current and former military personnel have been appointed to major civilian policy and governance roles. Police, immigration, customs, and border security staff have adopted military-style uniforms and arms. The number of former military personnel entering state and federal politics has risen significantly in the last 15 years.

The notion that war and conflict are the ultimate test of Australian masculinity and nationhood has become the dominant understanding not only of Anzac Day but, arguably, of Australian identity. Little wonder, then, that a study by McCrindle Research found that 34% of males – and 42% of Gen Y males – say they would enlist in a war that mirrored WWI if it occurred today.

This exaltation of violence sits uncomfortably alongside efforts to reduce and ultimately eradicate the use of violence in civil and intimate life. Across the country we are grappling with an epidemic of violence against women and between men. But when war is positioned as the fulcrum of Australian history, when our leaders privilege force in policy making, and when military service is seen as the ultimate form of public service, is it any wonder that boys and men turn to violence to solve problems and create a sense of identity?

The glorification of violence in our past is at odds with our aspirations for a violence-free future.

In his writings on the dangers of militarism, psychologist and philosopher William James called for a “moral equivalent of war” – a form of moral education less predisposed to militarism and its shortcomings.

Turning away from militarism does not mean devaluing the military or forgetting about Australia’s military history. It means turning away from conflict as the dominant lens through which we understand our heritage and shared community. It means abjuring force as a means of solving problems and seeking respect. However, it also requires us to articulate an alternative ethos weighty enough to act as a substitute for militarism.

At a recent domestic violence conference in Sydney, Professor Bob Pease called for the rejection of the “militarisation of masculinity”, arguing that men’s violence in war was linked to men’s violence against women. At the same time, however, he called on us to foster “a critical ethic of care in men”, recognising that men who value others and care for them are less prone to violence.

For as long as militarism and masculinity are fused in the Australian imagination, it’s hard to see how this ethos of care can take root. It seems that the glorification of violence in our past is at odds with our aspirations for a violence-free future. The question is whether we value this potential future more than an idealised past.


Academia’s wicked problem

What do you do when a crucial knowledge system is under-resourced, highly valued, and is having its integrity undermined? That’s the question facing those working in academic research and publishing. There’s a risk Australians might lose trust in one of the central systems on which we rely for knowledge and innovation.

It’s one of those problems that defies easy solutions, like obesity, terrorism or designing a tax system to suit an entire population. Academics call these “wicked problems” – ‘wicked’ meaning resistant to resolution, not ‘evil’.

Charles West Churchman, who coined the term, described them as:

That class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values and where the ramifications in the whole system are thoroughly confusing.

The wicked problem I face day-to-day is that of research and publication ethics. Though most academics do their best within a highly pressured system, I see many issues. They span a continuum, starting with cutting corners in the preparation of manuscripts and ending in outright fraud.

It’s helpful to know whether the problem we are facing is a wicked one or not. It can help us to rethink the problem, understand why conventional problem-solving approaches have failed and encourage novel approaches, even if solutions are not readily available.

Publication ethics, which concerns academic work submitted for publication, has traditionally been considered a problem solely for academic journal editors and publishers. But it is necessarily entwined with research ethics – the issues arising from how the academic work is actually performed. For example, unethical human experimentation may only come to light at the time of publication, though it clearly originates much earlier.

Consider the ethical issues surrounding peer review, the process by which academic experts (peers) assess the work of others.

Though imperfect, formalisation of peer review has become an important mark of quality for a journal. Peer review by experts, mediated by journal editors, usually determines whether a paper is published. Though seemingly simple, there are many points where the system can be gamed or even completely subverted – a major one being in the selection of reviewers.

As the number of academics and of submissions to journals increases, editors face a logistical challenge in keeping track of an ever-increasing number of submissions needing review. However, the rise in submissions has not been matched by a rise in editors – many of whom are volunteers – so editors are overworked and often don’t have a big enough circle of reviewers to call on for the papers being submitted.

A simple approach to increasing the pool of reviewers, adopted by a number of journals, is to allow authors to suggest reviewers for their paper via the online peer review system. These suggestions can be valuable if overseen by editors who can assess the suggested reviewers’ credentials. But editors are already overworked and often handling work at the edge of their area of expertise, so time is at a premium.

Given the pressure editors are under, the system is vulnerable to subversion. It has always been tempting for authors to suggest reviewers they believe will view their work favourably. Recently, a small number took it a step further, suggesting fake reviewer names for their papers.

These fake reviewers (usually organised via a third party) promptly submitted favourable reviews, which led to papers being inappropriately accepted for publication. The consequences were severe – papers had to be retracted, with potential reputational damage to the journal, editors, authors and their institutions. Note how a ‘simple’ view of a ‘wicked’ problem – under-resourced editors can be helped by authors suggesting their own reviewers – led to new and worse problems than before.

But why would some authors go to such extreme ends as submitting fake reviews? The answer takes us into a related problem – the way authors are rewarded for publications.

Manipulating peer review gives authors a higher chance of publication – and academic publications are crucial for being promoted at universities. Promotion often brings a higher salary, prestige, perhaps a lighter teaching allocation and other fringe benefits. So for those at the extreme, who lack the necessary skills to publish (or even a firm command of academic English), it’s logical to turn to those who understand how to manipulate the system.

We could easily fix the problem of fake reviews by removing an author’s ability to suggest reviewers, but this would be treating a symptom rather than the cause: a perverse reward system for authors.

Removing author suggestions does nothing to help overworked editors deal with the huge number of submissions they receive. Nor do editors have the power to address the underlying problem – an inappropriate system of academic incentives.

There are no easy solutions but accepting the complexity may at least help to understand what it is that needs to be solved. Could we change the incentive structure to reward authors for more than merely being published in a journal?

There are moves to understand these intertwined problems but all solutions will fail unless we come back to the first requirement for approaching a wicked problem – agreement it’s a problem shared by many. So while the issues are most notable in academic journals, we won’t find all the solutions there.


Philosophy must (and can) thrive outside universities

A recent article in ABC Religion by Steve Fuller described philosophy being “at the crossroads”. The article explores philosophy’s relationship to universities and what living a “philosophical life” really looks like.

Reading it, my mind whisked me back to some of my earliest days at The Ethics Centre. Before returning to Sydney, I had the good fortune to complete my doctorate at Cambridge – one of the great universities of the world. While there, I mastered the disciplines of academic philosophy. However, I also learned the one lesson my supervisor offered me at our first meeting – that I should always “go for the jugular”. As it happens, I was quite good at drawing blood.

Perhaps this was a young philosopher’s sport because, as I grew older and read more deeply, I came to realise what I’d learned to do was not really consistent with the purpose and traditions of philosophy at all. Rather, I had become something of an intellectual bully – more concerned with wounding my opponents than with finding the ‘truth’ in the matter being discussed.

This realisation was linked to my re-reading of Plato – and his account of the figure of Socrates who, to this day, remains my personal exemplar of a great philosopher.

The key to my new understanding of Socrates lay in my realisation that, contrary to what I had once believed, he was not a philosophical gymnast deliberately trying to tie his interlocutors in knots (going for the jugular). Rather, he was a man sincerely wrestling, alongside others, with some of the toughest questions faced by humanity, in order to understand them better. What is justice? What is a good life? How are we to live?

The route to any kind of answer worth holding is incredibly difficult – and I finally understood (I was a slow learner) that Socrates subjected his own ideas to the same critical scrutiny he required of others.

In short, he was totally sincere when he said that he really did not know anything. All of his questioning was a genuine exploration involving others who, in fact, did claim to ‘know’. That is why he would bail up people in the agora (the town square) who were heading off to administer ‘justice’ in the Athenian courts.

Surely, Socrates would say, if you are to administer justice – then you must know what it is. As it turned out, they did not.

The significance of Socrates’ work in the agora was not lost on me. Here was a philosopher working in the public space. The more I looked, the more it seemed that this had been so for most of the great thinkers.

So that is what I set out to do.

One of my earliest initiatives was to head down to Martin Place, in the centre of Sydney, where I would set up a circle of 10 plastic chairs and two cardboard signs that said something like, “If you want to talk to a philosopher about ideas, then take a seat”. And there I would sit – waiting for others.

Without fail they would come – young, old, rich, poor – wanting to talk about large, looming matters in their lives. I remember cyclists discussing their place on our roads, school children discussing their willingness to cheat in exams (because they thought the message of society is ‘do whatever it takes’).

Occasionally, people would come from overseas – having heard of this odd phenomenon. A memorable occasion involved a discussion with a very senior and learned rabbi from Amsterdam – the then global head (I think) of Progressive Judaism. On another occasion, a woman brought her mother (visiting from England) to discuss her guilt at no longer believing in God. I remember we discussed what it might mean to feel guilt in relation to a being you claim does not exist. There were few answers – but some useful insights.

Anyway, I came to imagine a whole series of philosophers’ circles dotted around Martin Place and other parts of Sydney (and perhaps Australia). After all, why should I be the only philosopher pursuing this aspect of the philosophical life? So I reached out to the philosophy faculty at Sydney University – thinking (naively, as it turned out) that I would have a rush of colleagues wishing to join me.

Alas – not one was interested. The essence of their message was that they doubted the public would be able to engage with ‘real philosophy’ – that the techniques and language needed for philosophy would be bewildering to non-philosophers. I suspect there was also an undeclared fear of being exposed to their fellow citizens in such a vulnerable position.

Actually, I still don’t really know what led to such a wholesale rejection of the idea.

However, I think it a great pity that other philosophers have felt more comfortable within the walls of their universities than out in the wider world.

I doubt that anything I write or say will be quoted in the centuries to come. However, I would not, for a moment, change the choice I made to step outside of the university and work within the agora. Life then becomes messy and marvellous in equal measure. Everything needs to be translated into language anyone can understand (and I have found that this is possible without sacrificing an iota of philosophical nuance).

You constantly need to challenge unthinking custom and practice most people simply take for granted. This does not make you popular. You are constantly accused of being ‘unethical’ because you entertain ideas one group or another opposes. You please almost nobody. You cannot aim to be liked. And you have to deal with the rawness of people’s lives – discovering just how much the issues philosophers consider (especially in the field of ethics) really matter.

This is not to say that ‘academic’ philosophy should be abandoned. However, I can see no good reason why philosophers should think this is the only (or best) way to be a philosopher. Surely, there is room (and a need) for philosophers to live larger, more public lives.

I have scant academic publications to my name. However, at the height of the controversy surrounding the introduction of ethics classes for children not attending scripture in NSW, I enjoyed the privilege of being accused of “impiety” and “corrupting the youth” by the Anglican and Catholic Archbishops of Sydney. Why a ‘privilege’? Because these were precisely the same charges alleged against Socrates. So far, I have avoided the hemlock. For a philosopher, what could be better than that?


‘Eye in the Sky’ and drone warfare

Warning – general plot spoilers to follow.

Collateral damage

Eye in the Sky begins as a joint British and US surveillance operation against known terrorists in Nairobi. During the operation, it becomes clear a terrorist attack is imminent, so the goals shift from surveillance to seek and destroy.

Moments before firing on the compound, drone pilots Steve Watts (Aaron Paul) and Carrie Gershon (Phoebe Fox) see a young girl setting up a bread stand near the target. Is her life acceptable collateral damage if her death saves many more people?

In military ethics, the question of collateral damage is a central point of discussion. The principle of ‘non-combatant immunity’ requires no civilian be intentionally targeted, but it doesn’t follow from this that all civilian casualties are unethical.

Most scholars, and some Eye in the Sky characters such as Colonel Katherine Powell (Helen Mirren), accept that even foreseeable civilian casualties can be justified under certain conditions – for instance, if the attack is necessary, if the military benefits outweigh the negative side effects, and if all reasonable measures have been taken to avoid civilian casualties.

Risk-free warfare

The military and ethical advantages of drone strikes are obvious. By operating remotely, we remove the risk of our military men and women being physically harmed. Drone strikes are also becoming increasingly precise, and surveillance resources mean collateral damage can be minimised.

However, the damage radius of a missile strike drastically exceeds most infantry weapons – meaning the tools used by drones are often less discriminate than soldiers on the ground carrying rifles. If collateral damage is only justified when reasonable measures have been taken to reduce the risk to civilians, is drone warfare morally justified, or does it simply shift the risk away from our war fighters to the civilian population? The key question here is what counts as a reasonable measure – how much are we permitted to reduce the risk to our own troops?

Eye in the Sky forces us to confront the ethical complexity of war.

Reducing risk can also have consequences for the morale of soldiers. Christian Enemark, for example, suggests that drone warfare marks “the end of courage”. He wonders in what sense we can call drone pilots ‘warriors’ at all.

The risk-free nature of a drone strike means that he or she requires none of the courage that for millennia has distinguished the warrior from all other kinds of killers.

How then should drone operators be regarded? Are these grounded aviators merely technicians of death, at best deserving only admiration for their competent application of technical skills? If not, by what measure can they be reasonably compared to warriors?

Moral costs of killing

Throughout the film, military commanders Katherine Powell and Frank Benson (Alan Rickman) make a compelling consequentialist argument for killing the terrorists despite the fact it will kill the innocent girl. The suicide bombers, if allowed to escape, are likely to kill dozens of innocent people. If the cost of stopping them is one life, the ‘moral maths’ seems to check out.

Ultimately it is the pilot, Steve Watts, who has to take the shot. If he fires, it is by his hand that a girl will die. This knowledge carries a serious ethical and psychological toll, even if he believes it is the right thing to do.

There is evidence suggesting drone pilots suffer from Post Traumatic Stress Disorder (PTSD) and other forms of trauma at the same rates as pilots of manned aircraft. This can arise even if they haven’t killed any civilians. Drone pilots not only kill their targets, they observe them for weeks beforehand, coming to know their targets’ habits, families and communities. This means they humanise their targets in a way many manned pilots do not – and this too has psychological implications.

Who is responsible?

Modern military ethics insist all warriors have a moral obligation to refuse illegal or unethical orders. This sits in contrast to older approaches, by which soldiers had an absolute duty to obey. St Augustine, an early writer on the ethics of war, called soldiers “swords in the hand” of their commanders.

In a sense, drone pilots are treated in the same way. In Eye in the Sky, a huge number of senior decision-makers debate whether or not to take the shot. However, as Powell laments, “no one wants to take responsibility for pulling the trigger”. Who is responsible? The pilot who has to press the button? The highest authority in the ‘kill chain’? Or the terrorists for putting everyone in this position to begin with?


Ethics Explainer: Naturalistic Fallacy

The naturalistic fallacy is an informal logical fallacy which argues that if something is ‘natural’ it must be good. It is closely related to the is/ought fallacy – when someone tries to infer what ‘ought’ to be done from what ‘is’.

The is/ought fallacy occurs when statements of fact (or ‘is’) jump to statements of value (or ‘ought’) without explanation. The Scottish philosopher David Hume first identified it, observing a range of arguments in which writers would move from using the terms ‘is’ and ‘is not’ to suddenly asserting ‘ought’ and ‘ought not’.

For Hume, it was inconceivable that philosophers could jump from ‘is’ to ‘ought’ without showing how the two concepts were connected. What were their justifications?

If this seems weird, consider the following example where someone might say:

  1. It is true that smoking is harmful to your health.
  2. Therefore, you ought not to smoke.

The claim that you ‘ought’ not to smoke is not just saying it would be unhealthy for you to smoke. It says it would be unethical. Why? Lots of ‘unhealthy’ things are perfectly ethical. The assumption that facts lead us directly to value claims is what makes the is/ought argument a fallacy.

As it is, the argument above is unsound – much more is needed. Hume thought no matter what you add to the argument, it would be impossible to make the leap from ‘is’ to ‘ought’ because ‘is’ is based on evidence (facts) and ‘ought’ is always a matter of reason (at best) and opinion or prejudice (at worst).

Later, another philosopher, G.E. Moore, coined the term naturalistic fallacy. He said arguments that use nature, or natural terms like ‘pleasant’, ‘satisfying’ or ‘healthy’, to make ethical claims are unsound.

The naturalistic fallacy looks like this:

  1. Breastfeeding is the natural way to feed children.
  2. Therefore, mothers ought to breastfeed their children and ought not to use baby formula (because it is unnatural).

This is a fallacy. We act against nature all the time – with vaccinations, electricity, medicine – and much of it is perfectly ethical. Many natural things are good, but it does not follow that everything natural is good or that everything unnatural is bad. Assuming it does is precisely the naturalistic fallacy.

Philosophers still debate this issue. For example, G. E. Moore believed in moral realism – that some things are objectively ‘good’ or ‘bad’, ‘right’ or ‘wrong’. This suggests there might be ‘ethical facts’ from which we can make value claims and which are different from ordinary facts. But that’s a whole new topic of discussion.


Male suicide is a global health issue in need of understanding

“This is a worldwide phenomenon in which men die at four times the rate of women. The four to one ratio gets closer to six or seven to one as people get older.”

That’s Professor Brian Draper, describing one of the most common causes of death among men: suicide.

Suicide is the cause of death with the highest gender disparity in Australia – an experience replicated in most places around the world, according to Draper.

So what is going on? Draper is keen to avoid baseless speculation – there are a lot of theories but not much we can say with certainty. “We can describe it, but we can’t understand it,” he says. One thing that seems clear to Draper is that it’s not a coincidence so many more men die by suicide than women.

“It comes back to masculinity – it seems to be something about being male,” he says.

“I think every country has its own way of expressing masculinity. In the Australian context not talking about feelings and emotions, not connecting with intimate partners are factors…”

The issue of social connection is also thought to be connected in some way. “There is broad reluctance by many men to connect emotionally or build relationships outside their intimate partners – women have several intimate relationships, men have a handful at most,” Draper says.

You hear this reflection fairly often. Peter Munro’s feature in the most recent edition of Good Weekend on suicide deaths among trade workers tells a similar story.

Mark, an interviewee, describes writing a suicide note and feeling completely alone until a Facebook conversation with his girlfriend “took the weight off his shoulders”. What would have happened if Mark had lost Alex? Did he have anyone else?

None of this, Draper cautions, means we can reduce the problem to idiosyncrasies of Aussie masculinity – toughness, ‘sucking it up’, alcohol… It’s a global issue.

“I’m a strong believer in looking at things globally and not in isolation. Every country will do it differently, but you’ll see these issues in the way men interact – I think it’s more about masculinity and the way men interact.”

Another piece of the puzzle might – Draper suggests – be early childhood. If your parents have suffered severe trauma, it’s likely to have an effect.

“If you are raised by damaged parents, it could be damaging to you. Children of survivors of concentration camps, horrendous experiences like the killing fields in Cambodia or in Australia the Stolen Generations…”

There is research backing this up. For instance, between 1988 and 1996 the children of Vietnam War veterans died by suicide at over three times the national average.

Draper is careful not to overstate it – there’s still so much we don’t know, but he does believe there’s something to early childhood experiences. “Sexual abuse in childhood still conveys suicide risk in your 70s and 80s … but there’s also emotional trauma from living with a person who’s not coping with their own demons.”

“I’m not sure we fully understand these processes.” The amount we still need to understand is becoming a theme.

What we don’t know is a source of optimism for Draper. “We’ve talked a lot about factors that might increase your risk but there’s a reverse side to that story.”

“A lot of our research is focussed predominantly on risk rather than protection. We don’t look at why things have changed for the better … For example, there’s been a massive reduction in suicides in men between 45-70 in the last 50 years.”

“Understanding what’s happened in those age groups would help.”

It’s pretty clear we need to keep talking – researchers, family, friends, support workers and those in need of support can’t act on what they don’t know.

If you or someone you know needs support, contact:

  • Lifeline 13 11 14

  • Men’s Line 1300 78 99 78

  • beyondblue 1300 224 636

  • Kids Helpline 1800 551 800


Ethics Explainer: Logical Fallacies

A logical fallacy occurs when an argument contains flawed reasoning. Arguments that contain fallacies cannot be relied on to establish the truth of their conclusions. There are two general kinds of logical fallacies: formal and informal.

First off, let’s define some terms.

  • Argument: a group of statements made up of one or more premises and one conclusion
  • Premise: a statement that provides reason or support for the conclusion
  • Truth: a property of statements, i.e. that what they assert is the case
  • Validity: a property of arguments, i.e. that the conclusion follows logically from the premises
  • Soundness: a property of arguments, i.e. that they are valid and all their premises are true
  • Conclusion: the final statement in an argument that indicates the idea the arguer is trying to prove

Formal logical fallacies

These are arguments whose premises may be true but whose logical structure is flawed. Here’s an example:

  • Premise 1: In summer, the weather is hot.
  • Premise 2: The weather is hot.
  • Conclusion: Therefore, it is summer.

Even though statements 1 and 2 are true, the argument’s structure is flawed. It affirms the consequent – its form is ‘if P then Q; Q is true; therefore P’ – treating hot weather as proof of summer, even though the weather can be hot outside summer. Because the conclusion does not follow from the premises, the argument is invalid and statement 3 (the conclusion) can’t be trusted.
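
To make the point concrete, here is a minimal sketch in Python (an illustration added here, not part of the original explainer) that brute-forces every combination of truth values for ‘it is summer’ and ‘the weather is hot’, looking for a case where both premises are true but the conclusion is false:

  # Check the "summer" argument form by brute force over truth values.
  from itertools import product

  def implies(p, q):
      # "If p then q" is only false when p is true and q is false.
      return (not p) or q

  counterexamples = []
  for summer, hot in product([True, False], repeat=2):
      premise_1 = implies(summer, hot)  # In summer, the weather is hot.
      premise_2 = hot                   # The weather is hot.
      conclusion = summer               # Therefore, it is summer.
      if premise_1 and premise_2 and not conclusion:
          counterexamples.append({"summer": summer, "hot": hot})

  print(counterexamples)  # [{'summer': False, 'hot': True}]

The single counterexample – hot weather outside summer – is all it takes to show the form is invalid, no matter how true the premises happen to be.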

Informal logical fallacies 

These are arguments built on false premises – claims that are simply not true. Even if the logical structure is valid, the argument is unsound. For example:

  • Premise 1: All men have hairy beards.
  • Premise 2: Tim is a man.
  • Conclusion: Therefore, Tim has a hairy beard.

Statement 1 is false – there are plenty of men without hairy beards. Statement 2 is true. Though the logical structure is valid (the conclusion follows from the premises), the argument is still unsound, so the conclusion can’t be trusted.

A famous example of an argument that is both valid and sound is as follows.

  • Premise 1: All men are mortal.
  • Premise 2: Socrates is a man.
  • Conclusion: Therefore, Socrates is mortal.

It’s important to look out for logical fallacies in the arguments people make. Bad arguments can lead to true conclusions, but there is no reason for us to trust the argument that got us to the conclusion. We might have missed something or it might not always be the case.


The myths of modern motherhood

It seems as if three successive waves of feminism haven’t resolved the chronic mismatch between the ideal of the ‘good’ and ‘happy’ mother and the realities of women’s lives.

Even if you consciously reject them, ideas about what a mother ought to be and ought to feel are probably there from the minute you wake up until you go to bed at night. Even in our age of increased gender equality it seems as if the culture loves nothing more than to dish out the myths about how to be a better mother (or a thinner, more fashionable, or better-looking one).

It’s not just the celebrity mums pushing their prams on magazine covers, or the continuing dearth of mothers on TV who are less than exceptionally good-looking, or that mothers in advertising remain ubiquitously obsessed with cleaning products and alpine-fresh scents. While TV dramas have pleasingly increased the handful of roles that feature working mothers, most are unduly punished in the twists of the melodramatic plot. They have wimpy husbands or damaged children, and of course TV’s bad characters are inevitably bad due to the shortcomings of their mothers (serial killers, for example, invariably have overbearing mothers or alcoholic mothers, or have never really separated from their mothers).

It seems we are living in an age of overzealous motherhood. Indeed, in a world in which the demands of the workplace have increased, so too the ideals of motherhood have become paradoxically more – not less – demanding. In recent years, commonly accepted ideas about what constitutes a barely adequate level of mothering have dramatically expanded to include extraordinary sacrifices of time, money, feelings, brains, social relationships, and indeed sleep.

In Australia, most mothers work. Yet recent studies show that working mothers now spend more time with their children than non-working mothers did in 1975. Working mothers achieve this extraordinary feat by sacrificing leisure, mental health, and even personal hygiene to spend more time with their kids.

This is coupled with a new kind of anxious sermonising that is having a profound impact on mothers, especially among the middle class. In her book The Conflict, Elisabeth Badinter argues that an ideology of ‘Naturalism’ has given rise to an industry of experts advocating increasingly pristine forms of natural birth and natural pregnancy, as well as an ever-expanding list of increasingly time-intensive child rearing duties that are deemed to fall to the mother alone. These duties encompass many of the classic practices of 21st century child rearing, including such nostrums as co-sleeping, babywearing and breastfeeding on demand until the age of two.

Whether it is called Intensive Mothering or Natural Parenting, these new credos of motherhood are wholly taken up with the idea that there is a narrowly prescribed way of doing things. In the West, 21st century child rearing is becoming increasingly time-consuming, expert-guided, emotionally draining, and incredibly expensive. In historical terms, I would be willing to hazard a guess that never before has motherhood been so heavily scrutinised. It is no longer just a question of whether you should or should not eat strawberries or prawns or soft cheese, or, heaven forbid, junk food, while you are pregnant, but so too, the issue of what you should or should not feel has come under intense scrutiny.

Never before has there been such a microscopic investigation of a pregnant woman’s emotional state, before, during and after birth. Indeed, the construction of new psychological disorders for mothers appears to have become something of a psychological pastime, with the old list of mental disorders expanding beyond prenatal anxiety, postnatal depression, postpartum psychosis and the baby blues, to include the baby pinks (a label for a woman who is illogically and inappropriately happy to be a mother), as well as Prenatal and Postnatal Stress Disorder, Maternal Anxiety and Mood Imbalance and Tokophobia—the latter being coined at the start of this millennium as a diagnosis for an unreasonable fear of giving birth.

The problem with the way in which this pop psychology is played out in the media is that it performs an endless re-inscription of the ideologies of mothering. These ideologies are often illogical, contradictory and – one suspects – more often dictated by what is convenient for society and not what is actually good for the children and parents involved. Above all else, mothers should be ecstatically happy mothers, because sad mothers are failed mothers. Indeed, according to the prevailing wisdom, unhappy mothers are downright unnatural, if not certifiably insane.

Little wonder there has been an outcry against such miserable standards of perfection. The same decade that saw the seeming triumph of the ideologies of Intensive and Natural mothering also saw the rise of what has been called the ‘Parenting Hate Read’ – a popular outpouring of books and blogs written by mothers (and even a few fathers) who frankly confess that they are depressed about having children for no better reason than that it is often mind-numbing, exhausting and dreadful. Mothers love their children, say the ‘Parenting Hate Reads’, but they do not like what is happening to their lives.

The problem is perhaps only partly about the disparity between media images of ecstatically happy mummies and the reality of women’s lives. It is also because our ideas about happiness have grown impoverished. Happiness, as it is commonly understood in the western world, is made up of continuous moments of pleasure and the absence of pain.

These popular assumptions about happiness are of comparatively recent origin, emerging in the works of philosophers such as Jeremy Bentham, who argued in the 18th century that people act purely in their self-interest and the goal to which self-interest aspires is happiness. Ethical conduct, according to Bentham and James Mill (father to John Stuart), should therefore aspire to maximise pleasure and minimise pain.

This ready equation of goodness, pleasure and happiness flew in the face of ideas that had been of concern to philosophers since Aristotle argued that a person is not made happy by fleeting pleasures, but by fulfilment stemming from meaning and purpose. Or, as Nietzsche, the whirling dervish of 19th century philosophy, put it, “Man does not strive for happiness – only the Englishman does”.

Nevertheless, Western assumptions about happiness have remained broadly utilitarian, giving rise to the culturally constructed notion of happiness we see in TV commercials showing families becoming happier with every purchase, or hear from life coaches peddling the dubious hypothesis that self-belief can overcome the odds, whatever your social or economic circumstance.

Unless you are Mother Teresa, you have probably been spending your life up until the time you have children in a reasonably independent and even self-indulgent way. You work hard through the week but sleep in on the weekend. You go to parties. You come home drunk. You see your friends when you want. Babies have different ideas. They stick forks in electric sockets, go berserk in the car seat, and throw up on your work clothes. They want to be carried around in the day and wake in the night.

If society can solve its social problems then maybe parenting will cease to be a misery competition. Mothers might not be happy in a utilitarian or hedonistic sense but will lead rich and satisfying lives. Then maybe a stay-at-home dad can change a nappy without a choir of angels descending from heaven singing ‘Hallelujah’.

This is an edited extract from “On Happiness: New Ideas for The 21st Century” UWA Publishing.


Ending workplace bullying demands courage

Despite increasing measures to combat workplace harassment, bullies remain entrenched in organisations. Changes made to laws and regulations in order to stamp out bullying have instead transformed it into an underground set of behaviours. Now hidden, these behaviours often remain unaddressed.

In other cases, anti-bullying policies can actually work to support perpetrators. Where regulations specify what bullying is, some people will cleverly use those rules as a guide to work around. Although these people are no longer bullying in the narrow sense outlined by policies or regulations, their acts of shunning, scapegoating and ostracism have the same effect. Rules that explicitly define bullying create exemptions, or even permissions, for behaviours that do not meet the formal standard.

Because they are more difficult to notice or prove, these insidious behaviours can remain undetected for long periods. As Kipling Williams and Steve Nida argued in a 2011 research paper, “being excluded or ostracized is an invisible form of bullying that doesn’t leave bruises, and therefore we often underestimate its impact”.

The bruises, cuts and blows are less evident but the internal bleeding is real. This new, psychological violence can have severe, long-term effects. According to Williams, “Ostracism or exclusion may not leave external scars, but it can cause pain that often is deeper and lasts longer than a physical injury”.

This is a costly issue for both individuals and organisations. No-one wins. Individuals can suffer symptoms akin to Post-Traumatic Stress Disorder. Organisations in which harassment occurs must endure lost time, absences, workers’ compensation claims, employee turnover, lack of productivity, the risk of costly and lengthy lawsuits, as well as a poor reputation.

So why does it continue?

First, bullies tend to be very good at office politics and working upwards, and attack those they consider rivals through innuendo and social networks. Bullies are often socially savvy, even charming. Because of this, they are able to strategically abuse co-workers while receiving positive work evaluations from managers.

In addition, anti-bullying policies aren’t the panacea they are sometimes painted to be. If they exist at all they are often ignored or ineffective. A 2014 report by corporate training company VitalSmarts showed that 96 percent of the 2283 people it surveyed had experienced workplace bullying. But only 7 percent knew someone who had used a workplace anti-bullying policy – the majority didn’t see it as an option. Plus, we now know some bullies use such policies as a base to craft new means of enacting their power – ones that aren’t yet defined as bullying behaviour by these policies.

Finally, cases often go unreported, undetected and unchallenged. This inaction rewards perpetrators and empowers them to continue behaving in the same way. This is confusing for the victim, who is stressed, unsure, and can feel isolated in the workplace. This undermines the confidence they need to report the bullying. Because of this, many opt for a less confrontational path – hoping it will go away in time. It usually doesn’t.

What can you do if a colleague is being shunned or ostracised by peers or managers? The first step is not to participate. However, most people are already likely to be aware of this. More relevant for most people is to not become complicit by remaining silent. As 2016 Australian of the Year David Morrison famously said, “The standard you walk by is the standard you accept.”

The onus is on you to take positive steps against harassment where you witness it. By doing nothing you allow psychological attacks to continue. In this way, silent witnesses bear partial responsibility for the consequences of bullying. Moreover, unless the toxic culture that enables bullying is undone, logic says you could be the next victim.

However, merely standing up to harassment isn’t likely to be a cure-all. Tackling workplace bullying is a shared responsibility. It takes regulators, managers and individuals in cooperation with law, policy and healthy organisational culture.

Organisational leaders in particular need to express public and ongoing support for clearly worded policies. In doing so, policies begin to shape and inform the culture of an organisation rather than serving as standalone documents. It is critical that managers understand the impacts of bullying on culture, employee wellbeing, and their own personal liability.

When regulation fails – the dilemma most frequently seen today – we need to depend on individual moral character. Herein lies the ethical challenge. ‘Character’ is an underappreciated ethical trait in many executive education programs, but the moral virtues that form a person’s character are the foundation of ethical leadership.

A return to character might diminish the need for articles like this. In the meantime, workplace bullying provides us all with the opportunity to practise courage.


What your email signature says about you

Getting too many unethical business requests? Sreedhari Desai’s research suggests a quote in your email signature may be the answer to your woes.

In a recent study, Desai enrolled subjects to participate in a virtual game to earn money. The subjects were told they’d earn more money if they could get their fellow players to spread a lie without realising it. In other words, subjects had to trick their fellow players into believing a lie, and then get those other players to spread the lie around the game.

What subjects didn’t know is that all their fellow ‘players’ were in fact researchers studying how they would go about their deception. Subjects communicated with the researchers by email. Some researchers had a virtuous quote underneath their email – “Success without honor is worse than fraud”. Others had a neutral quote in their email signature – “Success and luck go hand in hand”. Others had no quote at all.

And wouldn’t you know it? Subjects were less likely to try to recruit people with a virtuous quote in their email. The quote served as a “moral symbol”, shielding the person from receiving unethical requests from other players. In an interview with Harvard Business Review, Desai outlines what’s happening in these situations:

When someone is in a position to request an unethical thing, they may not consciously be thinking, “I won’t ask that person.” Instead, they may perceive a person as morally “pure” and feel that asking them to get “dirty” makes an ethical transgression even worse. Or they may be concerned that someone with moral character will just refuse the request.

So, if you want to keep your hands clean it may be as simple as updating your email signature. It won’t guarantee you’ll do the right thing when you’re tempted (there’s more to ethics than pretty words!) but it will ensure you’re tempted less.

And in case you’re looking for a virtuous quote for your email signature, we surveyed some of our staff for their favourite virtuous quotes. Here’s a sample:

  • “The unexamined life is not worth living” – Socrates
  • “No man wishes to possess the whole world if he must first become someone else” – Aristotle
  • “Protect me from what I want” – Jenny Holzer
  • “A true man goes on to the end of his endurance and then goes twice as far again” – Norwegian proverb
  • “Knowledge is no guarantee of good behaviour, but ignorance is a virtual guarantee of bad behaviour” – Martha Nussbaum

A small disclaimer to all of this – it might not work if you work with Australians. Apparently our propensity to cut down tall poppies and our discomfort with authority extend to moral postulations in email signatures. Instead of sanctimony, Aussies seem more inclined to shield people who have fun or playful quotes in their emails. Desai explains:

“We’re studying how people react to moral symbols in Australia. Our preliminary study showed that people there were sceptical of moral displays. They seemed to think the bloke with the quote was being ‘holier than thou’ and probably had something to hide.”

So, as well as your favourite virtuous quote, you might want to bung a joke on the bottom of your emails to please your sceptical Antipodean colleagues, lest they lead you into temptation.