Anzac Day: militarism and masculinity don’t mix well in modern Australia

In 2015, then Prime Minister Tony Abbott penned a passionate column on the relevance of Anzac Day to modern Australia. For Abbott, the Anzacs serve as the moral role models Australians should seek to emulate. He wrote, “We hope that in striving to emulate their values, we might rise to the challenges of our time as they did to theirs”.

The notion that Anzacs embody a quintessentially Australian spirit is a century old. The official World War I correspondent C.E.W. Bean wrote that Gallipoli was the crucible in which the rugged resilience and camaraderie of (white) Australian masculinity, forged in the bush, were decisively tested and proven on the world stage.

At the time, this was a potent way of making sense out of the staggering loss of 8000 Australian lives in a single military campaign. Since then, it has been common for politicians and journalists to claim that Australia was ‘baptised’ in the ‘blood and fire’ of Gallipoli.

However, public interest in Anzac Day fluctuated over the course of the 20th century. Ambivalence over Australia’s role in the Vietnam War played a major part in dampening enthusiasm from the 1970s.

The election of John Howard in 1996 signalled a new era for the Anzac myth. The ‘digger’ was, for Prime Minister Howard, the embodiment of Australian mateship, loyalty and toughness. Since then, government funding has flowed to Anzac-related school curricula as well as related books, films and research projects. Old war memorials have been refurbished and new ones built. Attendance at Anzac events in Australia and overseas has swelled.

On Anzac Day, we are reminded how crucial it is for individuals to be willing to forgo self-interest in exchange for the common national good. In theory, Anzac Day teaches us not to be selfish and reminds us of our duties to others. But it does so at a cost, because military role models bring with them militarism – which sees the horror and tragedy of war not only as a justifiable but as a desirable way to solve problems.

The dark side to the Anzac myth is a view of violence as powerful and creative. Violence is glorified as the forge of masculinity, nationhood and history. In this process, the acceptance and normalisation of violence culminates in celebration.

The renewed focus on the Anzac legend in Australian consciousness has brought with it a pronounced militarisation of Australian history, in which our collective past is reframed around idealised incidents of conflict and sacrifice. This effectively takes the politics out of war, justifying ongoing military deployment in conflict overseas, and stultifying debate about the violence of invasion and colonisation at home.

In the drama of militarism, the white, male and presumptively heterosexual soldier is the hero. The Anzac myth makes him the archetypal Australian, consigning the alternative histories of women, Aboriginal and Torres Strait Islander peoples, and sexual and ethnic minorities to the margins. I’d argue that for right-wing nationalist groups, the Anzacs have come to represent their nostalgia for a racially purer past. Such groups have aggressively protested against attempts to critically analyse Anzac history.

Militarism took on a new visibility during Abbott’s time as Prime Minister. Current and former military personnel have been appointed to major civilian policy and governance roles. Police, immigration, customs, and border security staff have adopted military-style uniforms and arms. The number of former military personnel entering state and federal politics has risen significantly in the last 15 years.

The notion that war and conflict are the ultimate test of Australian masculinity and nationhood has become the dominant understanding not only of Anzac Day but, arguably, of Australian identity. Is it any wonder that a study compiled by McCrindle Research found 34% of males, and 42% of Gen Y males, would enlist in a war that mirrored WWI if it occurred today?

This exaltation of violence sits uncomfortably alongside efforts to reduce and ultimately eradicate the use of violence in civil and intimate life. Across the country we are grappling with an epidemic of violence against women and between men. But when war is positioned as the fulcrum of Australian history, when our leaders privilege force in policy making, and when military service is seen as the ultimate form of public service, is it any wonder that boys and men turn to violence to solve problems and create a sense of identity?

In his writings on the dangers of militarism, psychologist and philosopher William James called for a “moral equivalent of war” – a form of moral education less predisposed to militarism and its shortcomings.

Turning away from militarism does not mean devaluing the military or forgetting about Australia’s military history. It means turning away from conflict as the dominant lens through which we understand our heritage and shared community. It means abjuring force as a means of solving problems and seeking respect. However, it also requires us to articulate an alternative ethos weighty enough to act as a substitute for militarism.

At a recent domestic violence conference in Sydney, Professor Bob Pease called for the rejection of the “militarisation of masculinity”, arguing that men’s violence in war was linked to men’s violence against women. At the same time, however, he called on us to foster “a critical ethic of care in men”, recognising that men who value others and care for them are less prone to violence.

For as long as militarism and masculinity are fused in the Australian imagination, it’s hard to see how this ethos of care can take root. It seems that the glorification of violence in our past is at odds with our aspirations for a violence-free future. The question is whether we value this potential future more than an idealised past.


Academia’s wicked problem

What do you do when a crucial knowledge system is under-resourced, highly valued, and having its integrity undermined? That’s the question facing those working in academic research and publishing. There’s a risk Australians might lose trust in one of the central systems on which we rely for knowledge and innovation.

It’s one of those problems that defy easy solutions, like obesity, terrorism or designing a tax system to suit an entire population. Academics call these “wicked problems” – meaning they’re resistant to resolution, not that they are ‘evil’.

Charles West Churchman, who coined the term, described them as:

That class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values and where the ramifications in the whole system are thoroughly confusing.

The wicked problem I face day-to-day is that of research and publication ethics. Though most academics do their best within a highly pressured system, I see many issues, spanning a continuum that starts with cutting corners in the preparation of manuscripts and ends with outright fraud.

It’s helpful to know whether the problem we are facing is a wicked one or not. It can help us to rethink the problem, understand why conventional problem-solving approaches have failed and encourage novel approaches, even if solutions are not readily available.

Publication ethics, which concerns academic work submitted for publication, has traditionally been considered a problem solely for academic journal editors and publishers. But it is necessarily entwined with research ethics – issues related to the actual performance of the academic work. For example, unethical human experimentation may only come to light at the time of publication, though it clearly originates much earlier.

Consider the ethical issues surrounding peer review, the process by which academic experts (peers) assess the work of others.

Though imperfect, formalisation of peer review has become an important mark of quality for a journal. Peer review by experts, mediated by journal editors, usually determines whether a paper is published. Though seemingly simple, there are many points where the system can be gamed or even completely subverted – a major one being in the selection of reviewers.

As the number of academics and journal submissions increases, editors face a logistical challenge in keeping track of an ever-growing pile of submissions needing review. The rise in submissions has not been matched by a rise in editors – many of whom are volunteers – so editors are overworked and often don’t have a large enough circle of reviewers to call on for the papers being submitted.

A simple approach to increasing the pool of reviewers, adopted by a number of journals, is to allow authors to suggest reviewers for their paper via the online peer review system. These suggestions can be valuable if overseen by editors who can assess the suggested reviewers’ credentials. But editors are already overworked and often handling work at the edge of their area of expertise, meaning time is at a premium.

Given the pressure editors are under, the system is vulnerable to subversion. It has always been tempting for authors to suggest reviewers they believe will view their work favourably. Recently, a small number took it a step further, suggesting fake reviewer names for their papers.

These fake reviewers (usually organised via a third party) promptly submitted favourable reviews, which led to papers being inappropriately accepted for publication. The consequences were severe – papers had to be retracted, with consequent potential reputational damage to the journal, editors, authors and their institutions. Note how a ‘simple’ view of a ‘wicked’ problem – under-resourced editors can be helped by authors suggesting their reviewers – led to new and worse problems than before.

But why would some authors go to such extreme ends as submitting fake reviews? The answer takes us into a related problem – the way authors are rewarded for publications.

Manipulating peer review gives authors a higher chance of publication – and academic publications are crucial for promotion at universities. Promotion often brings a higher salary, prestige, perhaps a lighter teaching load and other fringe benefits. So for those at the extreme, who lack the skills needed to publish (or even a firm command of academic English), it’s logical to turn to those who understand how to manipulate the system.

We could easily fix the problem of fake reviews by removing an author’s ability to suggest reviewers, but this would be treating a symptom rather than the cause: a perverse reward system for authors.

Removing author suggestions does nothing to help overworked editors deal with the huge number of submissions they receive. Nor do editors have the power to address the underlying problem – an inappropriate system of academic incentives.

There are no easy solutions but accepting the complexity may at least help to understand what it is that needs to be solved. Could we change the incentive structure to reward authors for more than merely being published in a journal?

There are moves to understand these intertwined problems, but all solutions will fail unless we come back to the first requirement for approaching a wicked problem – agreement that it’s a problem shared by many. So while the issues are most visible in academic journals, we won’t find all the solutions there.


Philosophy must (and can) thrive outside universities

A recent article in ABC Religion by Steve Fuller described philosophy as being “at the crossroads”. The article explores philosophy’s relationship to universities and what living a “philosophical life” really looks like.

Reading it, my mind whisked me back to some of my earliest days at The Ethics Centre. Before returning to Sydney, I had the good fortune to complete my doctorate at Cambridge – one of the great universities of the world. While there, I mastered the disciplines of academic philosophy. However, I also learned the one lesson my supervisor offered me at our first meeting – that I should always “go for the jugular”. As it happens, I was quite good at drawing blood.

Perhaps this was a young philosopher’s sport because, as I grew older and read more deeply, I came to realise what I’d learned to do was not really consistent with the purpose and traditions of philosophy at all. Rather, I had become something of an intellectual bully – more concerned with wounding my opponents than with finding the ‘truth’ in the matter being discussed.

This realisation was linked to my re-reading of Plato – and his account of the figure of Socrates who, to this day, remains my personal exemplar of a great philosopher.

The key to my new understanding of Socrates lay in my realisation that, contrary to what I had once believed, he was not a philosophical gymnast deliberately trying to tie his interlocutors in knots (going for the jugular). Rather, he was a man sincerely wrestling, with others, with some of the toughest questions faced by humanity in order to better understand them. What is justice? What is a good life? How are we to live?

The route to any kind of answer worth holding is incredibly difficult – and I finally understood (I was a slow learner) that Socrates subjected his own ideas to the same critical scrutiny he required of others.

In short, he was totally sincere when he said that he really did not know anything. All of his questioning was a genuine exploration involving others who, in fact, did claim to ‘know’. That is why he would bail up people in the agora (the town square) who were heading off to administer ‘justice’ in the Athenian courts.

Surely, Socrates would say, if you are to administer justice – then you must know what it is. As it turned out, they did not.

The significance of Socrates’ work in the agora was not lost on me. Here was a philosopher working in the public space. The more I looked, the more it seemed that this had been so for most of the great thinkers.

So that is what I set out to do.

One of my earliest initiatives was to head down to Martin Place, in the centre of Sydney, where I would set up a circle of 10 plastic chairs and two cardboard signs that said something like, “If you want to talk to a philosopher about ideas, then take a seat”. And there I would sit – waiting for others.

Without fail they would come – young, old, rich, poor – wanting to talk about large, looming matters in their lives. I remember cyclists discussing their place on our roads, school children discussing their willingness to cheat in exams (because they thought the message of society is ‘do whatever it takes’).

Occasionally, people would come from overseas – having heard of this odd phenomenon. A memorable occasion involved a discussion with a very senior and learned rabbi from Amsterdam – the then global head (I think) of Progressive Judaism. On another occasion, a woman brought her mother (visiting from England) to discuss her guilt at no longer believing in God. I remember we discussed what it might mean to feel guilt in relation to a being you claim does not exist. There were few answers – but some useful insights.

Anyway, I came to imagine a whole series of philosophers’ circles dotted around Martin Place and other parts of Sydney (and perhaps Australia). After all, why should I be the only philosopher pursuing this aspect of the philosophical life? So I reached out to the philosophy faculty at Sydney University – thinking (naively, as it turned out) that I would have a rush of colleagues wishing to join me.

Alas – not one was interested. The essence of their message was that they doubted the public would be able to engage with ‘real philosophy’ – that the techniques and language needed for philosophy would be bewildering to non-philosophers. I suspect there was also an undeclared fear of being exposed to their fellow citizens in such a vulnerable position.

Actually, I still don’t really know what led to such a wholesale rejection of the idea.

However, I think it a great pity that other philosophers have felt more comfortable within the walls of their universities than out in the wider world.

I doubt that anything I write or say will be quoted in the centuries to come. However, I would not, for a moment, change the choice I made to step outside of the university and work within the agora. Life then becomes messy and marvellous in equal measure. Everything needs to be translated into language anyone can understand (and I have found that this is possible without sacrificing an iota of philosophical nuance).

You constantly need to challenge unthinking custom and practice most people simply take for granted. This does not make you popular. You are constantly accused of being ‘unethical’ because you entertain ideas one group or another opposes. You please almost nobody. You cannot aim to be liked. And you have to deal with the rawness of people’s lives – discovering just how much the issues philosophers consider (especially in the field of ethics) really matter.

This is not to say that ‘academic’ philosophy should be abandoned. However, I can see no good reason why philosophers should think this is the only (or best) way to be a philosopher. Surely, there is room (and a need) for philosophers to live larger, more public lives.

I have scant academic publications to my name. However, at the height of the controversy surrounding the introduction of ethics classes for children not attending scripture in NSW, I enjoyed the privilege of being accused of “impiety” and “corrupting the youth” by the Anglican and Catholic Archbishops of Sydney. Why a ‘privilege’? Because these were precisely the same charges levelled against Socrates. So far, I have avoided the hemlock. For a philosopher, what could be better than that?


Ethics Explainer: Double-Effect Theory

Double-effect theory recognises that a course of action might have a variety of ethical effects, some ‘good’ and some ‘bad’.

It can be seen as a way of balancing consequentialist and deontological approaches to ethics.

According to the theory, an action with both good and bad effects may be ethical as long as:

  • Only the good consequences are intended (we don’t want the bad effects to occur, they’re just inescapable, even if they can be foreseen).
  • The good done by the action outweighs the harm it inflicts.
  • The bad effect is not the means by which the good effect occurs (we can’t do evil to bring about good – the good and bad consequences have to occur simultaneously).
  • The act we are performing is not unethical for some other reason (for example, an attack on human dignity).
  • We seek to minimise, if possible, the unintended and inadvertent harm that we cause.

Double-effect is best explained through the classic thought experiment: the trolley problem.

Imagine a runaway train carriage is hurtling down the tracks toward five railroad workers. The workers are wearing earmuffs and unable to hear the carriage approaching. You have no way of warning them. However, you do have access to a lever which will divert the train onto a side-track on which only one person is working. Should you pull the lever and kill the one man to save five lives?

Take a moment to think about what you would do and your reasons for doing it. Now, consider this alternative.

The train is still hurtling toward the five workers, but this time there’s no lever. Instead, you’re a lightweight person standing on a bridge above the railroad – too light to stop the train yourself. Next to you is a very large man who would be heavy enough to stop it. You could push him onto the tracks and stop the train, but doing so would kill him. Should you push him off the bridge?

Again, think about what you would do and why you would do it.

Did you say ‘yes’ in the first scenario and ‘no’ in the second? That’s the most common response, but why? After all, in each case you’re killing one person to save five. According to many consequentialists that would be the right thing to do. By refusing to push the man off the bridge, are we being inconsistent?

Double-effect theory provides a way of consistently explaining the difference in our responses.

In the first case, our intention is to save five lives. An unintended but foreseeable consequence of pulling the lever is the death of one worker. The intended act (pulling a lever to redirect a train) isn’t intrinsically wrong, and because the stakes are sufficiently high, the good consequences outweigh the bad. The worker’s death is a side-effect of our good action, and so we are permitted to pull the lever.

In the second case, the death of the heavy man is not a side-effect. Rather, it is the means (pushing the man off the bridge to stop the train) by which we achieve our goal (saving the five men). His death is not an unavoidable side-effect that occurs at the same time as the good deed; it is causally prior to, and directly linked to, the good outcome.

This fact has ethical significance because it changes the structure of the action.

Instead of ‘saving lives whilst unavoidably causing someone to die’, it is a case of ‘killing one person deliberately in order to save five’. In the lever scenario, we don’t need the one worker to die in order to save the five. In the bridge scenario, we need the heavy man to die. Which means that when we push him, we are intentionally killing him.

Double-effect is used in a range of different contexts. In medical ethics it can be used to explain why it would be ethical for a pro-life pregnant woman to take life-saving medicine even if it would likely kill her unborn child (an unintended side-effect). It also explains the actions of doctors who increase the dose of opiates to relieve pain – even though they foresee that the dosage may hasten the patient’s death.

In military ethics it explains how an air strike which causes some unavoidable ‘collateral damage’ (the death or injury of non-combatants) might still be permissible – assuming it meets the criteria described above and involves the proportionate and discriminate use of force.


“Animal rights should trump human interests” – what’s the debate?

Are the ways humans subject animals to our own needs and wants justified?

Humans regularly impose our own demands on the animal world, whether it’s eating meat, scientific testing, keeping pets, sport, entertainment or protecting ourselves. But is it reasonable and ethical to do so?

Humans and animals

We often talk about humans and animals as though they are two separate categories of being. But aren’t humans just another kind of animal?

Many would say “no”, claiming humans have greater moral value than other animals. Humans possess the ability to reason, they say, while animals act only on instinct. This capacity for reason is held up as the key factor that makes humans uniquely worthy of protection and of greater moral value than animals.

“Animals are not self-conscious and are there merely as means to an end. That end is man.” – Immanuel Kant

Others argue that this is “speciesism” because it shows an unjustifiable bias for human beings. To prove this, they might point to cases where a particular animal shows more reason than a particular human being – for example, a chimpanzee might show more rational thought than a person in a coma. If we don’t grant greater moral value to the animal in these cases, it shows that our beliefs are prejudicial.

Some will go further and suggest that reason is not relevant to questions of moral value, because it measures the value of animals against human standards. In determining how a creature should be treated, philosopher Jeremy Bentham wrote, “… the question is not ‘Can they reason?’, nor ‘Can they talk?’, but ‘Can they suffer?’”

So in determining whether animal rights should trump human interests, we first need to figure out how we measure the value of animals and humans.

Rights and interests

What are rights and how do they correspond to interests? Generally speaking, you have a right when you are entitled to do something or prevent someone else from doing something to you. If humans have the right to free speech, this is because they are entitled to speak freely without anyone stopping them. The right protects an activity or status you are entitled to.

Rights come in a range of forms – natural, moral, legal and so on – but violating someone’s right is always a serious ethical matter.

“Animals are my friends. I don’t eat my friends.” – George Bernard Shaw

Interests are broader than rights and less serious from an ethical perspective. We have an interest in something when we have something to gain or lose by its success or failure. Humans have interests in a range of different projects because our lives are diverse. We have interests in art, medical research, education, leisure, health…

When we ask whether animal rights should trump human interests, we are asking a few questions. Do animals have rights? What are they? And if animals do have rights, are they more or less important than the interests of humans? We know human rights will always trump human interests, but what about animal rights?

Animal rights vs animal welfare

A crucial point in this debate is understanding the difference between animal rights and animal welfare. Animal rights advocates believe animals deserve rights to prevent them from being treated in certain ways. The exploitation of animals who have rights is, they say, always morally wrong – just like it would be for a human.

Animal welfare advocates, on the other hand, believe using animals can be either ethical or, in practice, unavoidable. These people aim to reduce any suffering inflicted on animals, but don’t seek to end altogether what others regard as exploitative practices.

As one widely used quote puts it, “Animal rights advocates are campaigning for no cages, while animal welfarists are campaigning for bigger cages”.

Are they mutually exclusive? What does taking a welfarist approach say about the moral value of animals?

The debate, “Animal rights should trump human interests”, took place on 3 May 2016 at the City Recital Hall in Sydney.


Male suicide is a global health issue in need of understanding

“This is a worldwide phenomenon in which men die at four times the rate of women. The four to one ratio gets closer to six or seven to one as people get older.”

That’s Professor Brian Draper, describing one of the most common causes of death among men: suicide.

Suicide is the cause of death with the highest gender disparity in Australia – an experience replicated in most places around the world, according to Draper.

So what is going on? Draper is keen to avoid baseless speculation – there are a lot of theories but not much we can say with certainty. “We can describe it, but we can’t understand it,” he says. One thing that does seem clear to Draper is that it’s not a coincidence so many more men die by suicide than women.

“It comes back to masculinity – it seems to be something about being male,” he says.

“I think every country has its own way of expressing masculinity. In the Australian context not talking about feelings and emotions, not connecting with intimate partners are factors…”

The issue of social connection is also thought to play a part. “There is broad reluctance by many men to connect emotionally or build relationships outside their intimate partners – women have several intimate relationships, men have a handful at most,” Draper says.

You hear this reflection fairly often. Peter Munro’s feature in the most recent edition of Good Weekend on suicide deaths among trade workers tells a similar story.

Mark, an interviewee, describes writing a suicide note and feeling completely alone until a Facebook conversation with his girlfriend, Alex, “took the weight off his shoulders”. What would have happened if Mark had lost Alex? Did he have anyone else?

None of this, Draper cautions, means we can reduce the problem to idiosyncrasies of Aussie masculinity – toughness, ‘sucking it up’, alcohol… It’s a global issue.

“I’m a strong believer in looking at things globally and not in isolation. Every country will do it differently, but you’ll see these issues in the way men interact – I think it’s more about masculinity and the way men interact.”

Another piece of the puzzle might – Draper suggests – be early childhood. If your parents have suffered severe trauma, it’s likely to have an effect.

“If you are raised by damaged parents, it could be damaging to you. Children of survivors of concentration camps, horrendous experiences like the killing fields in Cambodia or in Australia the Stolen Generations…”

There is research backing this up. For instance, between 1988 and 1996 the children of Vietnam War veterans died by suicide at over three times the national average.

Draper is careful not to overstate it – there’s still so much we don’t know, but he does believe there’s something to early childhood experiences. “Sexual abuse in childhood still conveys suicide risk in your 70s and 80s … but there’s also emotional trauma from living with a person who’s not coping with their own demons.”

“I’m not sure we fully understand these processes.” The amount we still need to understand is becoming a theme.

What we don’t know is a source of optimism for Draper. “We’ve talked a lot about factors that might increase your risk but there’s a reverse side to that story.”

“A lot of our research is focussed predominantly on risk rather than protection. We don’t look at why things have changed for the better … For example, there’s been a massive reduction in suicides in men between 45-70 in the last 50 years.”

“Understanding what’s happened in those age groups would help.”

It’s pretty clear we need to keep talking – researchers, family, friends, support workers and those in need of support can’t act on what they don’t know.

If you or someone you know needs support, contact:

  • Lifeline 13 11 14

  • Men’s Line 1300 78 99 78

  • beyondblue 1300 224 636

  • Kids Helpline 1800 551 800


Ozi Batla: Fatherhood is the hardest work I’ve ever done

On his interest in ‘Mankind – Deconstructing Masculinity’:

Masculinity is something I’ve been thinking about a bit lately. I’ve been raising my boy for the past year and a half and having your first kid makes you wonder about the things you’ve learned and the things you want to pass on.

There are a lot of things I learned that I don’t want to pass on, and even more stuff I never really considered before I became a dad – things I don’t have the answers for.

On being a full-time dad:

I’ve had to come to grips with the challenges of being a stay-at-home dad.

Support and network groups are almost entirely set up for mums. Our entire parenting language is set up around mums. We have ‘mothers’ groups’ or in my case ‘mums’ surfers groups’ so someone could watch my son while I went for a surf. I felt really excluded from a lot of these activities.

There’s a patronising assumption about men in parenting roles. My boy had a meltdown at swimming the other day and other parents looked at me as though I wasn’t used to it. They said things like “Oh, you’re doing so well”, and I thought “Thanks, I’ve been doing this full time for a year and a half”. It felt pretty patronising.

Like a lot of men, I defined myself by my work, which has taken a back seat lately. I’ve been dealing with the shifting definitions of my own identity. It’s weird, because it’s the hardest work I’ve ever done but it’s still not considered a man’s work.

On the pressure fathers face to teach their sons ‘what it means to be a man’:

I think it’s probably the same for most men. I’m assuming it was for my dad – he didn’t have those answers for me when I was growing up. A lot of it gets left to outside sources to inform you.

I didn’t really take much of that on board. I just tried to keep my head down at school and get out of there. The way the school approached masculinity was completely at odds with the way my parents were trying to raise me and my brothers.

I’ve only recently realised the influence of all this. In a few years’ time my son is going to get picked on, get into fights, and ask me the same kind of questions.

On his journey toward hip hop:

The school I went to was very sort of ‘jock’, and I wasn’t like that at all. My journey into hip hop was a way of dealing with that and overcoming the trauma. It was a defensive mechanism – my parents didn’t instil this in me – but you do need to fight in one way or another. Words became my weapons. It’s only recently that I’ve realised that was a big influence in leading me into hip hop.

On masculinity and sexism in Australian hip hop:

Like a lot of the music industry, hip hop has been male dominated, although it hasn’t been part of my experience – aside from a few years of battle rapping, which was part of my journey to establish some boundaries. Battling was a way to make up for my time at school and I wish I’d been able to use those skills to create space around me.
There is a lot of very macho and sexist culture around hip hop music, but I don’t think it’s exclusively that way, and I think it’s been changing. In lots of ways, The Herd was a challenge to that whole notion of hip hop.

On The Herd’s re-imagination of Redgum’s ‘I Was Only 19’:

War is an extension of those more negative aspects of masculinity. It’s almost the biggest manifestation of them. There was a lot of anger around the Vietnam War – seeing these patterns repeated. I think the original is quite angry in its own folksy way.

I know from hanging out with John Schumann that the people he was writing about, and writing for, were certainly angry about the way they’d been treated.

On veterans:

There’s a notion, especially in Australia – it probably comes from the Anglo tradition – that you should just “suck it up and get on with it”. I think it’s one of the most damaging parts of male identity in this country and a big contributor to high youth suicide rates, drug abuse and mental health issues.

There have been big campaigns to move that along, but generally men are still supposed to cop it on the chin and move on. I think that’s a major issue for a lot of returned soldiers and other men. It’s still considered fairly awkward to delve into your feelings with other men.

On radicalisation and alcohol violence among young people:

It’s all part of the same thing. I think a lot of kids involved with radical organisations are pretty stupid, but kids tend to be.

These kids are caught between two worlds. Being a young male, I think feeling anger, learning to deal with it, and finding an outlet for it is really important. Anger does express itself in different ways, but not having a culture where it’s acceptable to show anger non-physically leads to a number of issues.

Combine all this with the fact that the one space where it is acceptable to be emotional is when you’re pissed, and it’s not surprising to see the problems we do.

For me, hip hop – when I was a teenager – was my angry refuge. The sort of stuff I listened to when I was a teenager isn’t stuff I listen to these days. The music is still nostalgic, but kind of embarrassing. It’s the stuff that attracts young men though. It resonates with something inside them or gives them a bit of an outlet.


Ethics Explainer: Virtue Ethics

Virtue ethics is arguably the oldest ethical theory in the world, with origins in Ancient Greece.

It defines good actions as ones that embody virtuous character traits, like courage, loyalty, or wisdom. A virtue itself is a disposition to act, think and feel in certain ways. Bad actions display the opposite and are informed by vices, such as cowardice, treachery, and ignorance.

For Aristotle, ethics was a key element of human flourishing because it taught people how to differentiate between virtues and vices. By encouraging examination, more people could live a life dedicated to developing virtues.

It’s one thing to know what’s right, but it’s another to actually do it. How did Aristotle advise us to live our virtues?

By acting as though we already have them.

Excellence as habit

Aristotle explained that both virtues and vices are acquired by repetition. If we routinely overindulge a sweet tooth, we develop a vice — gluttony. If we repeatedly allow others to serve themselves dinner before us, we develop a virtue – selflessness.

Virtue ethics suggests treating our character as a lifelong project, one that has the capacity to truly change who we are. The goal is not to form virtues that mean we act ethically without thinking, but to form virtues that help us see the world clearly and make better judgments as a result.

In a pinch, remember: vices distort, virtues examine.

A quote most of the internet attributes to Aristotle succinctly reads: “We are what we repeatedly do. Excellence, then, is not an act, but a habit”.

Though he didn’t actually say this, it’s a good indication of what virtue ethics stands for. We can thank the American philosopher Will Durant for the neat summary.

Aim for in between

There are two practical principles that virtue ethics encourages us to use in ethical dilemmas. The first is the Golden Mean: when we’re trying to work out the virtuous thing to do in a particular situation, we should look to what lies in the middle between two extreme forms of behaviour. The mean will be the virtue, and the extremes at either end, the vices.

Here’s an example. Imagine your friend is wearing a horrendous outfit and asks you how they look. What are the extreme responses you could take? You could a) burst out laughing or b) tell them they look wonderful when they don’t.

These two extremes are vices – the first response is malicious, the second is dishonest. The virtuous response is between these two. In this case, that would be gently — but honestly — telling your friend you think they’d look nicer in another outfit.

Imagination

The second is to use our imagination: what would we do if we were already a virtuous person? By imagining the kind of person we’d like to be, and how that person would respond, we can start to close the gap between our aspirational identity and who we are at the moment.

Virtue ethics can remind us of the importance of role models. If you want someone to learn ethics, show them an ethical person.

Some argue virtue ethics is overly vague in guiding actions. They say its principles aren’t specific enough to help us overcome difficult ethical conundrums. “Be virtuous” is hard to conceptualise. Others have expressed concern that virtues or vices aren’t agreed on by everybody. Stoicism or sexual openness can be a virtue to some, a vice to others.

Finally, some people think virtue ethics breeds ‘moral narcissism’, where we are so obsessed with our own ethical character that we value it above anyone or anything else.


The myths of modern motherhood

It seems as if three successive waves of feminism haven’t resolved the chronic mismatch between the ideal of the ‘good’ and ‘happy’ mother and the realities of women’s lives.

Even if you consciously reject them, ideas about what a mother ought to be and ought to feel are probably there from the minute you wake up until you go to bed at night. Even in our age of increased gender equality it seems as if the culture loves nothing more than to dish out the myths about how to be a better mother (or a thinner, more fashionable, or better-looking one).

It’s not just the celebrity mums pushing their prams on magazine covers, or the continuing dearth of mothers on TV who are less than exceptionally good-looking, or that mothers in advertising remain ubiquitously obsessed with cleaning products and alpine-fresh scents. While TV dramas have pleasingly increased the handful of roles that feature working mothers, most are unduly punished in the twists of the melodramatic plot. They have wimpy husbands or damaged children, and of course TV’s bad characters are inevitably bad due to the shortcomings of their mothers (serial killers, for example, invariably have overbearing mothers or alcoholic mothers, or have never really separated from their mothers).

It seems we are living in an age of overzealous motherhood. Indeed, in a world in which the demands of the workplace have increased, so too the ideals of motherhood have become paradoxically more – not less – demanding. In recent years, commonly accepted ideas about what constitutes a barely adequate level of mothering have dramatically expanded to include extraordinary sacrifices of time, money, feelings, brains, social relationships, and indeed sleep.

In Australia, most mothers work. But recent studies show that working mothers now spend more time with their children than non-working mothers did in 1975. Working mothers achieve this extraordinary feat by sacrificing leisure, mental health and even personal hygiene.

This is coupled with a new kind of anxious sermonising that is having a profound impact on mothers, especially among the middle class. In her book The Conflict, Elisabeth Badinter argues that an ideology of ‘Naturalism’ has given rise to an industry of experts advocating increasingly pristine forms of natural birth and natural pregnancy, as well as an ever-expanding list of increasingly time-intensive child rearing duties that are deemed to fall to the mother alone. These duties take in most of the classic practices of 21st century child rearing, including such nostrums as co-sleeping, babywearing and breastfeeding on demand until the age of two.

Whether it is called Intensive Mothering or Natural Parenting, these new credos of motherhood are wholly taken up with the idea that there is a narrowly prescribed way of doing things. In the West, 21st century child rearing is becoming increasingly time-consuming, expert-guided, emotionally draining, and incredibly expensive. In historical terms, I would hazard a guess that never before has motherhood been so heavily scrutinised. It is no longer just a question of whether you should or should not eat strawberries or prawns or soft cheese, or, heaven forbid, junk food, while you are pregnant; what you should or should not feel has also come under intense scrutiny.

Never before has there been such a microscopic investigation of a pregnant woman’s emotional state, before, during and after birth. Indeed, the construction of new psychological disorders for mothers appears to have become something of a psychological pastime, with the old list of mental disorders expanding beyond prenatal anxiety, postnatal depression, postpartum psychosis and the baby blues, to include the baby pinks (a label for a woman who is illogically and inappropriately happy to be a mother), as well as Prenatal and Postnatal Stress Disorder, Maternal Anxiety and Mood Imbalance and Tokophobia—the latter being coined at the start of this millennium as a diagnosis for an unreasonable fear of giving birth.

The problem with the way in which this pop psychology is played out in the media is that it performs an endless re-inscription of the ideologies of mothering. These ideologies are often illogical, contradictory and – one suspects – more often dictated by what is convenient for society and not what is actually good for the children and parents involved. Above all else, mothers should be ecstatically happy mothers, because sad mothers are failed mothers. Indeed, according to the prevailing wisdom, unhappy mothers are downright unnatural, if not certifiably insane.

Little wonder there has been an outcry against such miserable standards of perfection. The same decade that saw the seeming triumph of the ideologies of Intensive and Natural mothering, also saw the rise of what has been called the ‘Parenting Hate Read’ — a popular outpouring of books and blogs written by mothers (and even a few fathers) who frankly confess that they are depressed about having children for no better reason than it is often mind-numbing, exhausting and dreadful. Mothers love their children, say the ‘Parenting Hate Reads’, but they do not like what is happening to their lives.

The problem is perhaps only partly about the disparity between media images of ecstatically happy mummies and the reality of women’s lives. It is also because our ideas about happiness have grown impoverished. Happiness, as it is commonly understood in the western world, is made up of continuous moments of pleasure and the absence of pain.

These popular assumptions about happiness are of comparatively recent origin, emerging in the works of philosophers such as Jeremy Bentham, who argued in the 18th century that people act purely in their self-interest and the goal to which self-interest aspires is happiness. Ethical conduct, according to Bentham and James Mill (father to John Stuart), should therefore aspire to maximise pleasure and minimise pain.

This ready equation of goodness, pleasure and happiness flew in the face of ideas that had been of concern to philosophers since Aristotle argued that a person is not made happy by fleeting pleasures, but by fulfilment stemming from meaning and purpose. Or, as Nietzsche, the whirling dervish of 19th century philosophy, put it, “Man does not strive for happiness – only the Englishman does”.

Nevertheless, Western assumptions about happiness have remained broadly utilitarian, giving rise to the culturally constructed notion of happiness we see in TV commercials, where families become happier with every purchase, and hear from life coaches peddling the dubious hypothesis that self-belief can overcome the odds, whatever your social or economic circumstances.

Unless you are Mother Teresa, you have probably been spending your life up until the time you have children in a reasonably independent and even self-indulgent way. You work hard through the week but sleep in on the weekend. You go to parties. You come home drunk. You see your friends when you want. Babies have different ideas. They stick forks in electric sockets, go berserk in the car seat, and throw up on your work clothes. They want to be carried around in the day and wake in the night.

If society can solve its social problems then maybe parenting will cease to be a misery competition. Mothers might not be happy in a utilitarian or hedonistic sense but will lead rich and satisfying lives. Then maybe a stay-at-home dad can change a nappy without a choir of angels descending from heaven singing ‘Hallelujah’.

This is an edited extract from On Happiness: New Ideas for the 21st Century (UWA Publishing).


To live well, make peace with death

“What do we say to the god of death?” swordsman Syrio Forel asks Arya Stark in George R.R. Martin’s Game of Thrones (and in HBO’s TV series). “Not today.”

This short refrain marks the beginning of a sustained exploration of humanity’s relationship with death, told through Arya’s experiences. She becomes a murderer and later – at the House of Black and White, where death is worshipped as a god – an assassin and servant to that god.

Watching Arya’s story unfold, it seemed to me she’d never forgotten her former (you guessed it, now dead) teacher’s lesson – the only response to death is denial.

According to many thinkers, this isn’t surprising at all. Arya isn’t alone in running from death. Denying the reality of human mortality is a near universal behaviour. In The Antidote: Happiness for People Who Can’t Stand Positive Thinking, Oliver Burkeman considers the writing of Ernest Becker, whose argument Burkeman summarises: “Your life is one relentless attempt to avoid [thinking about death] – a struggle so elemental that … for much of the time you succeed.”

Becker believed that to avoid confronting our mortality, people invest in “immortality projects” – art, family, business, nations, war, charity, and so on. Immortality projects aim to overcome physical death by ensuring our continued existence through symbols or ideas.

The late David Bowie promised “We can be heroes”, and that’s precisely Becker’s point. Immortality projects are attempts to become heroes, thereby avoiding the emptiness of death.

But research suggests the common instinct to avoid thinking about our mortality might be worth pushing against. In a paper entitled ‘Deliver us from Evil’, researchers found that mortality avoidance can cloud our judgements about life and death issues, leading to unreflective decisions in high-stakes situations.

The study asked two groups of people to complete a long and generally dull questionnaire, then to read a short essay and tell researchers how strongly they agreed with it. The essay was a strong statement of support for the Bush administration’s controversial policies, including the invasion of Iraq. It included lines like “It annoys me when I hear other people complain that President Bush is using his war against terrorism as a cover for instituting policies that, in the long run, will be detrimental to this country … Mr. Bush has been a source of strength and inspiration to us all. God bless him and God bless America”.

The only difference between the two groups was that one questionnaire forced subjects to consider their own mortality. The “mortality salience” group were asked to “briefly describe the emotions that the thought of your own death arouses in you” and to “jot down, as specifically as you can, what you think will happen to you as you physically die, and once you are physically dead”.

For some, even the idea of answering these questions might feel uncomfortable, as it did for the subjects in the experiment. Researchers found the “mortality salience” subjects invested more strongly in the nearest immortality project to hand – the war in Iraq – and agreed strongly with the essay. By contrast, the control group generally disagreed with its sentiments.

This tells us something important – especially at a time when we are confronted with the threat and reality of terrorism and domestic violence nearly every day. When we’re forced to confront our own mortality, our default reaction may not be the product of rational thinking but an impulsive rejection of death. The researchers argued similarly:

The fact that reminders of death and the events of 9/11 enhanced support for President Bush in the present studies may not bode well for the philosophical democratic ideal that political preferences are the result of rational choice based on an informed understanding of the relevant issues.

This poses a challenge for ethical behaviour – some of the most serious ethical decisions people face are made when they are confronted with death. Most obviously, these include healthcare and political decisions with serious implications for the general populace. Is it possible to overcome mortality avoidance and make decisions based on moral values and principles instead?

Researchers weren’t optimistic on this point, but Burkeman indirectly suggests a way forward. His interest lay in whether thinking about death might enable us to live a happier life. He presents evidence that regular contemplation of death can enable us to avoid horror and shock when it ultimately arrives. “The more you become aware of life’s finitude, the more you will cherish it and the less likely you will be to fritter it away on distraction”.

The same might be true for mortality avoidance in decision making. If regular acquaintance with death can remove some of its shocking strangeness, perhaps we will be less likely to invest in immortality projects as a way of distracting ourselves from its reality. By making our peace with the fact that we are all going to die, we will be less likely to make decisions based on our fear of death. If ‘Deliver us from Evil’ is any indication, this might also save lives in the long run by ensuring serious decisions are made reasonably and not from fear.

Plus, doing so might also make you happier.