Greer has the right to speak, but she also has something worth listening to

Early on in my transition I was physically assaulted whilst boarding a bus. My back was turned, my hands occupied with digging in my purse for a ticket, when a solid fist struck me from the side – a sucker punch.

He yelled “TRANNY!” and trotted away at a mild gait, unhindered by any witnesses.

This thug’s annoyance resulted from my having just declined his offer of a nugget of crack cocaine in exchange for an alleyway blowjob. Since I was a transwoman waiting for public transit, I was clearly available to be propositioned for sex.

I know one thing for certain as I look back on that incident. This vicious bloke had never read Simone de Beauvoir. He had never read Germaine Greer.

And yet according to students from Cardiff University, Germaine Greer is somehow responsible for me getting smacked on the skull because of her views about transgender issues. What are these violent ideas? In her own words:

I don’t think that post-operative transgender men – M to F transgender people – are women . . . I’m not saying that people should not be allowed to go through that procedure, what I’m saying is it doesn’t make them a woman.

The petition written by Cardiff University Students’ Union’s women’s officer reads:

Such attitudes contribute to the high levels of stigma, hatred and violence towards trans people – particularly trans women – both in the UK and across the world.

So, an academic lecturing in Wales who understands “woman” to mean “an adult human female” is complicit in the murder of trans women (often poor and of a racial minority) at the hands of savage men?

Let’s be honest about liberals and their armchair activism. Slagging off older women on Twitter or from the ivory tower is a hell of a lot easier than confronting actual male violence.

Greer, following feminists such as Simone de Beauvoir, holds that male and female sexuation is not a myth or a personal feeling but a material state of embodiment within ethical circumstances. She rejects a world in which a bepenised Caitlyn Jenner is dubbed Woman of the Year without having actually lived as a woman for an entire year. Greer denies that feeling you are female inside is enough to define you as female.


I signed a petition in support of Germaine Greer because I support her right to speak. As an academic I’m not afraid of lively and vigorous argument. As a transsexual I’m tired of my experience being erased in service to genderism. As a human person I would like a world without gender where we’re free to express ourselves regardless of sex.

Trans activists tell us “gender is not sex” like a mantra bereft of enlightenment. Well, what is gender? They never answer. Where did it come from? They never answer.

Sexual difference is the reality of how mammals reproduce. Gender is a socially constructed hierarchy of sex-based norms imposed onto bodies. Feminism contends that the specific reproductive capacities of female persons are exploited and dominated by male power, with gender as a mechanism of control.

Transgenderism, however, disavows that biological sex is an actual, real category people can fall into. Instead, trans activists adhere to the claim that being male or female is a matter of arbitrary opinion. A male must really be female if ‘she’ possesses a subjectively identifiable cache of feminine personality traits. By her own declaration, she was always female and will always be female, because thinking makes it so.

Greer rejects gender identity as a coherent essence. Attentive to the practical circumstances of sexuality and power, Greer defines woman as the female sex, and this by definition is exclusive of males – no matter how arbitrarily feminine their inner disposition might be.


By defining sex as a materially determined fact and not an imaginary assignment, Greer states an anthropological truth. You may not fancy her tack, but objecting to her tone is not sufficient to overcome the feminist analysis of gender that Greer advances.

Gender is a synthetic ideology imposed on sex. To claim males who express “feminine” preferences must actually be female inside is to try to turn ideology into reality. And it is to do so on the basis of sex-based stereotypes.

Because these views can appear harsh, troubling, and oppositional to the worldview of many trans sympathisers, Greer’s opponents turn to the most regressive, chauvinistic tactic – aggressively enforcing silence. Rather than providing cogent arguments concerning gender identity, trans activists choose the tactic of no platforming.

Why are people afraid of Greer? Because she is a woman saying no to gender.

Read a different take on trans women and Germaine Greer here, by Helen Boyd.



Orphanage ‘voluntourism’ makes school students complicit in abuse

It’s great that Australian schools want to encourage their students to help others and gain perspective on their privilege. But visits to orphanages overseas are not the answer. To quote from the Friends International campaign, “Children are not tourist attractions”.

The first thing to understand is that orphanage life is damaging to children.

Children in orphanages are cared for as a group rather than as individuals. Life is regimented – each child has many different caregivers and little individual attention. Such care hurts children and may result in psychological damage and developmental delays.

Rates of physical and sexual abuse are also high in orphanages. The detrimental impact of institutional care is why Australia closed all of its orphanages decades ago.

Short-term orphanage volunteers who play with and care for children add to this harm. They increase the number of caregivers a child experiences and become yet more people who abandon them.

Most children living in orphanages around the world have at least one living parent.

Visiting students may not see these harms. Necessity has forced children in orphanages to act cute to get scarce attention – something called “indiscriminate affection”. School students easily mistake this for genuine happiness. Some of those who run orphanages will also encourage children to be friendly to the visitors in the hope this will increase donations.

Donations are a big problem. In some cases “orphans” are actually created by unscrupulous organisations who pay families to hand over their children in order to collect visitor donations. In Cambodia, orphanage numbers have doubled during a time when the number of children without parents has declined.

Australian schools sometimes seek to improve conditions in orphanages by funding education or medical resources. This can also draw children into orphanages. It’s a dire state of affairs when a loving family sends their child away because an orphanage is the only option for their child to go to school or get medical care.

This is what happened in Aceh, Indonesia, where 17 new orphanages were built for “tsunami orphans”. However, 98% of the children in these orphanages had families and had been placed there to gain an education.


Child protection authorities in Australia would not allow school students to go into the homes of vulnerable children so that they could gain an understanding of their situation. Schools should not take advantage of lower standards in other places to give their students a good experience.

What I know from talking to those involved in orphanage volunteering is that they often believe what they are doing is somehow exempt from these problems. Is it possible for school orphanage volunteering trips to be OK? What might harm mitigation look like?

Due diligence may reduce the possibility of working with orphanages that are exploiting children for financial gain.

Schools should resource orphanages in a way that avoids drawing children away from their families. They can do this by making the education programs or medical care they fund equally available to poor children in the community.

Schools can ensure their students do not interact with children. This prevents the harm to children arising from having too many caregivers. Students can instead take on tasks that free up caregivers to spend more time with children, such as cooking, cleaning or maintenance work.


When visiting the orphanages, school staff might educate their students about orphanages. They might talk about children having at least one parent who could care for them if given support.

Perhaps they could discuss the high rates of physical and sexual abuse within orphanages. Or explain child development principles and the importance of one-on-one care for young children. They can help their students understand why keeping children in families and out of orphanages is important.

Theoretically, it might be possible for schools to do all of these things but I am not aware of any school that has. In particular, not allowing students to interact with children removes what schools seem to consider an essential component of these trips.

Schools should develop sister-school relationships with overseas schools or even schools in disadvantaged communities in Australia. It’s great to see that some schools are already leading the way on this front.

Such arrangements foster understanding in a situation where there is more equality in the relationships and fewer pitfalls. If Australian schools are genuine about cross-cultural exchange, they shouldn’t be fostering last century’s model of child welfare.

Read Rev Dr Richard Umbers‘ counter-argument here.


The undeserved doubt of the anti-vaxxer

For the last three years or so I’ve been arguing with anti-vaccination activists. In the process I’ve learnt a great deal – about science denial, the motivations of alternative belief systems and the sheer resilience of falsehood.

Since October 2012 I’ve also been actively involved in Stop the AVN (SAVN). SAVN was founded to counter the nonsense spread by the Australian Vaccination-skeptics Network. According to anti-vaxxers SAVN is a Big Pharma-funded “hate group” populated by professional trolls who stamp on their right to free speech.

I’m afraid the facts are far more prosaic. There’s no Big Pharma involvement – in fact there’s no funding at all. We’re just an informal group of passionate people from all walks of life (including several research scientists and medical professionals) who got fed up with people spreading dangerous untruths and decided to speak out.

When SAVN started in 2009, antivax activists were regularly appearing in the media for the sake of “balance”. This fostered the impression of scientific controversy where none existed. Nowadays, the media understand the harm of false balance and the antivaxxers are usually told to stay home.

There’s a greater understanding that scientists are best placed to say whether or not something is scientifically controversial. (Sadly we can’t yet say the same for the discussion around climate change.) And there’s much greater awareness of how wrong – and how harmful – antivax beliefs really are.


No Jab, No Pay

This shift in attitudes has been followed by significant legislative change. Last year NSW introduced ‘No Jab, No Play’ rules. These gave childcare centres the power to refuse to enrol non-vaccinated children. Queensland and Victoria are planning to follow suit.

In April, the Abbott government introduced ‘No Jab, No Pay’ legislation. Conscientious objectors to vaccination could no longer access the Supplement to the Family Tax Benefit Part A payment.

The payment has been conditional on children being vaccinated since 2012, as was the payment it replaced. But until now vaccination refusers could still access the supplement by having a “conscientious objection” form signed by a GP or claiming a religious belief exemption. The new legislation removes all but medical exemptions.

The change closes loopholes that should never have been there in the first place. Claiming a vaccination supplement without vaccinating is rather like a childless person insisting on being paid the Baby Bonus despite being morally opposed to parenthood.

The new rules also make the Child Care Benefit (CCB) and Child Care Rebate (CCR) conditional on vaccinating children. That’s not a trivial impost – estimates at the time of the announcement suggested some families could lose around $15,000 over four years.

What should we make of this? A necessary response to an entrenched problem or a punitive overreaction?

Much of the academic criticism of the policy has been framed in terms of whether it will in fact improve vaccination rates. Conscientious objector numbers do now seem to be falling, although it remains to be seen whether this is due to the new policies.

Embedded in this line of criticism are three premises:

  • Improvements in the overall vaccination rate will come through targeting the merely “vaccine-hesitant” population.
  • Targeting the smaller group of hard core vaccine refusers, accounting for around 2% of families, would be counterproductive.
  • The hard core is beyond the reach of rational persuasion even via benefit cuts.

These are of course empirical questions and open to testing. I suspect the third assumption is true. It’s hard to see how someone who believes the entire medical profession and research sector is either corrupt, inept, or both, or that government and media deliberately hide “the Truth”, would ever be persuaded by evidence from just those sources.

A few antivaxxers even believe the germ theory of disease itself is false. In such cases no amount of time spent with a GP explaining the facts is going to help.


In recent years, antivax activists have tended to frame their objections to legislation like No Jab, No Pay in terms of individual rights and freedom of choice.

Yes, they base their “choices” on beliefs ranging from the ridiculous to the repugnant (including the claim that Shaken Baby Syndrome is really the result of vaccination not child abuse), but their fundamental objection is that the new policies are coercive. They make the medical procedure of vaccination compulsory, which they regard as a violation of basic human rights.

Part of this isn’t in dispute – these measures are indeed coercive. Whether they amount to compulsory vaccination is a more complex question. In my view they do not, because they withhold payments rather than issuing fines or other sanctions, although that can still be a serious form of coercive pressure. Such moves also have a disproportionate impact on families who are less well-off, revealing a broader problem with using welfare to influence behaviour.

Nonetheless, it’s not particularly controversial that the state can use some coercive power in pursuit of public health goals. It does so in a range of cases – from taxing cigarettes to fining people for not wearing seatbelts. Of course there is plenty of room for disagreement about how much coercion is acceptable. Recent discussion in Canberra about so-called “nanny state” laws reflects such debate.

But vaccination doesn’t fall into the nanny state category because vaccination decisions aren’t just made by and for individuals. Several different groups rely on herd immunity to protect them. Herd immunity can only be maintained if vaccination rates within the community are kept at high levels. By refusing to contribute to a collective good they enjoy, vaccine refusers provide a classic example of the Free Rider Problem.

No Jab, No Pay legislation is not about people making vaccination decisions for themselves, but on behalf of their children. The suggestion that parents have some sort of absolute right to make health decisions for their children just doesn’t hold water. Children aren’t property, nor are our rights to parent our children how we see fit absolute. No-one thinks the choice to abuse or starve one’s child should be protected, for example.

And that gives the lie to the “pro-choice” argument against these laws – not all choices deserve respect.


Thinking in a vacuum

The pro-choice argument depends on the unspoken assumption that there is room for legitimate disagreement about the harms and benefits of vaccination. That gets us to the heart of what motivates a great deal of anti-vaccination activism – the issue of who gets to decide what is empirically true.

Antivax belief may play on the basic human fears of hesitant parents, but the specific contents of those beliefs don’t come out of nowhere. Much of that content emerges from what sociologists have called the “cultic milieu” – a cultural space that trades in “forbidden” or “suppressed” knowledge. This milieu is held together by a common rejection of orthodoxy for the sake of rejecting orthodoxy. Believe whatever you want – so long as it’s not what the “mainstream” believes.

This sort of epistemic contrarianism might make you feel superior to the “sheeple”, the unawake masses too gullible, thick or corrupted to see what’s really going on. It might also introduce you to a network of like-minded people who can act as a buffer from criticism. But it’s also a betrayal of the social basis of knowledge – our radical epistemic interdependency.

The thinkers of the Enlightenment bid us sapere aude, to “dare to know” for ourselves. Knowledge was no longer going to be determined by religious or political authority, but by capital-r Reason. But that liberation kicked off a process of knowledge creation that became so enormous that specialisation was inevitable. There is simply too much information now for any one of us to know it all.

Talk to antivaxxers and it becomes clear they’re stuck on page one of the Enlightenment project. As Emma Jane and Chris Fleming have recently argued, adherence to an Enlightenment conception of the individual autonomous knower drives much conspiracy theorising. It’s what happens when the Enlightenment conception of the individual as sovereign reasoner and sole source of epistemic authority confronts a world too complex for any individual to understand everything.

As a result of this complexity we are reliant on the knowledge of others to understand the world. Even suspicion of individual claims, persons, or institutions only makes sense against massive background trust in what others tell us.


Accepting the benefits of science requires us to do something difficult – something nothing in our evolutionary heritage prepares us to do. It requires us to accept that the testimony of our direct senses no longer has primary authority. And it requires us to accept the word of people we’ve never met who make claims we can never fully assess.

Anti-vaxxers don’t like that loss of authority. They want to think for themselves, but they don’t accept we can’t think in a vacuum. We do our thinking against the background of shared standards and processes of reasoning, argument and testimony. Rejecting those standards by making claims that go against the findings of science without using science isn’t “critical thinking”. No more than picking up the ball and throwing it is “better soccer”.

This point about authority tells us something ethically important too. Targeting the vaccine-hesitant rather than the hard core refusers makes a certain kind of empirical sense.

But it’s important to remember the hard core are the source of the misinformation that misleads the hesitant. In the end, the harm caused by antivax beliefs is due to people who abuse the responsibility that comes with free speech. Namely, the responsibility to only say things you’re entitled to believe are true.

Most antivaxxers are sincere in their beliefs. They honestly do think they’re doing the right thing by their children. That these beliefs are sincere, however, doesn’t entitle them to respect and forbearance. William Kingdon Clifford begins his classic 1877 essay The Ethics of Belief with a particularly striking thought experiment.

A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him at great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections.

He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors.

In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.

Note that the ship owner isn’t lying. He honestly comes to believe his vessel is seaworthy. Yet Clifford argues, “the sincerity of his conviction can in no way help him, because he had no right to believe on such evidence as was before him.”

In the 21st century nobody has the right to believe scientists are wrong about science without having earned that right through actually doing science. Real science, mind you, not untrained armchair speculation and frenetic googling. That applies as much to vaccination as it does to climate change, GMOs and everything else.

We can disagree about the policy responses to the science in these cases. We can also disagree about what financial consequences should flow from removing non-medical exemptions for vaccination refusers. But removing such exemptions sends a powerful signal.

We are not obliged to respect harmful decisions grounded in unearned beliefs, particularly not when this harms children and the wider community.


There’s no good reason to keep women off the front lines

The US military may finally be coming around on the question of women on the front lines.

In a confidential briefing on September 30, military leaders presented their recommendations on women on the front lines to the House Armed Services Subcommittee on Military Personnel and to Defense Secretary Ashton Carter.

In the 1940s, the US military faced similar debates regarding black service personnel. Arguments regarding unit cohesion and operational capability were the most prominent against integration of white and black personnel. With the power of hindsight, we can see those arguments for what they were – scare tactics intended to keep the military segregated.

The same arguments have returned today. At the direction of then Defense Secretary Leon Panetta, the US Army conducted a two-year study to develop gender-neutral standards for specialist roles currently closed to women. The success of this standardised approach was demonstrated recently when two women graduated from the Army Ranger School.

There have been vocal critics of allowing women to attempt the Army Ranger School course. Some claim the standards were lowered for these women. The Army denied this at the Ranger School graduation ceremony, and the Chief of Army Public Affairs rejected the allegations again, describing them as “pure fiction”.

These allegations are unlikely to settle down any time soon. A Congressman has requested service records of the women who graduated to investigate “serious allegations” of bias and the lowering of standards by Ranger School instructors.

This incident reveals the depth of scepticism in some quarters regarding women’s ability to serve alongside men. The standardised approach has settled the question of operational capability – and the other arguments against women’s combat service are equally weak.

The potential for women to be captured and raped has been raised by opponents of women serving in combat units. This discussion ignores the sad reality – women in defence are much more likely to be sexually assaulted by their own troops than by the enemy. The 2013 Department of Defense report into sexual assault found that while women make up 14.5% of the US military, they make up 86% of sexual assault victims.


Of the 301 reports of sexual assault in combat zones made to the Department of Defense in 2013, only 12 involved foreign military personnel. The vast majority of sexual abuse victims in combat areas were abused by their own comrades, not the enemy.

Sexual abuse in the military has been a problem for decades. Why would it increase if we allowed women in combat? Rape of captured soldiers has also not been limited to women. Many men have also been sexually assaulted on capture. Sexual assault in this sphere is not about sexual desire or gratification – it’s about power and denigration of your enemy.

The second argument suggests women in combat units will affect unit cohesion. First, “the boys” won’t be able to be themselves. And second, if a woman is injured in battle men will be unable to focus on the mission and instead will be driven to protect their female colleagues.

The first argument raises a question about military culture. Why is behaviour considered inappropriate around women tolerated at all? The second argument insults currently serving soldiers by calling their professionalism and commitment to the mission into question.


Soldiers overcome a range of powerful instincts in a firefight – including protecting their own lives. To suggest soldiers would ignore the mission in favour of some other goal undervalues the extent of their military professionalism.

There is also an elephant in the room. Women have been serving in combat roles for years – as pilots, on ships, as interpreters and in female engagement teams. For these women a decision regarding the position of women in combat is irrelevant – they are already on the front lines.

The Australian situation sits in stark contrast to that of the US. Gay and lesbian members have served openly for a decade, women have been fully integrated into combat units since 2013, and the Australian Defence Force now actively recruits transgender personnel.

Australia has been able to integrate women, gay, lesbian and transgender soldiers into combat units without affecting operational capability.

Hopefully the US Defense Secretary will follow the advice of his Chiefs of Staff and the leadership of the ADF. He should allow military personnel to serve in all roles in the military according to universal standards rather than chromosomes or genitalia.


HSC exams matter – but not for the reasons you think

Every year at around the time of the Higher School Certificate (HSC) exams, the same messages appear. The HSC isn’t everything – don’t stress! One year the then NSW premier Mike Baird weighed in with, “Life isn’t defined by your exams. It begins after they have finished.”

I remember getting those messages when I did the HSC but they seemed hard to swallow at the time. I’d spent 13 years being told of the importance of school marks and HSC results. High achievers earned awards. The importance of ‘rankings’ put me in competition with my peers and I measured success in Band Sixes.


If we’re going to convince students not to stress too much about results we need to do more than tell them to relax. Years of conditioning makes students believe the HSC and Australian Tertiary Admission Rank (ATAR) scores have the power to determine their future. For some, the numbers can determine their self-worth.

What we need to do is explain the moral place of education in our lives and how the HSC sits in relation to it.

Why do we worry about academic achievement at all?

One reason is because we recognise knowledge and learning as being beneficial to society. Prime minister Malcolm Turnbull talks to anyone who will listen about the importance of an agile innovation economy. Such an economy relies on creative thinking and education.

Sadly, we live in a world where not every person can receive an education. Still, if we’re wise, we can make sure that every person can benefit from education. As Francis Bacon famously put it, “knowledge is power”.

Knowledge controlled by a privileged few is a recipe for dictatorship. Used wisely it can provide the power to make our imperfect world a little bit better.

We don’t just value knowledge because it’s useful. Not all learning leads to new inventions, helps the poor or changes the world. That doesn’t mean it’s pointless. Knowledge is ‘intrinsically good’. Learning for learning’s sake is a completely reasonable and very human activity.

Excelling in academic life also takes more than just knowledge or intellect. It requires a curious mind, perseverance and open-mindedness among other things.

In this sense, the HSC results do matter. They show the extent to which students have developed certain virtues of mind and character.


What can this tell us about the HSC? A few things. First, the praise we heap on high achievers is not only about the number itself but about the virtues demonstrated in achieving the mark. These virtues aren’t unique to students who score high marks.

Some high-achieving students might be getting by on natural ability rather than any special virtue. This means the final result matters less than the way it was achieved.

Second, the HSC is an opportunity to reflect on the huge amount of knowledge gained over years of education. It’s a chance for students to be proud of what they’ve learned. But that’s all it is. The HSC tells students what they have learned up until this point. It does not predict the future.

Many people who have struggled with exams have flourished and many who have excelled in school have struggled in the real world. The markers of success in school, work and life cannot be fully represented in a single number – much less the worth or value of a person.

Finally, excellence in academic life takes more than individual virtue. It takes a decent slice of luck and help from others. Individual academic achievement is the product of collective effort. Teachers, parents, friends and factors beyond our control help determine both our success and our failure. This provides a dash of both perspective and humility.

HSC marks and ATAR scores try to represent a range of complex processes in a useful and efficient way. But it is those processes that really matter – not the final number itself.


Farhad Jabar was a child – his death was an awful necessity

In the flurry of emotion and analysis that followed the fatal shooting of police accountant Curtis Cheng in Parramatta by 15-year-old Farhad Jabar on 2 October 2015, one fact was more or less overlooked.

A child had been killed.

One of the tragedies of extremism is the way it can make us contemplate or perform acts that ordinarily are unthinkable. The police constable who shot and killed Jabar probably never imagined having to kill a child. And yet on that Friday afternoon, the unthinkable occurred.

Farhad Jabar didn’t seem like your typical kid. He shot a man in cold blood. It’s alleged he was reciting “Allahu Akbar” in the gunfight preceding his death, and a letter containing extremist rantings was reportedly found in his backpack.

But Jabar was a child. At his death he was effectively a child soldier – having allegedly been radicalised by extremist groups in Sydney.

We know how groups abroad use child soldiers, but we’ve rarely been confronted by it directly. We don’t see child soldiers in Australia. The context in this case – extremism, the murder of police staff and allegations of terrorism – made us less sensitive to the fact Jabar was a child.

This insensitivity needn’t be the subject of public cultural haranguing but we shouldn’t forget the fact of Jabar’s childhood. The police constable who killed him won’t – killing a child is arguably the ultimate moral line in the sand.

Even the “American Sniper”, Chris Kyle, was revolted by the idea of killing a child. Although the American Sniper film depicts Bradley Cooper’s Kyle taking the shot, the actual sniper never did so. In his book he writes, “I wasn’t going to kill a kid, innocent or not”.

Jabar was not innocent in any morally relevant sense. His actions meant he had forfeited his right not to be attacked – he had already killed one person and intended to kill several more.

The moral responsibility of the police officers was to neutralise the threat, and it’s reasonable to assume that no less harmful means were available. Killing may well have been the only option.

The constable was innocent of any wrongdoing. It was not the constable’s fault Jabar murdered a man in cold blood. Nor that there were no other means of neutralising him as a threat. In this sense, the constable was duty-bound to shoot Jabar.

But knowing that you’ve done your duty is likely to be cold comfort.

In my experience researching moral psychology among soldiers and veterans, I’ve learned that certain actions “stick” to the psyche more than others. The “stickiest” among them are acts that transgress deeply held moral and cultural norms.

These acts needn’t be crimes, either. Surviving when comrades did not, training accidents and collateral damage can all produce profound moral and psychological distress. In his book What It Is Like to Go to War, veteran Karl Marlantes writes:

“Killing someone without splitting oneself from the feelings that the act engenders requires an effort of supreme consciousness that, quite frankly, is beyond most humans.”

How much worse is the killing of a child?

In similarly tragic cases, the military sphere has forms of counselling that aim to do one of two things. They might ignore the moral question altogether and treat this trauma as basic PTSD – post-traumatic stress disorder – which it isn’t. Or they might explain the moral error in judgement – “you think you’ve done something wrong, but you actually haven’t”.

Neither would likely be helpful in a case like this.

Trauma related to the moral character of one’s actions isn’t PTSD in the standard sense. PTSD is about fear for one’s safety. This form of trauma, which I and others refer to as moral injury, is about guilt. Jonathan Shay, the psychiatrist who coined the term, calls this kind of trauma a “soul wound”.

Pointing out the error in judgement will probably be equally ineffective. “It’s not your fault” is well intended but – especially when repeated insistently – it can invalidate laudable moral emotions. To feel remorse at having killed a child – even a child soldier like Jabar – is to accurately grasp the tragedy of what has transpired.

Australian philosopher Rai Gaita suggests cases like this reveal the problems with our concept of responsibility. He writes, it is “wrong to say that we should concern ourselves with what we did rather than with the fact that we did it”.

Gaita tells us remorse “is a kind of dying to the world”. We don’t yet know with certainty the best ways to address remorse in therapy, but a likely starting point is for us as a community to recognise the tragedy of what transpired in Parramatta.

A child was killed. And a good person was forced to kill him.


Banning euthanasia is an attack on human dignity

Australia’s persistent anti-euthanasia stance is unfair, cruel and insensitive. It denies adults of sound mind the means to die on their own terms.

The current law on euthanasia restricts control and choice for certain terminally ill patients. It does so by denying them access to death-enabling drugs, information on how to administer them and appropriate medical support – including a physician’s assistance when needed.

The law compels certain individuals to die in ways they abhor and cannot easily escape. Meanwhile, others die peacefully and in control – often in their homes surrounded by loved ones and with appropriate medical assistance, albeit clandestine.

I’m always intrigued by the announcement of a prominent Australian’s death. The deceased invariably dies peacefully at home, with loved ones by their side. Why is it that the powerful and well connected often depart gently? Why are the less privileged compelled to endure irremediable suffering over a prolonged period?

I’m reminded of a story a young man told me last year. He spoke movingly about how he lost his mother under tragic circumstances that could have been avoided if not for the state’s stance on voluntary euthanasia.

His mother was suffering from a serious long-term mental illness. Her desperate pleas to her doctors to help end a life that she described as “a living hell” were consistently dismissed as irrational. She was ultimately forced to die a violent death at her own hand.

According to her son, the life she foresaw was a life that she did not want. There was very little the medical profession could do to help her end it. His mother’s only option was suicide, which she took one lonely night at home after manically swallowing a cocktail of prescribed pills washed down with alcohol.

Her neighbour discovered her decaying corpse, a plastic bag over her head, days after she died what must have been a horrible death.

This had a profound psychological impact on her son. He experienced depression, anxiety and complicated grief. The horror of his mother dying painfully and alone in such a setting is something that will stay with him to his grave.

A humane, just and civilised society should never insist on laws that allow such tragic deaths to continue. It should certainly not allow those rendered powerless through serious illness to suffer horrible deaths of this kind.

Society should do far more to empower the vulnerable through the provision of appropriate medical assistance, guidance and legal support. It should help them govern their life in a way that minimises suffering and delivers dignity in death.

Horrible deaths are not only restricted to the home. They also take place in hospitals and inpatient hospices. Years ago, I spoke with a senior oncologist at The Royal Melbourne Hospital. He spoke candidly about the problem in accessing life-ending medication for his patients.

He spoke of a seriously unwell middle-aged female patient who he regarded as a friend. She had no immediate family or close friends to help her finish her life well. She did not have a network of people who could help enact a plan that allowed her to die peacefully and with dignity in the comfort of her home. The only person who could help was her oncologist but he was legally unable to do what he understood would give her dignity in her death.

In this case both patient and doctor were locked under state control. They were denied choices available to those who have the good fortune, legal nous and medical support to implement their plans away from the state’s reach.

The contradictory and confused nature of our anti-euthanasia laws becomes apparent when viewed in light of the state’s stance on suicide. Suicide is not illegal. Australians are at liberty to take their own lives through a variety of different means, assuming they have the physical capacity to do so. Despite the grief suicide can cause to bereaved loved ones, it is nearly impossible – and arguably unethical – to prohibit.

If someone has the ability to end their life, they are free to do so. But those unable to end their life by their own hand are forced by the law to endure prolonged, unnecessary and irremediable suffering.

Anti-euthanasia advocates often argue palliative care is far more humane and caring than killing. They suggest more funding be directed to palliation rather than amending laws that allow the terminally ill to seek direct death. But those who take this line fail to acknowledge that some patients find death while under palliative sedation repugnant and unacceptable.

Denying a terminally ill person the option of choosing direct death over unwanted palliation is an infringement on their autonomy.

We need to appreciate that palliative care and physician-assisted death are not mutually exclusive. Indeed, terminally ill individuals who have high quality palliative care may be more open to the idea of assisted dying than those who do not.

Research conducted at Brunel University in London found terminal cancer patients in British hospices were more likely to consider doctor-assisted dying than those in hospitals. This contradicts the commonly-held view that assisted dying would decrease if options such as palliation and hospice care were readily available.

Voluntary euthanasia laws would not diminish the value of human life. They would enhance the prospect of a peaceful death by shifting control away from the state and other institutions. If individuals were granted control over this decision they would be empowered to achieve what they believe to be a good death.

If we are committed to delivering a good and peaceful death to all, then the law must extend personal autonomy, greater control and genuine informed choice to all Australians.

Nigel Biggar’s counter-argument is here.

In Australia, support is available at Lifeline 13 11 14, beyondblue 1300 224 636 and Kids Helpline 1800 551 800.


How to respectfully disagree

Why do we find it so hard to discuss difficult issues? We seem to have no trouble hurling opinions at each other. It is easy enough to form into irresistible blocks of righteous indignation. But discussion – why do we find it so hard?

What happened to the serious playfulness that used to allow us to pick apart an argument and respectfully disagree? When did life become ‘all or nothing’, a binary choice between ‘friend or foe’?

Perhaps this is what happens when our politics and our media come to believe they can only thrive on a diet of intense difference. Today, every issue must have its champions and villains. Things that truly matter just overwhelm us with their significance. Perhaps we feel ungainly and unprepared for the ambiguities of modern life and so clutch on to simple certainties.

Indeed, I think this must be it. Most of us have a deep-seated dislike of ambiguity. We easily submit to the siren call of fundamentalists in politics, religion, science, ethics … whatever. They sing to us of a blissful state within which they will decide what needs to be done and release us from every burden except obedience.

But there is a price to pay for certainty. We must pay with our capacity to engage with difference, to respect the integrity of the person who holds a principled position opposed to our own. It is a terrible price we pay.

The late, great cultural theorist and historian, Robert Hughes, ended his history of Australia, The Fatal Shore, with an observation we would do well to heed:
The need for absolute goodies and absolute baddies runs deep in us, but it drags history into propaganda and denies the humanity of the dead: their sins, their virtues, their failures. To preserve complexity, and not flatten it under the weight of anachronistic moralising, is part of the historian’s task.

And so it is for the living. The ‘flat man’ of history is quite unreal. The problem is too many of us behave as if we are surrounded by such creatures. They are the commodities of modern society, the stockpile to be allocated in the most efficient and economical manner.

Each of them has a price, because none of them is thought to be of intrinsic value. Their beliefs are labels, their deeds are brands. We do not see the person within. So, we pitch our labels against theirs – never really engaging at a level below the slogan.

It was not always so. It need not be so.

I have learned one of the least productive things one can do is seek to prove to another person they are wrong. Despite knowing this, it is a mistake I often make and always end up wishing I had not.

The moment you set out to prove the error of another person is the moment they stop listening to you. Instead, they put up their defences and begin arranging counter-arguments (or sometimes just block you out).

Far better it is to make the attempt (and it must be a sincere attempt) to take the person and their views entirely seriously. You have to try to get into their shoes, to see the world through their eyes. In many cases people will be surprised by a genuine attempt to understand their perspective. In most cases they will be intrigued and sometimes delighted.

The aim is to follow the person and their arguments to a point where they will go no further in pursuit of their own beliefs. Usually, the moment presents itself when your interlocutor tells you there is a line, a boundary they will not cross. That is when the discussion begins.

At that point, it is reasonable to ask, “Why so far, but no further?” Presented as a case of legitimate interest (and not as a ‘gotcha’ moment) such a question unlocks the possibility of a genuinely illuminating discussion.

To follow this path requires mutual respect and recognition that people of goodwill can have serious disagreements without either of them being reduced to a ‘monstrous’ flat man of history. It probably does not help that so much social media is used to blaze emotion or to rant and bully under cover of anonymity. People now say and do things online that few would dare if standing face-to-face with another.

It probably does not help that we are becoming desensitised to the pain we cause the invisible victims of a cruel jibe or verbal assault. Nor does it help that the liberty of free speech is no longer understood to be matched by an implied duty of ethical restraint.

I am hoping the concept of respectful disagreement might make a comeback. I am hoping we might relearn the ability to discuss things that really matter – those hot, contentious issues that justifiably inflame passions and drive people to the barricades. I am hoping we can do so with a measure of goodwill. If there is to be a contest of ideas, then let it be based on discussion.

Then we might discover there are far more bad ideas than there are bad people.


You can’t save the planet. But Dr. Seuss and your kids can.

Dr. Seuss’ The Lorax explores the consequences of deforestation and the environmental costs of development. It concludes with the Once-ler – the narrator of the story, who is principally responsible for the decimation of the Truffula trees – entrusting the last Truffula seed to a young boy. He implores the child, “Grow a forest. Protect it from axes that hack. Then the Lorax and all of his friends might come back.”

The Once-ler, wracked by guilt for his complicity in this environmental disaster, passes along responsibility for reversing damage done by his generation to a child. The Lorax suggests the young take on duties of planetary stewardship where adults have failed.

Is this fair? Perhaps the generation responsible for mucking up the planet has lost its moral authority to try and save it. So the task of conservation is inherited by those with a longer-term stake in its future.

That adults might vest hope for a better planet in our children is both edifying and deeply troubling. Edifying because the environmental record of the world’s children bests that of adults by default. The young have not yet begun to reproduce the patterns of behavior that implicate their parents – resource depletion, biodiversity loss, climate change…

Troubling because they may reproduce them in future. We cannot realistically expect young people socialised into a world of willful environmental neglect to behave much differently than their parents have. Adults cannot so easily absolve themselves of the responsibility of addressing environmental harm they have caused.

Rather than saving the planet, a more modest objective might be to refrain from making it much worse for our children. Even this is a daunting prospect. Patterns of energy use dependent on fossil fuels all but guarantee that global temperatures will continue to rise. For most, climate change is no longer a debate about “if” but “how severe?”

We may still hope to make the planet better for our children in other ways. For instance, by adding to the richness of human culture and the stock of beneficial technologies. When it comes to climate though, a more appropriate aim might be to refrain from chopping that last Truffula tree. To preserve our remaining forests so our children might be able to see the proverbial Brown Bar-ba-loots, Swomee-Swans or Humming Fish in their native habitats rather than natural history museums.

Doing this will be challenging. It will require an often uncomfortable reflection on what drives global environmental degradation. In Seuss’ tale the insatiable demand for thneeds – the ultimate commodity – drives the Truffula deforestation. This implicates our heedless consumerism in the causal chain of degradation alongside the Once-ler.

When we consume things we don’t need, and when the industry around these commodities is obviously unsustainable despite our obliviousness to this fact, we contribute to resource depletion. What’s more, we add to the attitudes and norms that suggest this is a private matter, answerable only to private consumer preferences and not larger public concerns for equity or sustainability.

Worst of all, we teach our children to do the same.

The first step in reducing our negative impact upon the planet must be to understand how and where we make the impact we do. We need to understand alternatives that yield comparable value to us with a lighter toll upon the planet. Consuming more conscientiously will leave our children a better planet and make them better citizens of it. Though it requires us to consume differently and less.

Thinking about the long-term consequences of our choices will also help. We cannot plausibly claim to value our children’s future while discounting the future value of current investments in sustainable infrastructure or future costs of unsustainable current practices.

Our deeds announce our concern for the welfare of future generations more accurately than our words or thoughts. Thinking about such choices must be accompanied by some changes in course. Citizens must demand better public choices be made rather than acquiescing to worse ones as unavoidable products of political inertia or inviolable market forces.

The tendency to shift problems across borders is no less insidious than passing them to our children or grandchildren. To help make better children for our planet we must teach them that out of sight is not out of mind.

As role models for our children our success in stopping environmental harm will matter less than our sincerity in the efforts we make. If we honestly try to maintain the planet, our example will help make them into the kind of people our planet needs.

For as the Once-ler interprets the Lorax’s cryptic final word, “UNLESS someone like you cares a whole awful lot, nothing is going to get better. It’s not.”


Defining mental illness – what is normal anyway?

The fact that mental illness remains poorly understood is not particularly surprising. Even the authoritative Diagnostic and Statistical Manual of Mental Disorders struggles to reach a defensible definition – and it’s in its fifth edition!

Mental illness is often perceived as a chemical imbalance in the brain. This certainly accounts for an element of mental illness, but not all of it. We need to recognise that our definitions of illness are determined as much by our interpretations of those physical or mental changes as they are by the changes themselves.

We define illness based on whether a physical or mental change is incompatible – that is, maladaptive – with a person’s environment. Because our environments are both social and physical, our definitions entail value judgements of how an individual should behave.

The late Oliver Sacks described an island on which hereditary total colour blindness meant the majority of the population were born colour-blind. Communal practices there reflect the needs of that majority: the community is most active at dusk and dawn, when the light provides the best vision for the colour-blind. On the island, it is normal colour vision that is maladaptive, because social practices are designed around colour blindness.

This highlights the cultural influences involved in defining illness. In the case of mental illness, the normative element – what society sees as acceptable or unacceptable – is often more controversial and difficult to identify.

In the 1960s a movement called “anti-psychiatry” emerged under the influence of French philosopher Michel Foucault. The movement critiqued the assumptions underlying our concept of mental illness.

Anti-psychiatrist Thomas Szasz considers a person who believes he is Napoleon. To diagnose a disorder, the clinician would need to prove the patient is not Napoleon. Because Western society does not tend to embrace the idea of reincarnation, the man’s belief is maladaptive to his environment. But it would not be so everywhere. Societies with a firm faith in reincarnation, for instance, may not see the man’s beliefs as evidence of mental illness. John of God, a faith healer in Brazil, claims over 20 entities including King Solomon use his body as a healing vessel. Rather than being institutionalised, he is venerated.

In these cases social perception seems to be influencing our definitions of mental illness. Many will argue that the illnesses themselves still exist, but that cultural beliefs simply lead to a failure to diagnose. This may be true, but other cases are less clear. In some situations, culture itself may be the cause of mental illness.

Take the sculptress Camille Claudel. She decided to pursue a career in the arts – an unusual decision for a woman in the 19th century. Claudel fell in love with the pre-eminent sculptor Auguste Rodin. She became his lover, and they started working together.

Claudel rose from his student to his equal, but Rodin’s reputation distracted from her own achievements. Over time, this led to feelings of exploitation, and paranoia. She was forcibly admitted to a mental institution, where she lived for 30 years.

In a letter, her brother explained to her, “genius doesn’t become women”. Was Claudel suffering a psychological illness, or reacting in the way we’d expect of someone continually overlooked and exploited?

The story of “hysteria” is a less tragic case. In the 19th century, European women began to challenge oppressive, patriarchal value systems. They became depressed, unruly, and threw tantrums. Sexual adventures and explorations were considered symptoms of hysteria.

Some believed hysteria to be a symptom of women’s maternal yearnings. Later, Sigmund Freud suggested hysteric women were sexually unfulfilled, so the prevailing treatment for hysteria became the massage of female genitals by doctors – physician-assisted masturbation.

Of course, we now know hysteria isn’t real. The diagnosis pathologised – made abnormal – women’s reasonable expectation for political freedom and sexual autonomy.

This highlights the fact that each era tends to have a mental illness du jour, which seems to emerge as a product of social changes and the incompatibility of certain behaviours with those changes. One era’s shaman is another’s schizophrenic.

Joan of Arc’s claims to hear the voice of God weren’t widely disputed. Today, we’d suspect she was in need of psychological intervention. Clare of Assisi lived an ascetic life to honour God. She spent the last 27 years of her life in bed, too weak to move. Today, she’d be treated for anorexia nervosa – instead, she was named a saint.

What is our mental illness du jour? Dutch scientist and author Trudy Dehue describes a “depression epidemic”. She argues many cultures today fail to allow for the possibility people might not always be happy. Social norms make being happy a kind of imperative – “thou shalt be happy”.

The identification of new disorders and increasing diagnoses of previously uncommon ones can reveal social changes. For instance, less play-oriented modes of education may make ordinary childhood spontaneity seem deviant. Increasing rates of ADHD diagnosed among children might have less to do with behavioural problems in children and more to do with our expectations of both them and their attention spans.

That’s not to say mental illness isn’t real, but what we define as a mental illness doesn’t develop in a vacuum. The medical system has a great deal to offer the mentally unwell. We could further support them by acknowledging how our own assumptions of what constitutes ‘normal’ influence our attitudes toward mental illness.