The terrible ethics of nuclear weapons

“I have blood on my hands.” This is what Robert Oppenheimer, the mastermind behind the Manhattan Project, told US President Harry Truman after the bombs he created were dropped on Hiroshima and Nagasaki, killing an estimated 226,000 people.

The President reassured him, but in private was incensed by the ‘cry-baby scientist’ for his guilty conscience and told Dean Acheson, his Secretary of State, “I don’t want to see that son of a bitch in this office ever again.”  

With the anniversary of the bombings falling this week, while Christopher Nolan’s Oppenheimer is in cinemas, it is a good moment to reflect on the two people most responsible for the creation and use of nuclear weapons: one wracked with guilt, the other with a clean conscience.

Who is right? 

In his speech announcing the destruction of Hiroshima and Nagasaki, Truman provided the base from which apologists sought to defend the use of nuclear weapons: it “shortened the agony of war.”  

It is a theme developed by American academic Paul Fussell in his essay Thank God for the Atom Bomb. Fussell, a veteran of the European Theatre, defended the use of nuclear weapons because it spared the bloodshed and trauma of a conventional invasion of the Japanese home islands.  

Military planners believed that this could have resulted in over a million casualties and hundreds of thousands of deaths of service personnel, to say nothing of the effect on Japanese civilians. In the lead-up to the invasion the Americans minted half a million Purple Hearts, medals for those wounded in battle; this supply has lasted through every conflict since. We can see here the simple but compelling consequentialist reasoning: war is hell and anything that brings it to an end is worthwhile. Nuclear weapons, while terrible, saved lives.

The problem is that this argument rests on a false dichotomy. The Japanese government knew they had lost the war; weeks before the bombings the Emperor instructed his ministers to seek an end to the war via the good offices of the Soviet Union or another neutral state. There was a path to a negotiated peace. The Allies, however, wanted unconditional surrender.  

We might ask whether this was a just war aim, but even if it was, there were alternatives: less indiscriminate aerial attacks and a naval blockade of war materiel flowing into Japan would eventually have compelled surrender. The point here isn’t to play at ‘armchair general’, but rather to recognise that the path to victory was never binary.

However, this reply is inadequate, because it doesn’t address the general question about the use of nuclear weapons, only the specific instance of their use in 1945. There is a bigger question: is it ever ethical to use nuclear weapons? The answer must be no.

Why? 

Because, to paraphrase American philosopher Robert Nozick, people have rights and there are certain things that cannot be done to them without violating those rights. One such right must be against being murdered, because that is what the wrongful killing of a person is: murder. If we have these rights, then we must also be able to protect them, and just as individuals can defend themselves, so too can states as the guarantors of their citizens’ rights. This is a standard categorical check on the consequentialist reasoning of the military planners.

The horror of war is that it creates circumstances where ordinary ethical rules are suspended, where killing is not wrongful.

A soldier fighting in a war of self-defence may kill an enemy soldier to protect themselves and their country. However, this does not mean that all things are permitted. The targeting of non-combatants such as wounded soldiers, civilians, and especially children is not permitted, because they pose no threat.   

We can draw an analogy with self-defence: if someone is trying to kill you and you kill them while defending yourself, you have not done anything wrong; but if you deliberately killed a bystander to stop your attacker, you have done something wrong, because the bystander cannot be held responsible for the actions of your assailant.

It is a terrible reality that non-combatants die in war and sometimes it is excusable, but only when their deaths were not intended and all reasonable measures were taken to prevent them. Philosopher Michael Walzer calls this ‘double intention’: one must intend not to harm non-combatants as the primary element of one’s act, and if it is likely that non-combatants will be collaterally harmed, one must take due care to minimise the risks (even if doing so puts one’s own soldiers at risk).

Hiroshima does not pass the double intention test. It is true that Hiroshima was a military target and therefore legitimate, but due care was not taken to ensure that civilians were not exposed to unnecessary harm. Nuclear weapons are simply too indiscriminate and their effects too terrible. There is almost no scenario for their use that does not include the foreseeable and avoidable deaths of non-combatants. They are designed to wipe out population centres, to kill non-combatants. At Hiroshima, for every soldier killed there were ten civilian deaths. Nuclear weapons have only become more powerful since then.  

Returning to Oppenheimer and Truman, it is impossible not to feel that the former was in the right. Oppenheimer’s subsequent opposition to the development of more powerful nuclear weapons and support of non-proliferation, even at the cost of being targeted in the Red Scare, was a principled attempt to make amends for his contribution to the Manhattan Project.  

The consequentialist argument that the use of nuclear weapons was justified because in shortening the war it saved lives and minimised human suffering can be very appealing, but it does not stand up to scrutiny. It rests on an oversimplified analysis of the options available to allied powers in August 1945; and, more importantly, it is an intrinsic part of the nature of nuclear weapons that their use deliberately and avoidably harms non-combatants. 

If you are still unconvinced, imagine if the roles had been reversed in 1945: one could easily say that Sydney or San Francisco were legitimate targets just like Hiroshima and Nagasaki. If the Japanese had dropped an atomic bomb on Sydney Harbour on the grounds that it would compel Australia to surrender, thereby ending the “agony of war”, would we view this as ethically justifiable or as an atrocity to tally alongside the Rape of Nanking, the death camps of the Burma Railway, or the terrible human experiments conducted by Unit 731? It must be the latter, because otherwise no act, however terrible, can be prohibited and war truly becomes hell.


The philosophy of Virginia Woolf

While the stories of Virginia Woolf are not traditionally considered works of philosophy, her literature has a lot to teach us about self-identity, transformation, and our relationship to others.

“A million candles burnt in him without his being at the trouble of lighting a single one.” – Virginia Woolf, Orlando

Woolf was not a philosopher. She was not trained as such, nor did she assign the title to herself, and she did not produce work which follows traditional philosophical construction. However, her writing nonetheless stands as a comprehensive, albeit unique, work of philosophy.

Woolf’s books, such as Orlando, The Waves, and Mrs Dalloway, are philosophical inquiries into the limits of the self and our capacity for transformation. At some point we all may feel a bit trapped in our own lives, worrying that we are not capable of making the changes needed to break free from a routine that has, in time, turned mundane.

Woolf’s characters and stories suggest that our own identities are endlessly transforming, whether we will them to or not.

More classical philosophers, like David Hume, explore similar questions in a more forthright manner. Also reflecting on the stability of personal identity, Hume writes in A Treatise of Human Nature:

“Our eyes cannot turn in their sockets without varying our perceptions. Our thought is still more variable than our sight; and all our other senses and faculties contribute to this change; nor is there any single power of the soul, which remains unalterably the same, perhaps for one moment. The mind is a kind of theatre…”  

Woolf’s books make similar arguments. Rather than stating them in these explicit forms, she presents us with characters who depict the experience that Hume describes. Woolf’s surrealist story, Orlando, follows the long life of a man who one day awakens to find himself a woman. Throughout, we are made privy to the way the world and Orlando’s own mind alter as a result:

“Vain trifles as they seem, clothes have, they say, more important offices than to merely keep us warm. They change our view of the world and the world’s view of us.” 

When Hume describes the mind as a theatre, he suggests there is no core part of ourselves that remains untouched by the inevitability of layered experience. We may be moved to change our fashion sense and, as a result, see the world treat us differently in response to this change. In turn, we are transformed, either knowingly or unknowingly, by whatever this new treatment may be.  

Hume suggests that, just as many different acts take place on a single stage, so our personal identities ebb and flow depending on whatever performance may be put before us at any given time. After all, the world does not merely pass us by; it speaks to us, and we remain entangled in conversation.

While Hume constructs this argument in a largely classical philosophical form, Woolf explores similar themes in her works in more experimental ways:

“A million candles burnt in him”, she writes in Orlando, “without his being at the trouble of lighting a single one.”

Using the gender-transforming character of Orlando, Woolf examines identity, its multiplicity, and how, despite its being an embodied sensation, our sense of self both wavers and feels largely out of our control. In the novel, any complexities in Orlando’s change of gender are overshadowed by the multitude of other complexities in the many transformations one undergoes in life.

Throughout the book, readers are also given the opportunity to reflect on their own conceptions of self-identity. Do they also feel this ever-changing myriad of passions and selves within them? The character of Orlando allows readers to consider whether they also feel as though the world oftentimes presents itself unbidden, with force, shuffling the contents of their hearts and minds again and again. While Hume’s Treatise aims to convince us that who we are is constantly subject to change, Orlando gives readers the chance to spend time with a character actively embroiled in these changes.  

A Room of One’s Own presents a collated series of Woolf’s essays exploring the topic of women and fiction. Though the book is non-fiction, and evidently a work of critical theory, in it Woolf meditates on her own experience of acquiring a large lifetime inheritance. She reflects on the ways in which her assured income not only materially transformed her capacity to pursue creative writing, but also radically transformed her perceptions of the individuals and social structures surrounding her:

“No force in the world can take from me my [yearly] five hundred pounds. Food, house and clothing are mine for ever. Therefore not merely do effort and labour cease, but also hatred and bitterness. I need not hate any man; he cannot hurt me. I need not flatter any man; he has nothing to give me. So imperceptibly I found myself adopting a new attitude towards the other half of the human race. It was absurd to blame any class or any sex, as a whole. Great bodies of people are never responsible for what they do. They are driven by instincts which are not within their control. They too, the patriarchs, the professors, had endless difficulties, terrible drawbacks to contend with.”

While Hume tells us, quite explicitly, about the fluidity of the self and of the mind’s susceptibility to its perceptual encounters, Woolf presents her readers with a personal instance of this very phenomenon. The acquisition of a stable income meant her thoughts about her world shifted. Woolf’s material security afforded her the freedom to choose how she interacted with those around her. Freed from dependence, she was no longer preoccupied by hatred and bitterness, leaving space for empathy and understanding. The social world, which remained largely unchanged, began telling her a different story. With another candle lit, and the theatre of her mind changed, her perception of the world before her was also transformed, as was she.

If a philosopher is an individual who provokes their audiences to think in new ways, who poses both questions and ways in which those questions may be responded to, we can begin to see the philosophy of Virginia Woolf. Woolf’s personal philosophical style is one that does not set itself up for a battle of agreement or disagreement. Instead, it contemplates ideas in theatrical, enlivened forms that seem more preoccupied with understanding and exploration than with mere agreement.


What is all this content doing to us?

Three years ago, the eye-wateringly expensive television show See, starring Jason Momoa, aired. The show’s budget reportedly tapped out at around $15 million per episode, a ludicrous amount of money even in today’s age.

As to what See is about – well, that’s not really worth discussing. Because chances are you haven’t seen it, and, in all likelihood, you’re not going to. What matters is that this was a massive piece of content that sank without a single trace. Ten years ago, a product like that would have been a big deal, no matter whether people liked it or not. It would have been, regardless of reception, an event. Instead, it’s one of a laundry list of shows that feel like they simply don’t exist. 

This is what it means to be in the world of peak content. Every movie you loved as a kid is being rebooted; every franchise is being restarted; every actor you have even a passing interest in has their own four-season-long show.

But what is so much content doing to us? And how is it affecting the way we consider art? 

The tyranny of choice

If you own a television, chances are you have found yourself parked in front of it, armed with the remote, at a complete loss as to what to watch. Not because your choices are limited. But because they are overwhelmingly large. 

This is an example of what is known as “the tyranny of choice.” Many of us might believe that more choice is necessarily better for us. As the philosopher Renata Salecl outlines, if you have the ability to choose from three options, and one of them is taken away, most of us would assume we have been harmed in some way. That we’ve been made less free. 

But the social scientists David G. Myers and Robert E. Lane have shown that an increase in choice tends to lead to a decrease in overall happiness. The psychologist Barry Schwartz has explained this through what he understands as our desire to “maximise” – to get the best out of our decisions.

And yet trying to decide what the best decision will be takes time and effort. If we’re doing that constantly, forever in the process of trying to analyse what will be best for us, we will not only wear ourselves down – we’ll also compare the choice we made against the other potential choices we didn’t take. It’s a kind of agonising “grass is always greener” process, where our decision will always seem to be the lesser of those available. 

The sea of content we swim in is therefore work. Choosing what to watch is labour. And we know, in our heart of hearts, that we probably could have chosen better – that there’s so much out there, that we’re bound to have made a mistake, and settled for good when we could have watched great. 

The big soup of modern life

When content begins to feel like work, it begins to feel like… well, everything else. So much of our lives are composed of labour, both paid and unpaid. And though art should, in its best formulation, provide transcendent moments – experiences that pull us out of ourselves, and our circumstances – the deluge of content has flattened these moments into just more of the capitalist stew.

Remember how special the release of a Star Wars movie used to feel? Remember the magic of it? Now, we have Star Wars spin-offs dropping every other month, and what was once rare and special has become a series of diminishing returns. And these diminishing returns are not being made for the love of it – they’re coming from a cynical, money-grubbing place. Because they need to make money, due in no small part to their ballooning budgets, they are less adventurous, rehashing past story beats rather than coming up with new ones; playing to fan service, instead of challenging audiences. After all, it’s called show business for a reason, and mass entertainment is profit-driven above all else, no matter how much it might enrich our lives.

This kind of nullifying sameness of content, made by capitalism, was first outlined by the philosophers Theodor Adorno and Max Horkheimer. “Culture now impresses the same stamp on everything,” they wrote in Dialectic of Enlightenment. “Films, radio and magazines make up a system which is uniform as a whole and in every part.” 

Make a choice

So, what is to be done about all this? We obviously can’t stop the slow march of content. And we wouldn’t even want to – art still has the power to move us, even as it comes in a deluge.  

The answer, perhaps, is intentionality. This is a mindfulness practice – thinking carefully about what we’re doing, making every choice with weight and thrust. Not doing anything passively, or just because you can, but applying ourselves fully to what we decide, and accepting that this is the decision we have made.

The filmmaker Jean-Luc Godard once said that at the cinema audiences look up, while at home, watching TV, they look down. As it turns out, we look down at far too much these days, regardless of whether we’re at home or in the cinema. We take content for granted; allow it to blare out across us; reduce it to the status of wallpaper, just something to throw on and leave in the background. It becomes less special, and our relationship to it becomes less special too.

The answer: looking up. Of course, being more aware of what we consume, and when we consume it, and why, won’t stop capitalism. But it will change our relationship with art. It will make us decision-makers – active agents who engage seriously with content and learn things through it about our world. It will preserve some of that transcendence. And it will reduce the exhausting tyranny of choice, and make these decisions feel impactful.


Big Thinker: Judith Jarvis Thomson

Judith Jarvis Thomson (1929-2020) was one of the most influential ethicists and metaphysicians of the 20th century. She’s known for changing the conversation around abortion, as well as modernising what we now know as the trolley problem.

Thomson was born in New York City on October 4th, 1929. Her mother was a Catholic of Czech heritage and her father was Jewish; the two met at a socialist summer camp. While her parents were religious, they didn’t impose their beliefs on her.

At the age of 14, Thomson converted to Judaism, after her mother died and her father remarried a Jewish woman two years later. As an adult, she wasn’t particularly religious but she did describe herself publicly as “feel[ing] concern for Israel and for the future of the Jewish people.”   

In 1950, Thomson graduated from Barnard College with a Bachelor of Arts (BA), majoring in philosophy, and then received a second BA in philosophy from Cambridge University in England in 1952. She then went on to receive her Masters in philosophy from Cambridge in 1956 and her PhD in philosophy from Columbia University in New York in 1959.   

Violinists, trolleys and philosophical work

Even though she had received her PhD from Columbia, the philosophy department wouldn’t keep her on as a professor, as it didn’t hire women. In 1962, she began working as an assistant professor at Barnard College, though she later moved with her husband, James Thomson, to Boston University and then to MIT, where she spent the majority of her career.

Thomson is most famous for her thought experiments, especially the violinist case and the trolley problem. In 1971, Thomson published her essay “A Defense of Abortion”, which presented a new kind of argument for why abortion is permissible, during a time of heightened debate in the US driven by the second-wave feminist movement. Arguments defending a woman’s right to an abortion circulated in feminist publications and eventually contributed to the Supreme Court’s ruling in Roe v. Wade (1973).

“Opponents of abortion commonly spend most of their time establishing that the foetus is a person, and hardly any time explaining the step from there to the impermissibility of abortion.” – Judith Jarvis Thomson

The famous violinist case asks us to imagine whether it is permissible to “unplug” ourselves from a famous violinist, even if being plugged in for just nine months is the only thing keeping them alive. As Thomas Nagel said, she “expresses very clearly the essentially negative character of the right to life, which is that it’s a right not to be killed unjustly, and not a right to be provided with everything necessary for life.” To this day, the violinist case is taught in classrooms and recognised as one of the most influential thought experiments arguing for the permissibility of abortion.

Thomson is also known for another famous thought experiment: the trolley problem. In her 1976 paper “Killing, Letting Die and the Trolley Problem,” Thomson articulates a thought experiment, first imagined by Philippa Foot, that encourages us to think about the moral relevance of killing people, as opposed to letting people die by doing nothing to save them.

In the trolley problem thought experiment, a runaway trolley will kill five innocent people unless someone pulls a lever. If the lever is pulled, the trolley will divert onto a different track and only one person will die. As an extension to Foot’s argument, Thomson asks us to consider whether there is something different about pushing a large man off a bridge, thereby killing him, to prevent five people from dying from the runaway trolley. Why does it feel different to pull a lever rather than push a person? Both acts have the same potential outcomes, yet our intuitions about them differ, pressing us to ask what distinguishes killing a person from letting a person die.

In the end, what Thomson finds is that, oftentimes, both the action and the outcome are morally relevant in our decision-making process.

Legacy

Thomson’s extensive philosophical career hasn’t gone unnoticed. In 2012, she was awarded the American Philosophical Association’s prestigious Quinn Prize for her “service to philosophy and philosophers.” In 2015, she was awarded an honorary doctorate by the University of Cambridge, and then in 2016 she was awarded another honorary doctorate from Harvard.   

Thomson continues to inspire women in philosophy. As one of her colleagues, Sally Haslanger, says: “she entered the field when only a tiny number of women even considered pursuing a career in philosophy and proved beyond doubt that a woman could meet the highest standards of philosophical excellence … She is the atomic ice-breaker for women in philosophy.” 


The cost of curiosity: On the ethics of innovation

The billionaire has become a ubiquitous part of life in the 21st century.

In the past many of the ultra-wealthy were content to influence politics behind the scenes in smoke-filled rooms or limit their public visibility to elite circles by using large donations to chisel their names onto galleries and museums. Today’s billionaires are not so discreet; they are more overtly influential in the world of politics, they engage in eye-catching projects such as space and deep-sea exploration, and have large, almost cult-like, followings on social media.

Underpinning the rise of this breed of billionaire is the notion that there is something special about the ultra-wealthy. That in ‘winning’ capitalism they have demonstrated not merely business acumen, but a genius that applies to the human condition more broadly. This ‘epistemic privilege’ casts them as innovators whose curiosity will bring benefits to the rest of us and the best thing that we normal people can do is watch on from a distance. This attitude is embodied in the ‘Silicon Valley Libertarianism’ which seeks to liberate technology from the shackles imposed on it by small-minded mediocrities such as regulation. This new breed seeks great power without much interest in checks on the corresponding responsibility.

Is this OK? Curiosity, whether about the physical world or the world of ideas, seems an uncontroversial virtue. Curiosity is the engine of progress in science and industry as well as in society. But curiosity has more than an instrumental value. Recently, Lewis Ross, a philosopher at the London School of Economics, has argued that curiosity is valuable in itself regardless of whether it reliably produces results, because it shows an appreciation of ‘epistemic goods’ or knowledge.  

We recognise curiosity as an important element of a good human life. Yet, it can sometimes mask behaviour we ought to find troubling.

Hubris obviously comes to mind. Curiosity coupled with an outsized sense of one’s capabilities can lead to disaster. Take, for example, Stockton Rush, the CEO of OceanGate and the man behind the ill-fated Titan submersible. He was quoted as saying: “I’d like to be remembered as an innovator. I think it was General MacArthur who said, ‘You’re remembered for the rules you break’, and I’ve broken some rules to make this. I think I’ve broken them with logic and good engineering behind me.” The result was the deaths of five people.

While hubris is a foible on a human scale, the actions of individuals cannot be seen in isolation from the broader social contexts and system. Think, for example, of the interplay between exploration and empire. It is no coincidence that many of those dubbed ‘great explorers’, from Columbus to Cook, were agents for spreading power and domination. In the train of exploration came the dispossession and exploitation of indigenous peoples across the globe.  

A similar point could be made about advances in technology. The industrial revolution was astonishing in its unshackling of the productive potential of humanity, but it also involved the brutal exploitation of working people. Curiosity and innovation need to be careful of the company they keep. Billionaires may drive innovation, but innovation is never without a cost and we must ask who should bear the burden when new technology pulls apart the ties that bind.  

Yet, even if we set aside issues of direct harm, problems remain. Billionaires drive innovation in a way that shapes what John Rawls called the ‘basic structure of society’. I recently wrote an article for International Affairs giving the example of the power of the Bill and Melinda Gates Foundation in global health. Since its inception the Gates Foundation has become a key player in global health. It has used its considerable financial and social power to set the agenda for global health, but more importantly it has shaped the environment in which global health research occurs. Bill Gates is a noted advocate of ‘creative capitalism’ and views the market as the best driver for innovation. The Gates Foundation doesn’t just pick the type of health interventions it believes to be worth funding, but shapes the way in which curiosity is harnessed in this hugely important field.  

This might seem innocuous, but it isn’t. It is an exercise of power. You don’t have to be Michel Foucault to appreciate that knowledge and power are deeply entwined. The way in which Gates and other philanthrocapitalists shape research naturalises their perspective. It shapes curiosity itself. The risk is that other approaches to global health get drowned out by the focus on the hi-tech, market-driven interventions favoured by Gates.

The ‘law of the instrument’ comes to mind: if the only tool you have is a hammer, it is tempting to treat everything as if it were a nail. By placing so much faith in the epistemic privilege of billionaires, we are causing a proliferation of hammers across the various problems of the world. Don’t get me wrong, there is a place for hammers, they are very useful tools. However, at the risk of wearing this metaphor out, sometimes you need a screwdriver.  

Billionaires may be gifted people, but they are still only people. They ought not to be worshipped as infallible oracles of progress, to be left unchecked. To do so exposes the rest of us to the risk of making a world where problems are seen only through the lens created by the ultra-wealthy – and the harms caused by innovation risk being dismissed merely as the cost of doing business.


If politicians can’t call out corruption, the virus has infected the entire body politic

Nothing can or should diminish the good done by Gladys Berejiklian. And nothing can or should diminish the bad. One does not cancel the other. Both are true. Both should be acknowledged for what they are.

Yet, in the wake of the Independent Commission Against Corruption’s finding that the former premier engaged in serious corrupt conduct, her political opponent, Premier Chris Minns, has refused to condemn the conduct that gave rise to this finding. Other politicians have gone further, putting personal and political allegiance ahead of sound principle to promote a narrative of denial and deflection.

Political corruption is like a highly contagious virus that infects the cells of the brain. It tends to target people who believe their superior virtue makes them immune to its effects. It protects itself from detection by convincing its hosts that they are in perfect ethical health, that the good they do outweighs the harm corruption causes, that noble intentions excuse dishonesty and that corruption only “counts” when it amounts to criminal conduct.

By any measure, Berejiklian was a good premier. Her achievements deserve to be celebrated. I am also certain that she is, at heart, a decent person who sincerely believes she always acted in the best interests of the people of NSW. By such means, corruption remains hidden – perhaps even from the infected person and those who surround them.

In painstaking legal and factual detail, those parts of the ICAC report dealing with Berejiklian reveal a person who sabotaged her own brilliant career, not least by refusing to avail herself of the protective measures built into the NSW Ministerial Code of Conduct. The code deals explicitly with conflicts of interest. In the case of a premier, it requires that a conflict be disclosed to other cabinet ministers so they can determine how best to manage the situation.

The code is designed to protect the public interest. However, it also offers protection to a conflicted minister. Yet, in violation of her duty and contrary to the public interest, Berejiklian chose not to declare her obvious conflict.

At the height of the COVID pandemic, did we excuse a person who, knowing themselves to be infected by the virus, continued to spread the disease because they were “a good person” doing “a good job”? Did we turn a blind eye to their disregard for public health standards just because they thought they knew better than anyone else? Did it matter that wilfully exposing others to risk was not a criminal offence? Of course not. They were denounced – not least by the leading politicians of the day.

But in the case of Berejiklian, what we hear in reply is the voice of corruption itself – the desire to excuse, to diminish, to deflect. Those who speak in its name may not even realise they do so. That is how insidious its influence tends to be. Its aim is to normalise deviance, to condition all whom it touches to think the indefensible is a mere trifle.

This is especially dangerous in a democracy. When our political leaders downplay conflicts of interest in the allocation of public resources, they reinforce the public perception that politicians cannot be trusted to use public power and resources solely in the public interest.

Our whole society, our economy, our future rest on the quality of our ethical infrastructure. It is this that builds and sustains trust. It is trust that allows society to be bold enough to take risks in the hope of a better future. We invest billions building physical and technical infrastructure. We invest relatively little in our ethical infrastructure. And so trust is allowed to decay. Nothing good can come of this.

When our ethical foundations are treated as an optional extra to be neglected and left to rot, then we are all the poorer for it.

What Gladys Berejiklian did is now in the past. What worries me is the uneven nature of the present response. Good people can make mistakes. Even the best of us can become the authors of bad deeds. But understanding the reality of human frailty justifies neither equivocation nor denial when the virus of corruption has infected the body politic.

 

This article was originally published in The Sydney Morning Herald.


Ethics explainer: Normativity

Have you ever spoken to someone and realised that they’re standing a little too close for comfort?

Personal space isn’t something we tend to actively think about; it’s usually an invisible and subconscious expectation or preference. However, when someone violates our expectations, those expectations suddenly become very clear. If someone stands too close to you while talking, you might become uncomfortable or irritated. If a stranger sits right next to you in a public place when there are plenty of other seats, you might feel annoyed or confused.

That’s because personal space is an example of a norm. Norms are communal expectations that are taken up by various populations, usually serving shared values or principles, that direct us towards certain behaviours. For example, the norm of personal space is an expectation that looks different depending on where you are.

In some countries, the norm is to keep your distance when talking to strangers, but to stand very close when talking to close friends, family or partners. In other countries, everyone can be relatively close, and in others still, not even close relationships should invade your personal space. This is an example of a norm that we follow subconsciously.

We don’t tend to notice what our expectation even is until someone breaks it, at which point we might think they’re disrespecting personal or social boundaries.

Norms are an embodiment of a phenomenon called normativity, which refers to the tendency of humans and societies to regulate or evaluate human conduct. Normativity pervades our daily lives, influencing our decisions, behaviours, and societal structures. It encompasses a range of principles, standards, and values that guide human actions and shape our understanding of what’s considered right or wrong, good or bad.

Norms can be explicit or implicit, originating from various sources like cultural traditions, social institutions, religious beliefs, or philosophical frameworks. Often norms are implicit because they are unspoken expectations that people absorb as they experience the world around them.

Take, for example, the norms of handshakes, kisses, hugs, bows, and other forms of greeting. Depending on your country, time period, culture, age, and many other factors, some of these will be more common and expected than others. Regardless, though, each of them has a purpose or function, like showing respect, affection or familiarity.

While these might seem like trivial examples, norms have historically played a large role in more significant things, like oppression. Norms are effectively social pressures, so conformity is important to their effect – especially in places or times where the flouting of norms results in some kind of public or social rebuke.

So, norms can sometimes be to the detriment of people who don’t feel their preferences or values reflected in them, especially when conformity itself is a norm. One of the major changes in western liberal society has been the loosening of norms – the ability for people to live more authentically as themselves.

Normative Ethics

Normativity is also an important aspect of ethical philosophy. Normative ethics is the philosophical inquiry into the nature of moral judgments and the principles that should govern human actions. It seeks to answer fundamental questions like “What should I do?”, “How should I live?” and “Which norms should I follow?”. Normative ethical theories provide frameworks for evaluating the morality of specific actions or ethical dilemmas.

Some normative ethical theories include:

  • Consequentialism, which says we should determine moral value based on the consequences of actions.
  • Deontology, which says we should determine moral value by looking at whether someone’s actions cohere with consistent duties or obligations.
  • Virtue ethics, which focuses on alignment with various virtues (like honesty, courage, compassion, respect, etc.) with an emphasis on developing dispositions that cultivate these virtues.
  • Contractualism, informed by the idea of the social contract, which says we should act in ways and for reasons that would be agreed to by all reasonable people in the same circumstances.
  • Feminist ethics, or the ethics of care, which says that we should understand and challenge the way that gender has operated to inform historical ethical beliefs and how it still affects our moral practices today.

Normativity extends beyond individual actions and plays a significant role in shaping societal norms, as we saw earlier, but also laws and policies. Normative views influence social expectations, moral codes, and legal frameworks, guiding collective behaviour and fostering social cohesion. Sometimes, as in the case of traffic laws, social norms and laws work in a circular way, reinforcing each other.

However, our normative views aren’t static or unchangeable.

Over time, societal norms and values evolve, reflecting shifts in normative perspectives (cultural, social, and philosophical). Often, we see shifting social norms culminating in the changing of outdated laws that once accurately reflected the normative views of their time, but no longer do.

While it’s ethically significant that norms shift over time and adapt to their context, it’s important to note that these changes often happen slowly. Eventually, changes in norms influence changes in laws, and this can often happen even more slowly, as we have seen with homosexuality laws around the world.


The ethics of drug injecting rooms

Should we allow people to use illicit drugs if it means that we can reduce the harm they cause? Or is doing so just promoting bad behaviour?

Illicit drug use costs the Australian economy billions of dollars each year, not to mention the associated social and health costs that it imposes on individuals and communities. For the last several decades, the policy focus has been on reducing illicit drug use, including making it illegal to possess and consume many drugs. 

Yet Australia’s response to illicit drug use is becoming increasingly aligned with the approach called ‘harm reduction’, which includes initiatives such as supervised injecting rooms and drug checking services like pill testing.

Harm reduction initiatives effectively suspend the illegality of drug possession in certain spaces to prioritise the safety and wellbeing of people who use drugs. Supervised injecting rooms allow people to bring in their illicit drugs, acquire clean injecting equipment and receive guidance from medical professionals. Similarly, pill testing creates a space for festival-goers to learn about the contents and potency of their drugs, tacitly accepting that they will be consumed. 

Harm reduction is best understood in contrast with an abstinence-based approach, which has the goal of ceasing drug use altogether. Harm reduction does not enforce abstinence, instead focusing on reducing the adverse events that can result from unsafe drug use such as overdose, death and disease. 

Yet there is a great deal of debate around the ethics of harm reduction, with some people seeing it as being the obvious way to minimise the impact of drug use and to help addicts battle dependence, while those who favour abstinence often consider it to be unethical in principle.

Much of the debate is muddied by the fact that those who embrace one ethical perspective often fail to understand the issue from the other perspective, resulting in both sides talking past each other. In order for us to make an informed and ethical choice about harm reduction, it’s important to understand both perspectives. 

The ethics of drug use

Deontology and consequentialism are two moral theories that inform the various views around drug use. Deontology focuses on what kinds of acts are right or wrong, judging them according to moral norms or whether they accord with things like duties and human rights.

Immanuel Kant famously argued that we should only act in ways that we would wish to become universal laws. Accordingly, if you think it’s okay to take drugs in one context, then you’re effectively endorsing drug use for everyone. So a deontologist might argue that people should not be allowed to use illicit drugs in supervised injecting rooms, because we would not want to allow drug use in all spaces. 

An abstinence-based approach embodies this reasoning in its focus on stopping illicit drug use through treatment and incarceration. It can also explain the concern that condoning drug use in certain spaces sends a bad message to the wider community, as argued by John Barilaro in the Sydney Morning Herald: 

“…it’d be your taxpayer dollars spent funding a pill-testing regime designed to give your loved ones and their friends the green light to take an illicit substance at a music festival, but not anywhere else. If we’re to tackle the scourge of drugs in our regional towns and cities, we need one consistent message.” 

However, deontology can also be inflexible when it comes to dealing with different circumstances or contexts. Abstinence-based approaches can apply the same norms to long-term drug users as they do to teenagers who have not yet engaged in illicit drug use. With morbidity and mortality rates still high for the former group, some may prefer an alternative approach that highlights this context and these consequences in its moral reasoning.

Harms and benefits

Enter consequentialism, which judges good and bad in terms of the outcomes of our actions. Harm reduction is strongly informed by consequentialism in asserting that the safety and wellbeing of people who use drugs are of primary concern. Whether drug use should be allowed in a particular space is answered by whether things like death, overdose and disease are expected to increase or decrease as a result. This is why scientific evaluations play an important role in harm reduction advocacy. As Stephen Bright argued in The Conversation: 

“…safe injecting facilities around the world ‘have been found to reduce the number of fatal and non-fatal drug overdoses and the spread of blood borne viral infections (including HIV and hepatitis B and C) both among people who inject drugs and in the wider community.’”

This approach also considers other potential societal harms, such as public injecting and the improper disposal of needles, as well as the burden on the health system, crime and satisfaction in the surrounding community.

This focus on consequences can also lead to the moral endorsement of some counter-intuitive initiatives. Because a consequentialist perspective will look at a wide range of the outcomes associated with a program, including the cost and harms caused by criminalisation, such as policing and incarceration, it can also conclude that some dangerous drugs should be decriminalised or legalised, if doing so would reduce their overall harm.

While these theories offer a useful way to begin thinking about Australia’s approach to drug use, there is of course nuance worth noting. A deontological abstinence-based approach assumes that establishing a drug-free society is even possible, which is highly contested by harm reduction advocates. Disagreement on this possibility seems to reflect intuitive beliefs about people and about drugs. This is perhaps part of why discussions surrounding harm reduction initiatives often become so polarised. Nevertheless, these two moral theories can help us begin to understand how people view quite different dimensions of drug treatment and policy as ethically important.


License to misbehave: The ethics of virtual gaming

Gaming was once just Pacman chasing pixelated ghosts through a digital darkness but, now, as we tumble headlong into virtual realities, ethical voids are being filled by the same ghosts humanity created IRL.

As a kid I sank an embarrassing amount of time into World of Warcraft online; my elven ranger was named Vereesa Windrunner and rode a Silver Covenant hippogryph. (Tl;dr: I was a cool chick with a bow and arrow and a flying horse.)

Once I came across two other players and we chatted – then they attacked, took all my things and left me for dead. I was mad because walking back to the nearest town without my trusty hippogryph would take a good half hour. 

Gamers call this behaviour “griefing”: using the game in unintended ways to destroy or steal things another player values – even when that is not the objective. It’s pointless, it’s petty; in other words, it’s being a huge jerk online.

The only way to cope with that big old dose of griefing was rolling back from my screen, turning off the console and making a cup of tea.  

But gaming is changing and virtual reality means logging off won’t be so simple – as exciting and daunting as that sounds. 

500 years ago, or whenever Pacman was invented, gaming largely amounted to you being a little dot moving around the screen, eating slightly smaller dots, and avoiding slightly larger dots. 

Game developers have endlessly fought to create more realistic, more immersive, and more genuine experiences ever since. 

Gaming now stands as its own art form, even inspiring seriously good television (try not to cry watching The Last of Us) – and the long-awaited leap into convincing, powerful VR (virtual reality) is now upon us.

But in gaming’s few short decades we have already begun to realise the ethical dilemmas that come with a digital avatar – and the new griefing is downright criminal.

Last year’s Dispatches investigation found virtual spaces were already rife with hate speech, sexual harassment, paedophilia, and avatars simulating sex in spaces accessible to children. 

In the Metaverse too – which Mark Zuckerberg hopes we will all inhabit – people allege they have been groped and abused, sexually harassed and assaulted.

In one shocking experience a woman even claimed she was “virtually gang raped” while in the Metaverse.

So how can we better prepare for the ethical problems we are going to encounter as more people enter this brave new world of VR gaming? How will the lines between fantasy and reality blur when we find ourselves truly positioned in a game? What does “griefing” become in a world where our online avatars and real lives overlap? And why should we trust the creepy tech billionaires who are crafting this world with our safety and security?

The Ethics Centre’s Senior Philosopher and avid gamer Dr Tim Dean spent his formative years playing (and being griefed) in the game, Ultima Online – so he’s just the right person to ask.

Let the games begin

VR still requires cumbersome headsets. The best known, Oculus, was bought by Facebook, but there are others.

Once you’re strapped in, you can turn your head left and right and you see the expansive computer generated landscape stretching out before you. 

Your hands are the fiery gauntlets of a wizard, your feet their armoured boots, you might have more rippling abs than you’re used to – but you get the point, you’re seeing what your character sees.

The space between yourself and your avatar quickly closes.

Tim says there is a kind of “verisimilitude” that makes it feel like you’re right there – for better or for worse.

“If you have a greater sense of identity with your avatar, it magnifies the feelings of violation,” he said.

Videogames were once an escape from reality, a way to unshackle yourself to the point where you can steal a car, rob a bank and even kill, but Dean suggests this escapism creates new moral quandaries once we become our characters.

“A fantasy can give you an opportunity to get some satisfaction where you might not otherwise have,” he said.

“But also, if your desires are unhealthy – if you want to be violent, if you want to take things from people, if you enjoy experiencing other people’s suffering – then a fantasy can also allow you to play that out.”

Make your own rules

Dean has hope, despite the grim headlines, saying “norms emerge” in these virtual moral voids, taking shape between users – or, as they used to be called, “people”.

“Where there are no explicit top down norms that prevent people from harming or griefing other people, sometimes bottom up community norms emerge,” he said.

Dean’s PhD is about the birth of norms: the path from a lawless, warring chaos to a self-regulating society, as humans learned about the impacts they were having on one another.

It sounds promising, but when Metaverse headsets begin at $1500 you’ll quickly realise the gates to the future open only to the privileged – often the wealthy white men who become the early adopters.

Mark Zuckerberg seems to have the same concerns.

Meta proposed a solution in the form of a virtual “personal bubble” to protect people from groping… but aside from it feeling very lame to walk around in a big safety bubble, it demonstrates that there’s no attempt to curb the bad behaviour in the first place.

In the real world, the solution to combating abuse has typically come in the form of including people from diverse backgrounds – more women, more people of colour – all sharing in the power structure.

For virtual reality – now is the time to have that discussion, not after everyone has a horror story. 

Dean thinks there are a few big questions yet to be answered:

Will people, en masse, act horribly in the virtual world?

How do you change behaviour in that world without imposing oppressive rules or… bubbles? Who gets to decide what those rules are? Would we be happy with the rules Meta comes up with? At least in a democracy, we have some power to choose who makes up the rules. That’s not the case with most technology companies.

And how does behaviour in the virtual world translate to our behaviour outside of the virtual world? 

Early geeks hoped the internet would be a virtual town square where people shared ideas – a vision that didn’t foresee racist chatbots, revenge porn and swatting.

Dean hopes the VR landscape might offer a clean slate, a chance at least to learn from the past and increase people’s capacity for empathy. 

“We can literally put on goggles and walk a mile in someone else’s shoes,” he said.

So maybe there’s hope yet.


The new normal: The ethical equivalence of cisgender and transgender identities

We’ve witnessed some pretty shocking hostility, anger, and violence directed towards the trans community recently, from anti-transgender rights activist Kellie-Jay Keen-Minshull’s national speaking tour, to an attack on peaceful LGBTQIA+ protestors by far-right Christian men.

This hostility isn’t new. Recall that the former Warringah candidate Katherine Deves derogatorily described Trans and Gender Diverse (TGD) people as “surgically mutilated and sterilised”. Or that gender dysphoria was described as a “social contagion” at a Gender Identity in Law forum at Hobart’s Town Hall, where there was lament that “women and girls would miss out on opportunities to reach elite levels” in professional sport.

It is no secret that transgender women, in particular, are at the forefront of the public discourse around trans rights, and worldwide the slogan “trans women are women” has been catching on. But there is still resistance when certain collectives retort that “women” just are “adult human females”. 

What drives these clashes is divergence over the metaphysics of sex and gender: what is ‘sex’, really? What is ‘gender’, really? The way we answer these questions shapes our estimation of which people can belong to which group, how people can behave, and in what ways society should recognise our identities. 

Some people think the relationship between sex and gender is straightforward: male sex = man, female sex = woman. But others think that the relationship is more complicated and unpredictable. A person might be assigned female at birth but develop a transmasculine gender identity. Such a person is transgender, and many believe it is both polite and metaphysically accurate to identify them as a man. Likewise, a person assigned male at birth could develop a transfeminine gender identity, and it is appropriate to identify her as a woman.

If this is what you think about sex and gender, then you believe that gender does not reduce to sex. Your view of gender is ‘trans inclusive’.  

Enter ‘cisgender’

To help us better articulate the gendered positions that are available to us, in the 1990s trans activists coined the term ‘cisgender’, or ‘cis’ for short, referring to people whose gender identity and expression match the biological sex they were assigned when they were born. Although ‘cisgender’ has been around a long time, it only went ‘mainstream’ relatively recently, being added to the Oxford English Dictionary in 2015.

I’ve come to believe that cisgender is an ethically necessary concept. And, speaking personally, it is a concept that also allows me to better understand myself and how we can be more inclusive of others.

As B. Aultman explains, “The terms man and woman, left unmarked, tend to normalise cisness — reinforcing the unstated ‘naturalness’ of being cisgender. Using the identifications of ‘cis man’ or ‘cis woman’, alongside the usage of ‘transman’ and ‘transwoman’, resists that norm reproduction and the marginalisation of trans people that such norms effect.”  

Putting the point another way, being non-trans is a common, rather than a ‘normal’, self-expression. Simultaneously, being trans is an uncommon, but nonetheless normal, expression. Introducing a positive marker of non-trans identity through the term ‘cisgender’ can have a really important social impact: it can help to destigmatise being trans.

For this reason, ‘cis’ is not just a label for me; it’s meaningful. It allows me to make sense of myself, my experiences, and my social surrounds in a deeply existential way.

Embodied identity 

I was raised on a diet of feminist materialism, or ‘difference feminism’, so how I see my self — my own being — is thoroughly embodied; the self is, first and foremost, fundamentally corporeal. But there is no simple line from embodied experience to social identity. All identities are socially constructed to an extent; we don’t decide who we are on our own but do so in response to how other people perceive and interact with us within a rich and layered social environment. And all subjectivity has its basis in the flesh, by virtue of us being embodied creatures.  

I believe it is perfectly plausible to acknowledge that different women have different bodies (whether assigned female at birth or not), and so will have qualitatively distinct experiences of being women.

Let me offer some examples. I have always been acutely aware that, because I look the way that I do, because I (appear to) have a certain body, I am treated differently to cisgender men. When I was growing up, people would sexually objectify me. This is something that I share with some, but not all, cisgender women, and some, but not all, transgender women. I have been harassed because I am a cisgender woman, and a trans woman is harassed because she is a trans woman. Our stories may have similarities, but they will also have distinctiveness.

Today, I spend most of my time dealing with sex-specific chronic pain and the associated medical sexism that goes with the territory. Since this type of pain (endometriosis and adenomyosis) is registered as “women’s pain” in our collective consciousness, the recognition that I am a cisgender woman is significant. It’s easy to comprehend that a transgender man would have a qualitatively distinct experience when he seeks healthcare for his endometriosis.  

It is imperative to acknowledge that there are certainly some structural privileges shared by both cis men and cis women. Acknowledging that we are cis is the first step in owning that privilege. However, the fact remains that cis women stand in a relation of both privilege and oppression: like trans women, cis women are oppressed because of their sex-gender configuration, whereas cis men are not.

Talking about myself as a ‘cis woman’, thinking of myself as a ‘cis woman’, keeps this rich complexity front and centre as I navigate my day-to-day life.

Owning our gender

So what are the implications of this? Must we always use a prefix (‘cis’ or ‘trans’)?  Do we need to make structural or institutional changes? And should all non-trans people recognise themselves as ‘cisgender’?  

We don’t always need a prefix. Say you want to compare the average earnings of women to the average earnings of men. In this instance, there is no need to distinguish between trans women and men, and cis women and men. Or, say you want to tally the members of a political party by gender; again, there is no reason to distinguish the trans members from the cis members. Whether we need a prefix is context-dependent. 

Let’s say you’re filling out an admission form for an in-patient stay in hospital. Hospitals are the sorts of places where both your sex and gender identity need to be foregrounded. That form could say, “What is your gender: Man; Woman; Other” and “What was your sex-assignment at birth: Male; Female; Other”. But that form could also say: “What is your gender identity: cisgender man; cisgender woman; trans man; trans woman; other”. It doesn’t really matter, so long as the hospital can obtain the information it needs. 

But do you have to identify as cis? In my experience, taking up the label in a self-recognitive way helps to decentre the view that trans people are somehow abnormal. I think that’s a good reason, especially in light of the stigma and violence confronting trans people today. I have found value in it – it helps me to understand my own struggles and reminds me not to universalise my experiences.

 

This article is part of our Why identity matters series.