Is it wrong to care about Ukraine more than other wars?

The invasion of Ukraine by Russian forces in the early hours of 24 February 2022 came as a violent shock to most onlookers.

Even after the visible buildup of Russian forces and weeks of sabre rattling by Russian President Vladimir Putin, the images of rockets striking apartment blocks and tanks rolling through city streets triggered an outpouring of support for Ukraine from people within Australia and around the world.  

But those who dwell on social media might have seen some voices express a different perspective: that the focus on Ukraine is suggestive of a darker underlying bias on the part of the onlookers; that the conflict has only gained so much attention because the victims of the war are white Europeans.

The argument suggests that if the victims were non-white, such as those involved in the ongoing wars in Yemen, Syria or Ethiopia, then the media and Western onlookers would be far less engaged. 

So is it wrong to focus our attention acutely on the war in Ukraine while investing less energy in conflicts in other parts of the world, especially if those conflicts affect non-white people? Is it OK to care more about a war in Europe than about a war in Africa or the Middle East?

We can unpack the argument in a few different ways. The least charitable interpretation is that it’s an accusation of racism, suggesting that people who care about the war in Ukraine only care because the victims are white. That might be true for some onlookers, but it’s highly doubtful that this applies to the majority of people.  

Rather, there are many reasons why someone in Australia might place great significance on the events unfolding in Ukraine. First of all is the shock factor, particularly given the relative stability and lack of open wars between nations in Europe since the end of the Second World War. This means the war in Ukraine is not just a concern for that region but is of tremendous global significance, with the potential to reshape the geopolitical landscape in a way that could affect people around the world. In this way, the war in Ukraine very much qualifies as being worthy of our attention due to its historical significance. 

There’s also the matter of familiarity, in the sense that Ukraine is a modern, industrialised and democratic nation that shares many political and moral values with countries like Australia. Beyond the human toll, the invasion represents an attack against values that most Australians cherish. 

Many Australians also have friends, family or coworkers with connections to Ukraine or other European countries who are impacted by the war. To them, the war is not just news of distant events but is felt in their immediate circles in a way that other conflicts might not be. Of course, there are many Australians who are also affected by conflicts in other parts of the world, such as in Syria and Yemen.

Finally, on a more mundane level, the war in Ukraine is likely to have a material impact on our lives through its destabilisation of the international economy, as well as on commodity prices such as wheat and oil, in a way that most other ongoing conflicts don’t. 

All this said, while the above can help explain why someone might take a more acute interest in the war in Ukraine, it doesn’t answer the ethical question of whether they should take greater interest in conflicts elsewhere at the same time. It’s possible that these explanations don’t justify an undue focus on one population experiencing conflict rather than another.  

A more charitable interpretation of the argument is that all suffering deserves our attention, all violence deserves our rebuke and all people involved in wars deserve our empathy. This stems from a universalist ethic, such as that promoted by philosophers like Peter Singer. It argues that all people deserve equal concern, no matter their background, ethnicity or nationality. Singer famously argued that if you’d dive into a pond to save a drowning child, even at the cost of muddying your clothes and being late to work, then you ought to be willing to incur a similar cost to save the life of a dying child on the other side of the world. 

From this perspective, the same reasons that justify our empathy towards the suffering of the Ukrainian people should similarly apply to the people of Yemen, Ethiopia, Syria and elsewhere. 

However, a truly universalist ethic is difficult, if not impossible, to fully achieve in practice. Few people would be willing to take the ethic to the extreme and treat strangers in distant countries with as much care and concern as they reserve for their own families. If this is so, then it is difficult to know where to draw the line around who deserves more or less of our concern.

Furthermore, everybody has a finite budget of time, emotional energy and power to act. It is not possible to be engaged with every conflict, every injustice and every instance of ethical wrongdoing taking place in the world, let alone to be able to act on them. It might be reasonable for people to choose where to invest their limited energy, or to preserve their energy for causes they can positively impact. That doesn’t mean they don’t care about other issues, only that they’ve chosen to do good where they can.

Which brings us to the most charitable interpretation of the argument, which is that any conflict should remind us of the horrors of war, and should motivate us to extend our empathy to people who are suffering anywhere in the world. The saturation media coverage of violence and destruction in Ukraine can help us better understand the plight of people living through other conflicts; the sight of embattled civilians in Kyiv can help us empathise with people living in Aleppo in Syria or Sanaa in Yemen.

It is unlikely that those promoting this argument on social media would want people to retreat from engaging with all news of conflict or suffering, whether it is in Europe or elsewhere. Rather, we might forgive people for having some bias in where they choose to direct their attention, while reminding them that all people are deserving of ethical consideration. Moral consideration need not be a zero-sum game; elevating our concern for one population doesn’t have to come at the expense of concern for others.  


Breakdowns and breakups: Euphoria and the moral responsibility of artists

Euphoria has been, for almost two years now, approaching a fever pitch of horror, addiction, heartbreak and self-destruction.

Its assembled cast of characters – most notably Rue (Zendaya), who starts the first season emerging straight out of rehab – sit constantly on the verge of total nervous collapse. They are always one bad party away from cataclysmic suffering, their lives hanging in a painful balance between “just about getting by” and “absolute devastation.”

Indeed, even if its utter melodrama means that Euphoria doesn’t actually reflect how high school is – who could cram in that much explosive melancholy before the lunch bell? – it certainly reflects how high school feels. There are few experiences more tortured and heightened than being a teenager, when your whole skin feels on fire, and possibilities splinter out from in front of your feet at every single moment. There is the sense of the future being unwritten; of your life being terrifyingly in your own hands.

But what does Euphoria’s constant hysteria do to its viewers, particularly its younger ones? If the devastation of adolescence really is that severe, then are artists failing, somehow, if they merely reflect that devastation? Should we ask our art to serve an instructional purpose; to pull us out of the traps we have built for ourselves? Or should art settle into those traps, letting the metal teeth sink into its skin?


Image: Euphoria, HBO

The Long History Of “Evil” Art

The question of the moral responsibility of artists is particularly pertinent in the case of Euphoria because of its emphasis on what have typically been viewed as “illicit” activities, from drug-taking to underage sex. These are – to the great detriment of a truly free society – taboo subjects, deemed inappropriate for discussion in public spaces, and condemned to be whispered, rather than shouted, about.

Indeed, there is a long history of conservatives and moral puritans railing against artworks that they feel ‘glamorize’ or somehow indulge bad and illegal behaviour. Take, for instance, the moral panic that gripped the United Kingdom in the ‘80s. Shortly after the advent of home video, the market became flooded with what were then termed “video nasties”, a wave of cheaply made horror films that actively marketed themselves on their moral repugnance. The point was how many taboos could be broken; into how much blood and muck and horror filmmakers could sink themselves, like half-formed and discarded babies being thrown to rest in a mud puddle.

This, to many pro-censorship thinkers at the time, was seen as a kind of moral crime – an unspeakable act, with the ability to influence and addle the minds of Britain’s younger generation. The demand from conservatives was that art be a way of modelling good ethical behaviour, and the worry, expressed furiously in the tabloids, was that any other alternative would lead to the breakdown of society itself.  

So no, the question as to whether art should be instructional is not new; the fear that it might lead the minds of the younger generation astray is far from fresh. Euphoria might seem relentlessly modern, with its lived-in cinematic voice and its restless politics. But it is part of a tradition of artworks that submerge themselves in darkness and despair; in vice and in what some, most of them on the right, deem immoral.

The Unspoken Becomes Spoken

The mistake made, however, by those who imagine such art is failing an explicit moral purpose – a kind of sentimental education – rests on an outdated and functionally useless understanding of morality. These critics imagine that there is just one way to live well. They believe in uncrossable boundaries of taboo and immorality; that there are iron-wrought moral rules, and that any art that breaks those rules will lead to some kind of negative and harmful shifting of what is acceptable amongst the citizens of any democratic society.

But why should we believe that morality is so strict? We would do well to move away from an objective, centralised view of morality, where there exists a list of rules, printed in indelible ink somewhere, that are inflexible and pre-ordained. Societally, as well as personally, change is the only constant. If we abide by a set of constructed ethical principles that do not reflect that change, we will be forever torn between a possible future and a weighty past, bogged down in a system of conduct that no longer represents the complexity of what it means to be human. 

If we have any true moral imperative, it is to constantly be in the process of testing and re-shaping our morals. It was John Stuart Mill who developed a similar concept of truth – who believed that we could only remain honest, and democratic, if we were forever challenging that which we had taken for granted. Art is a process of this moral re-shaping. Great art need not shy away from that which we hold to be “good” or “right”, or, on the flipside, “harmful” and “taboo.” 

It is not that art needs to be amoral, free from ethical concerns, with artists resisting any urge to provide some form of moral instruction – it is that we need to let go of the idea that this moral instruction can only take the form of propping up old and unchanging notions of goodness. The immoral and the moral are only useful concepts if they teach us something about how to live, and they will only teach us something about how to live if we make sure they are forever being tested and examined.

Finding Yourself


Image: Euphoria, HBO

This is what Euphoria does. By basking in that which has been taken as illicit – in particular, the sex and chemical lives of America’s teenagers – the show makes the unspoken spoken. It draws into focus an outdated and ancient view of the good life, and challenges us to stare our conceptions of self-perpetuation and self-destruction in the face.  

Rue, forever in the process of re-shaping herself in the shadow of her great addiction, makes mistakes. Cassie (Sydney Sweeney), Euphoria’s shaking, panic-addled heart, makes even more. Both of them stray from pre-written social conceptions of the “good girl”, dissolving an ancient and harmful angel/whore dichotomy, and proving that there are no static boundaries between what is admirable and what is abhorrent. 

Just as the show itself skirts back and forth across the line between our notions of the ethical and the immoral, so too do these characters forever find themselves testing the limits of what is good for them, and those around them. They are flawed, vulnerable people. But in these flaws – in this very notion of trembling possibility, the rules of good conduct forever being written in sand – they do provide us with a moral education. Not one that rests on simplistic notions of what we should do, and when. But one that proves that as both a society, and as individuals in that society, we should always be taking that which has been shrouded in darkness and throwing it – sometimes painfully – into the light.


Hallucinations that help: Psychedelics, psychiatry, and freedom from the self

Dr. Chris Letheby, a pioneer in the philosophy of psychedelics, is looking at a chair. He is taking in its individuated properties – its colour, its shape, its location – and all the while, his brain is binding these properties together, making them parts of a collective whole.

This, Letheby explains, is also how we process the self. We know that there are a number of distinct properties that make us who we are: the sensation of being in our bodies, the ability to call to mind our memories or to follow our own trains of thought. But there is a kind of mental glue that holds these sensations together, a steadfast, mostly uncontested belief in the concrete entity to which we refer when we use the word “me.”

“Binding is a theoretical term,” Letheby explains. “It refers to the integration of representational parts into representational wholes. We have all these disparate representations of parts of our bodies and who we were at different points in time and different roles we occupy and different personality traits. And there’s a very high-level process that binds all of these into a unified representation; that makes us believe these are all properties and attributes of one single thing. And different things can be bound to this self model more tightly.”

Freed from the Self

So what happens when these properties become unbound from one another – when we lose a cohesive sense of who we are? This, after all, is the sensation that many experience when taking psychedelic drugs. The “narrative self” – the belief that we are an individuated entity that persists through time – dissolves. We can find ourselves at one with the universe, deeply connected to those around us.

Perhaps this sounds vaguely terrifying – a kind of loss. But as Letheby points out, this “ego dissolution” can have extraordinary therapeutic results in those who suffer from addiction, or experience deep anxiety and depression.

“People can get very harmful, unhealthy, negative forms of self-representation that become very rigidly and deeply entrenched,” Letheby explains.

“This is very clear in addiction. People very often have all sorts of shame and negative views of themselves. And they find it very often impossible to imagine or to really believe that things could be different. They can’t vividly imagine a possible life, a possible future in which they’re not engaging in whatever the addictive behaviours are. It becomes totally bound in the way they are. It’s not experienced as a belief, it’s experienced as reality itself.”

This, Letheby and his collaborator Philip Gerrans write, is key to the ways in which psychedelics can improve our lives. “Psychedelics unbind the self model,” he says. “They decrease the brain’s confidence in a belief like, ‘I am an alcoholic’ or ‘I am a smoker’. And so for the first time in perhaps a very long time [addicts] are able to not just intellectually consider, but to emotionally and experientially imagine a world in which they are not an alcoholic. Or if we think about anxiety and depression, a world in which there is hope and promise.”

A comforting delusion?

Letheby’s work falls into a naturalistic framework: he defers to our best science to make sense of the world around us. This is an unusual position, given some philosophers have described psychedelic experiences as being at direct odds with naturalism. After all, a lot of people who trip experience what have been called “metaphysical hallucinations”: false beliefs about the “actual nature” of the universe that fly in the face of what science gives us reason to believe.

For critics of the psychedelic experience then, these psychedelic hallucinations can be described as little more than comforting falsehoods, foisted upon the sick – whether mentally or physically – and dying. They aren’t revelations. They are tricks of the mind, and their epistemic value remains under question.

But Letheby disagrees. He adopts the notion of “epistemic innocence” from the work of philosopher Lisa Bortolotti: the belief that some falsehoods can actually make us better epistemic agents. “Even if you are a naturalist or a materialist, psychedelic states aren’t as epistemically bad as they have been made out to be,” he says, simply. “Sometimes they do result in false beliefs or unjustified beliefs … But even when psychedelic experiences do lead people to false beliefs, if they have therapeutic or psychological benefits, they’re likely to have epistemic benefits too.”

To make this argument, Letheby returns again to the archetype of the anxious or depressed person. This individual, when suffering from their illness, commonly retreats from the world, talking less to their friends and family, and thus harming their own epistemic faculties – if you don’t engage with anyone, you can’t be told that you are wrong, can’t be given reasons for updating your beliefs, can’t search out new experiences.

“If psychedelic states are lifting people out of their anxiety, their depression, their addiction and allowing people to be in a better mode of functioning, then my thought is, that’s going to have significant epistemic benefits,” Letheby says. “It’s going to enable people to engage with the world more, be curious, expose their ideas to scrutiny. You can have a cognition that might be somewhat inaccurate, but can have therapeutic benefits, practical benefits, that in turn lead to epistemic benefits.”

As Letheby has repeatedly noted in his work, the study of the psychiatric benefits of psychedelics is in its early phases, but the future looks promising. More and more people are experiencing these hallucinations – these new, critical beliefs that unbind the self – and more and more people are getting well. There is, it seems, a possible world where many of us are freed from the rigid notions of who we are and what we want, unlocked from the cage of the self, and walking, for the first time in a long time, in the open air.


Meet Dr Tim Dean, our new Senior Philosopher

Ethics is about engaging in conversations to understand different perspectives and ways in which we can approach the world.  

Which means we need a range of people participating in the conversation. 

That’s why we’re excited to share that we have recently appointed Dr Tim Dean as our Senior Philosopher. An award-winning philosopher, writer, speaker and honorary associate with the University of Sydney, Tim has developed and delivered philosophy and emotional intelligence workshops for schools and businesses across Australia and the Asia Pacific, including Meriden and St Mark’s high schools, The School of Life, Small Giants and businesses including Facebook, Commonwealth Bank, Aesop, Merivale and Clayton Utz. 

We sat down with Tim to discuss his views on morality, social media, cancel culture and what ethics means to him.

 

What drew you to the study of philosophy?

Children are natural philosophers, constantly asking “why?” about everything around them. I just never grew out of that tendency, much to the chagrin of my parents and friends. So when I arrived at university, I discovered that philosophy was my natural habitat, furnishing me with tools to ask “why?” better, and revealing the staggering array of answers that other thinkers have offered throughout the ages. It has also helped me to identify a sense of meaning and purpose that drives my work.

What made you pursue the intersection of science and philosophy?

I see science and philosophy as continuous. They are both toolkits for understanding the world around us. In fact, technically, science is a sub-branch of philosophy (even if many scientists might bristle at that idea) that specialises in questions that are able to be investigated using empirical tools, hence its original name of “natural philosophy”. I have been drawn to science as much as philosophy throughout my life, and ended up working as a science writer and editor for over 10 years. And my study of biology and evolution transformed my understanding of morality, which was the subject of my PhD thesis.

How does social media skew our perception of morals?

If you wanted to create a technology that gave a distorted perception of the world, that encouraged bad faith discourse and that promoted friction rather than understanding, you’d be hard pressed to do better than inventing social media. Social media taps into our natural tendencies to create and defend our social identity; it triggers our natural outrage response by feeding us an endless stream of horrific events; and it rewards us with greater engagement when we go on the offensive, while preventing us from engaging with others in a nuanced way. In short, it pushes our moral buttons, but not in a constructive way. So even though social media can do good, such as by raising awareness of previously marginalised voices and issues, overall I’d call it a net negative for humanity’s moral development.

How do you think the pandemic has changed the way we think about ethics?

The COVID-19 pandemic has both expanded and shrunk our world. On the one hand, lockdowns and border closures have grounded us in our homes and our local communities, which in many cases has been a positive thing, as people get to know their neighbours and look out for each other. On the other, it has expanded our world as we’ve been stuck behind screens watching a global tragedy unfold, often without any real power to fix it. At the same time, it has made us more sensitive to how our individual actions affect our entire community, and has caused us to think about our obligations to others. In that sense, it has brought ethics to the fore.

Tell us a little about your latest book ‘How We Became Human, And Why We Need to Change’?

I’ve long been fascinated by the story of how we evolved from being a relatively anti-social species of ape a few million years ago to being the massively social species we are today. Morality has played a key part in that story, helping us to have empathy for others, motivating us to punish wrongdoing and giving us a toolkit of moral norms that can guide our community’s behaviour. But in studying this story of moral evolution, I came to realise that many of the moral tendencies we have and many of the moral rules we’ve inherited were designed in a different time, and they often cause more harm than good in today’s world. My book explores several modern problems, like racism, sexism, religious intolerance and political tribalism, and shows how they are all, in part, products of our evolved nature. I also argue that we need to update our moral toolkit if we want to live and thrive in a modern, globalised and diverse world, and that means letting go of past solutions and inventing new ones.

How do you think the concepts of right and wrong will change in the coming years?

The world is changing faster than ever before. It’s also more diverse and fragmented than ever before. This means that the moral rules that we live by and the values that drive us are also changing faster than ever before – often faster than many people can keep up. Moral change will only continue, especially as new generations challenge the assumptions and discard the moral baggage of past generations. We should expect that many things we took for granted will be challenged in the coming decades. I foresee a huge challenge in bringing people along with moral change rather than leaving them behind.

What are your thoughts on the notion of ‘cancel culture’?

There are no easy answers when it comes to the limits of free speech. We value free speech to the degree that it allows us to engage with new ideas, seek the truth, and to be able to express ourselves and hear from others. But that freedom comes at a cost, particularly when it allows bad faith speech to spread misinformation, to muddy the truth, or to dehumanise others. There are some types of speech that ought to be shut down, but we must be careful how the power to shut down speech is used. In the same way that some speech can be in bad faith, so too can efforts to shut it down. Some instances of “cancelling” might be warranted, but many are a symptom of mob culture that seeks to silence views the mob opposes rather than prevent bad kinds of speech. Sometimes it’s motivated by a sense that a speaker is not just mistaken but morally corrupt, which prevents people from engaging with them and attempting to change their views. This is why one thing I advocate strongly for is rebuilding social capital, or the trust and respect that enables good faith discourse to occur at all. It’s only when we have that trust and respect that we will be able to engage in good faith rather than feel like we need to resort to cancelling or silencing people.

Lastly, the big one – what does ethics mean to you?

Ethics is what makes our species unique. No other creature can live alongside and cooperate with other individuals on the scale that we do. This is all made possible by ethics, which is our ability to consider how we ought to behave towards others and what rules we should live by. It’s our superpower; it’s what has enabled our species to spread across the globe. But understanding and engaging with ethics, figuring out our obligations to others, and adapting our sense of right and wrong to a changing world, is our greatest and most enduring challenge as a species.


Ethics Explainer: Beauty

Research shows that physical appearance can affect everything from the grades of students to the sentencing of convicted criminals – are looks and morality somehow related?

Ancient philosophers spoke of beauty as a supreme value, akin to goodness and truth. The word itself alluded to far more than aesthetic appeal, implying nobility and honour – its counterpart, ugliness, made all the more shameful in comparison.

From the writings of Heraclitus to Plato, beautiful things were argued to be vital links between finite humans and the infinite divine. Indeed, across various cultures and epochs, beauty was praised as a virtue in and of itself; to be beautiful was to be good and to be good was to be beautiful.

When people first began to ask, ‘what makes something (or someone) beautiful?’, they came up with some weird ideas – think Pythagorean triangles and golden ratios as opposed to pretty colours and chiselled abs. Such aesthetic ideals of order and harmony contrasted with the chaos of the time and are present throughout art history.


Leonardo da Vinci, Vitruvian Man, c.1490 

These days, a more artificial understanding of beauty as a mere observable quality shared by supermodels and idyllic sunsets reigns supreme. 

This is because the rise of modern science necessitated a reappraisal of many important philosophical concepts. Beauty lost relevance as a supreme value of moral significance in a time when empirical knowledge and reason triumphed over religion and emotion.  

Yet, as the emergence of a unique branch of philosophy, aesthetics, revealed, many still wondered what made something beautiful to look at – even if, in the modern sense, beauty is only skin deep.

Beauty: in the eye of the beholder?

In the ancient and medieval era, it was widely understood that certain things were beautiful not because of how they were perceived, but rather because of an independent quality that appealed universally and was unequivocally good. According to thinkers such as Aristotle and Thomas Aquinas, this was determined by forces beyond human control and understanding. 

Over time, this idea of beauty as entirely objective became demonstrably flawed. After all, if this truly were the case, then controversy wouldn’t exist over whether things are beautiful or not. For instance, to some, the Mona Lisa is a truly wonderful piece of art – to others, evidence that Da Vinci urgently needed an eye check.  

Leonardo da Vinci, The Mona Lisa, 1503, Photographed at The Louvre, present day 

Consequently, definitions of beauty that accounted for these differences in opinion began to gain credence. David Hume famously quipped that beauty “exists merely in the mind which contemplates”. To him and many others, the enjoyable experience associated with the consumption of beautiful things was derived from personal taste, making the concept inherently subjective.  

This idea of beauty as a fundamentally pleasurable emotional response is perhaps the closest thing we have to a consensus among philosophers with otherwise divergent understandings of the concept. 

Returning to the debate at hand: if beauty is not at least somewhat universal, then why do hundreds of thousands of people every year visit art galleries and cosmetic surgeons in pursuit of it? How can advertising companies sell us products on the premise that they will make us more beautiful if everyone has a different idea of what that looks like? Neither subjectivist nor objectivist accounts of the concept seem to adequately explain reality.

According to philosophers such as Immanuel Kant and Francis Hutcheson, the answer must lie somewhere in the middle. Essentially, they argue that a mind that can distance itself from its own individual beliefs can also recognize if something is beautiful in a general, objective sense. Hume suggests that this seemingly universal standard of beauty arises when the tastes of multiple, credible experts align. And yet, whether or not this so-called beautiful thing evokes feelings of pleasure is ultimately contingent upon the subjective interpretation of the viewer themselves. 

Looking good vs being good

If this seemingly endless debate has only reinforced your belief that beauty is a trivial concern, then you are not alone! During modernity and postmodernity, philosophers largely abandoned the concept in pursuit of more pressing matters – read: nuclear bombs and existential dread. Artists also expressed their disdain for beauty, perceived as a largely inaccessible relic of tired ways of thinking, through an expression of the anti-aesthetic. 

Marcel Duchamp, Fountain, 1917

Nevertheless, we should not dismiss the important role beauty plays in our day-to-day life. Whilst its association with morality has long been out of vogue among philosophers, this is not true of broader society. Psychological studies continually observe a ‘halo effect’ around beautiful people and things that sees us interpret them in a more favourable light – leading attractive people, for instance, to be paid higher wages and receive better loans than their less attractive peers.

Social media makes it easy to feel that we are not good enough, particularly when it comes to looks. Perhaps not coincidentally, we are, on average, increasing our relative spending on cosmetics, clothing, and other beauty-related goods and services.

Turning to philosophy may help us avoid getting caught in a hamster wheel of constant comparison. From a classical perspective, the best way to achieve beauty is to be a good person. Or maybe you side with the subjectivists, who tell us that being beautiful is meaningless anyway. Either way, beauty is complicated, ever-important, and wonderful – so long as we do not let it unfairly cloud our judgements.

 

Step through the mirror and examine what makes someone (or something) beautiful and how this impacts all our lives. Join us for the Ethics of Beauty on Thursday 29 February 2024 at 6:30pm.


Why morality must evolve

If you read the news or spend any time on social media, then you’d be forgiven for thinking that there’s a lack of morality in the world today.

There is certainly no shortage of serious social and moral problems in the world. People are discriminated against just because of the colour of their skin. Many women don’t feel safe in their own home or workplace. Over 450 million children around the world lack access to clean water. There are whole industries that cause untold suffering to animals. New technologies like artificial intelligence are being used to create autonomous weapons that might slip from our control. And people receive death threats simply for expressing themselves online.

It’s easy to think that if only morality featured more heavily in people’s thinking, then the world would be a better place. But I’m not convinced. This might sound strange coming from a moral philosopher, but I have come to believe that the problem we face isn’t a lack of morality, it’s that there’s often too much. Specifically, too much moral certainty.

The most dangerous people in the world are not those with weak moral views – they are those with unwavering moral convictions. They are the ones who see the world in black and white, as if they are fighting a war between good and evil. They are the ones who are willing to kill or die to bring about their vision of utopia.

That said, I’m not quite ready to give up on morality yet. It sits at the heart of ethics and guides how we decide on what is good and bad. It’s still central to how we live our lives. But it also has a dark side, particularly in its power to inspire rigid black and white thinking. And it’s not just the extremists who think this way. We are all susceptible to it.

To show you what I mean, let me ask you what will hopefully be an easy question:

Is it wrong to murder someone, just because you don’t like the look of their face?

I’m hoping you will say it is wrong, and I’m going to agree with you, but when we look at what we mean when we respond this way, it can help us understand how we think about right and wrong.

When we say that something like this is wrong, we’re usually not just stating a personal preference, like “I simply prefer not to murder people, but I don’t mind if you do so”. Typically, we’re saying that murder for such petty reasons is wrong for everyone, always.

Statements like these seem to be different from expressions of subjective opinion, like whether I prefer chocolate to strawberry ice cream. It seems like there’s something objective about the fact that it’s wrong to murder someone because you don’t like the look of their face. And if someone suggests that it’s just a subjective opinion – that allowing murder is a matter of personal taste – then we’re inclined to say that they’re just plain wrong. Should they defend their position, we might even be tempted to say they’re not only mistaken about some basic moral truth, but that they’re somehow morally corrupt because they cannot appreciate that truth.

Murdering someone because you don’t like the look of their face is just wrong. It’s black and white.

This view might be intuitively appealing, and it might be emotionally satisfying to feel as if we have moral truth on our side, but it has two fatal flaws. First, morality is not black and white, as I’ll explain below. Second, it stifles our ability to engage with views other than our own, which we are bound to do in a large and diverse world.

So instead of solutions, we get more conflict: left versus right, science versus anti-vaxxers, pro-life versus pro-choice, free speech versus cancel culture. The list goes on.

Now, more than ever, we need to get away from this black and white thinking so we can engage with a complex moral landscape, and be flexible enough to adapt our moral views to solve the very real problems we face today.

The thing is, it’s not easy to change the way we think about morality. It turns out that it’s in our very nature to think about it in black and white terms.

As a philosopher, I’ve been fascinated by the story of where morality came from, and how we evolved from being a relatively anti-social species of ape a few million years ago to being the massively social species we are today.

Evolution plays a leading role in this story. It didn’t just shape our bodies, like giving us opposable thumbs and an upright stance. It also shaped our minds: it made us smarter, it gave us language, and it gave us psychological and social tools to help us live and cooperate together relatively peacefully. We evolved a greater capacity to feel empathy for others, to want to punish wrongdoers, and to create and follow behavioural rules that are set by our community. In short: we evolved to be moral creatures, and this is what has enabled our species to work together and spread to every corner of the globe.

But evolution often takes shortcuts. It often only makes things ‘good enough’ rather than optimising them. I mentioned we evolved an upright stance, but even that was not without cost. Just ask anyone over the age of 40 years about their knees or lower backs.

Evolution’s ‘good enough’ solution for how to make us be nice to each other tens of thousands of years ago was to appropriate the way we evolved to perceive the world. For example, when you look at a ripe strawberry, what do you see? I’m guessing that for most of you – assuming you are not colour blind – you see it as being red. And when you bite into it, what do you taste? I’m guessing that you experience it as being sweet.

However, this is just a trick that our mind plays on us. There really is no ‘redness’ or ‘sweetness’ built into the strawberry. A strawberry is just a bunch of chemicals arranged in a particular way. It is our eyes and our taste buds that interpret these chemicals as ‘red’ or ‘sweet’. And it is our minds that trick us into believing these are properties of the strawberry rather than something that came from us.

The Scottish philosopher David Hume described this tendency, now known as “projectivism”: we project our subjective experience onto the world, mistaking it for an objective feature of the world.

We do this in all sorts of contexts, not just with morality. This can help explain why we sometimes mistake our subjective opinions for being objective facts. Consider debates you may have had around the merits of a particular artist or musician, or that vexed question of whether pineapple belongs on pizza. It can feel like someone who hates your favourite musician is failing to appreciate some inherently good property of their music. But, at the end of the day, we will probably acknowledge that our music tastes are subjective, and it’s us who are projecting the property of “awesomeness” onto the sounds of our favourite song.

It’s not that different with morality. As the American psychologist, Joshua Greene, puts it: “we see the world in moral colours”. We absorb the moral values of our community when we are young, and we internalise them to the point where we see the world through their lens.

As with colours, we project our sense of right and wrong onto the world so that it looks like it was there all along, and this makes it difficult for us to imagine that other people might see the world differently.

In studying the story of human physical, psychological and cultural evolution, I learnt something else. While this is how evolution shaped our minds to see right and wrong, it’s not how morality has actually developed. Even though we’re wired to see our particular version of morality as being built into the world, the moral rules that we live by are really a human invention. They’re not black and white, but come in many different shades of grey.

You can think of these rules as being a cultural tool kit that sits on top of our evolved social nature. These tools are something that our ancestors created to help them live and thrive together peacefully. They helped to solve many of the inevitable problems that emerge from living alongside one another, like how to stop bullies from exploiting the weak, or how to distribute food and other resources so everyone gets a fair share.

But, crucially, different societies had different problems to solve. Some societies were small and roamed across resource-starved areas surrounded by friendly bands. Their problems were very different from those of a settled society defending its farmlands from hostile raiders. And their problems differed even more from those of a massive post-industrial state with a highly diverse population. Each had their own set of challenges to solve, and each came up with different solutions to suit their circumstances.

Those solutions also changed and evolved as their societies did. As social, environmental, technological and economic circumstances changed, societies faced new problems, such as rising inequality, conflict between diverse cultural groups or how to prevent industry from damaging the environment. So they had to come up with new solutions.

For an example of moral evolution, consider how attitudes towards punishing wrongdoing have varied among different societies and changed over time. Let’s start with a small-scale hunter-gatherer society, like that of the !Kung, dwelling in the Kalahari desert a little over a century ago.

If one member of the band started pushing others around, perhaps turning to violence to get their way, there were no police or law courts to keep them in line. Instead, it was left to individuals to mete out their own justice. That’s why, if a bully murdered a family member, it was not merely permitted but a moral duty for the family to kill the murderer. Revenge – and the threat thereof – was an important and effective tool in the !Kung moral toolkit.

We can see that revenge also played a similar role in many moral systems around the world and throughout history. There are countless tales that demonstrate the ubiquity of revenge in great works like the Iliad, Mahabharata and Beowulf. In the Old Testament, God tells Moses the famous line that allows his people to take an eye for an eye and tooth for a tooth.

But as societies changed, as they expanded, as people started interacting with more strangers, it turned out that revenge caused more problems than it solved. While it could be managed and contained in small-scale societies like the !Kung, in larger societies it could lead to feuds where extended family groups might get stuck in a cycle of counter-retaliation for generations, all started by a single regrettable event.

As societies changed, they created new moral tools to solve the new problems they faced, and they often discarded tools that no longer worked. That’s why the New Testament advises people to reject revenge and “turn the other cheek” instead.

Modern societies have the resources and institutions to outsource punishment to a specialised class of individuals in the form of police and law courts. When these institutions are well run and can be trusted, they have proven to be a highly effective means of keeping the peace, to the point that revenge and vigilante justice are now frowned upon.

This is moral evolution. This is how different societies have adapted to new ways of living, solving the new social problems that emerge as their societies and circumstances change.

(I must stress that this does not make !Kung morality inferior or less evolved than other societies. Similar to how all creatures alive today are equally evolved, so too are all extant moral systems. My point is not that there is a linear progression from small-scale to large-scale societies, from primitive to civilised, it’s that any moral system needs to fit the circumstances that the society finds itself in and change as those circumstances change.)

But there’s a catch: moral evolution has typically moved painfully slowly, not least because our innate tendency towards black and white thinking has stifled moral innovation.

This wasn’t such a problem 10,000 years ago, when living conditions would have remained relatively static for generations. In this case, there was less pressure to evolve and adapt the moral toolkit. But the world today is not like this. It is changing faster than ever before, and so we are forced to adapt faster than our minds might be comfortable with.

This means pushing back on our black and white tendencies and looking at morality through a new lens. Instead of seeing it as something that is fixed, we can look at it as a toolkit that we adapt to the problems at hand.

It also means acknowledging that many of the social and moral problems we face today have no single perfect solution. Many come in shades of grey, like deciding if free speech should give people a right to offend, or to what extent we should tolerate intolerance, or under what circumstances we should allow people to end their own lives. There is almost certainly no single set of moral rules that will solve these problems in every circumstance without also causing undesirable consequences.

On the other hand, we should also acknowledge that there are many social and moral problems that have more than one good solution. Consider one of the questions that sits at the centre of ethics: what constitutes a good life? There are likely going to be many good answers to that question. This remains the case even if some answers come into conflict with others, such as one perspective stressing individual freedom while another stresses greater community and interpersonal obligations.

This full-spectrum evolutionary perspective on morality can also help explain why there is such a tremendous diversity of conflicting moral viewpoints in the world today. For a start, many cultures are still wielding tools that were made to solve problems from a different time, and they have carried them into today’s world – such as tools that prioritise in-group loyalty at the cost of breeding suspicion of outsiders. Some conservative cultures are reluctant to give these tools up, even if they are obsolete.

Other tools were never very good at their job, or they were co-opted by an elite for their own benefit to the detriment of others, such as tools that subjugated women or disenfranchised certain minorities.

Other tools are responses to different conceptions of the good life. Some represent the trade-off that is built into many moral problems. And there is constant production of new and experimental tools that have yet to be tested. Some will prove to work well and may be kept, while others will fall short, or cause more problems than they solve, and will be dropped.

One thing is clear: the world we are living in today is unlike anything our distant ancestors faced. It is larger, more complex, more diverse and more interconnected than it has ever been, and we are hearing from voices that once were silenced. The world is changing faster than ever before.

This might be taxing for our slow-moving black and white minds – and we should forgive ourselves and others for being human – but we must adapt our moral views to the world of today, and not rely on the old solutions of yesterday.

This calls for each of us to be mindful of how we think about morality, our black and white tendencies, and whether the views we inherited from our forebears are the best ones to solve the serious problems we face today. It also means we must rethink morality as being a human invention, a toolkit that can be adapted as the world changes, with many new problems and many tools that can solve them.

What matters today is not clinging to the moral views we were raised with, but looking at each problem, listening to each other, and working together to find the best solution. What we need now is genuine moral evolution.


Why we should be teaching our kids to protest

When the Prime Minister says classrooms shouldn’t be political and students should stay in school, that’s an implicit argument about what kind of citizens he thinks we should have.

It’s not unreasonable. The type of citizen who has not gone out to protest will have certain habits and dispositions that are desirable. Hard-working, diligent, focused. However, the question about what it means to be a citizen and how to become one is complicated and not one that any one person has the truth about.  

Let’s go back to basics though. What’s the point of education? It’s to prepare children for life. Many would claim it’s to get children ready for work, but if that were the case we would put them in training facilities rather than schools. Our education systems have many tasks – to make children work-ready, to be sure, but also to develop their personhood, to allow them to engage in society, to help them flourish. Every part of the curriculum, from its General Capabilities like critical and creative thinking to discipline-specific areas like technologies, is designed to provide young people with the skills, knowledge and dispositions necessary for being 21st century citizens.

What many don’t realise is that learning what it means to be a citizen isn’t localised to the curriculum. Interactions with parents, teachers, with each other, with news and social media – all of these contribute to the definition of a citizen.

Every time a politician says that children should be seen and not heard, that’s an indication of the type of citizen they want.

Most politicians don’t want kids out protesting, after all – not only is it disruptive to whatever is scheduled in school that day, it looks really bad on the news for them. Protests are bad news for politicians in general, and if children are involved, there’s no good way to spin it.

But we do want children to learn how to protest. We want them to be able to see corruption, to have discussions and heated debates, and to embrace complexity. Everyone should have the ability to say their piece and be heard in a democracy. This is something we’ve already recognised: persuasion has been a major part of education for years.

However, when we talk about this, we need to recognise that we aren’t just talking about skills or knowledge. This isn’t putting together a pithy response or clever tweet. It’s about being capable of contributing to public discourse, and for that, we need children to hold certain intellectual virtues and values. 

An intellectual virtue refers to the way we approach inquiry. An intellectually virtuous citizen is someone who approaches problems and perspectives with open-mindedness, curiosity, honesty and resilience; they wish to know more and seek the truth, unafraid of what they might find.

If virtues are about the willingness to engage in inquiry, intellectual values are the cognitive tools needed to do so effectively. It’s essential in conversation to be able to speak with coherence; an argument that doesn’t meaningfully connect ideas is one that is confusing at best, and manipulative at worst. If we’re not able to share our thoughts and display them clearly, we’re just shouting at each other. 

Values and virtues are difficult to teach though. You can’t hold up flash cards and point to “fallibility” and say “okay, now remember that you can always be wrong”. We have cognitive biases that stand between us and accepting a virtue like “resilience to our own fallibility” – it feels bad to be wrong.  The way we learn these habits of mind is through practice, through acceptance and agreement. Teachers, parents and adults can all develop these habits explicitly through classroom activities, and implicitly by modelling these behaviours themselves. 

If a student can share their ideas without fear of being shut down by authority, they’ll develop greater clarity and coherence. They’ll be more open-minded about the ideas of others knowing they don’t have to defensively guard their beliefs.

To the original question of what it means to be a good citizen in a global context: we want our children to develop into conscientious adults. A good citizen is able to communicate their ideas and perspectives, and listen to the same from others. A good citizen can discern policy from platitude, and dilemmas from demagoguery. 

But it takes practice and time. It takes new challenges and new contexts and new ideas to train these habits. We don’t have to teach students the logistics of organising a revolution or how to get elected.  

And if we’re not teaching them when or why they should protest, we’re not teaching them to be good citizens at all.


The tyranny of righteous indignation

“It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth.”

I have been reflecting on this quotation from C.S. Lewis. It seems to contain a warning for our time, when people are inclined to gorge themselves on the nectar of “righteous indignation”. Intoxicated with the sensation of apparent virtue, the “righteous” then set upon anyone who falls beyond their moral pale.

Indignation has no natural political or ideological home. It can be found in equal measure among so-called “progressives” and “conservatives”. Each group has its own ranks of “shock troops” ready to do the most terrible things in the firm belief that they are justified by the rightness of their cause. They are like the sans-culottes of the French Revolution or the Nazi “Brown Shirts” of the last century — convinced that callous indifference should rule the fate of those whom they bully until their targets either retreat or break. Any sense of common decency, based in a recognition of a common humanity, is dissolved in the acid of resentment.

Fortunately, in Australia, righteous indignation rarely gives rise to violence on the streets. Instead, it is enacted, for the most part, in an online environment made all the more vicious by the fact that one can cause harm at a distance (and often cloaked in anonymity) without ever having to confront the awful reality of what is done.

My colleague, Tim Dean, has written about the ethics of outrage, which touches on a number of the matters that I point to above. My intention, here, is to look at a particular philosophical problem with righteous indignation — namely, its tendency to destroy (rather than express) a capacity for virtue. This is not simply a matter of philosophical interest.

This argument also has practical implications, especially for those who are truly committed to changing the world for the better.

In my experience, most of those who ultimately embrace the excesses of righteous indignation start from an entirely reasonable and defensible position. This is typically grounded in some form of ethical insight — usually relating to a genuine instance of wrong-doing, often involving the oppression of one group of people by another. For example, the ancien régime swept away by the French Revolution was corrupt. The Black Lives Matter (BLM) movement begins with a true insight that some lives (black lives) are being discounted as if they do not matter either at all or as much as the lives of others. Which is to say, BLM is based on valuing all lives. And so it is with most movements: they begin with a genuine and justifiable grievance, which risks being converted into something far less subtle and far more dangerous.

Robespierre undermined the integrity of the French Revolution by harnessing “The Terror” in service of his ideal – the creation of a “Republic of Virtue”. He unleashed the mob to ravage those who differed from them by even the slightest degree of privilege. To be “privileged” was to be doomed – as Robespierre himself discovered when the mob eventually turned on him and despatched him to the guillotine.

Unfortunately, every movement has its “Robespierres”. They amplify the general sense of there being a wrong that needs to be righted. They exploit the sentiments of those with a sincere desire to do good in the world. They fashion an “index of commitment” where you are judged by the extremity of your action — the more extreme, the more committed you are shown to be. Excess of zeal becomes a badge of merit, a token of sincerity.

So, what might this have to do with what I’ve called the destruction of virtue? The answer lies in the implications of “excess”. For Aristotle, the phronimos (the person of practical wisdom, or phronēsis) attains virtue when they rid themselves of the distorting lenses of vice so as to see (and act according to) the “golden mean” – a point between two vicious extremes. For example, the virtue of courage lies between the poles of “reckless indifference to the risk of harm” at one end and “hiding away from danger” at the other. That is, a courageous person has a clear appreciation of danger and takes a measured decision to overcome their fear and stand fast all the same.

Righteous indignation disavows the golden mean. Instead, it creates a counterfeit version of virtue in which the extreme is presented as the ideal.

This distortion leads otherwise good people, with good motives, in the service of a good cause, to do abominable things.

Worse still, those who act abominably are encouraged to think that their conduct is excused because done “in good conscience”. Yet another counterfeit.

It is easy enough to justify all manner of wrongdoing by an appeal to “conscience”. That is why one of the greatest exponents of conscience, St. Thomas Aquinas, insisted that we are only bound to act in conformance with a “well-informed conscience” — and as my friend Father Frank Brennan, SJ, would add, a “well-formed conscience”.

I think it sad to see so many people being sucked into a world of “righteous indignation” that has little, if any, relationship to a conscientious life of virtue. People of virtue exercise ethical restraint. They are not wantonly cruel. They do not index the intrinsic dignity of persons according to the colour of their skin, their culture and beliefs, their sex and gender or their socio-economic status. They know that “two wrongs do not make a right”.

Instead of tormenting others for their own good — and, perhaps, for the good of the world — the virtuous will seek to engage and persuade, exemplifying (rather than subverting) the ideals they seek to promote. If ever there is to be a “Republic of Virtue”, it will have no place for the righteously indignant.

 

This article originally appeared on ABC Religion & Ethics.


Freedom and disagreement: How we move forward

As it stands, the term “freedom” is being utilised as though it means the same thing across a variety of communities.

In the absence of a commitment to expand discourse between disagreeing parties, we may regrettably find ourselves in an increasingly polarised society, riven into groups that regard communication with one another as a definitively hopeless exercise.

Freedom and its nuances have come poignantly into focus over the past two and a half years. Ongoing deliberation about pandemic rules and regulations has seen the notion employed in myriad ways.

For some, freedom, and one’s right to it, has meant demanding that particular methods of curtailing viral spread remain optional and never be mandated. For others, the freedom to retain access to secure healthcare systems and avoid acquiring illness has meant calling for preventative methods to be enforced, heavily monitored, and in some cases made mandatory. Across most perspectives, individual freedoms are taken as having been directly impacted to degrees not previously experienced.

The concept of freedom, for better or worse, reliably takes centre stage in much political debate. The ways we conceive of it and deliberate on it impact our evaluation of governmental action. It appears, however, that the term often comes into conflict with itself, forming what may be characterised as a linguistic impasse, where one usage of freedom is evidently incompatible with another.

Canonical political philosopher Hannah Arendt addresses the largely elusive (albeit valorised) term in her paper, ‘What is Freedom?’. In it, she emphasises its inherent confusion: “…it becomes as impossible to conceive of freedom or its opposite as it is to realize the notion of a square circle.” Despite the linguistic and conceptual red flags which freedom bears, Arendt, and many others, persist in grappling with the topic in their work.

I am also sympathetic to the commitment to ongoing deliberation on the matter of freedom. Devoting time to understanding visible impasses which arise in its usage appears vital. Doing so aids in encouraging productive discussions on matters of how states should act, and to what extent a populace should comply with political directives.

As these are discussions central to the maintenance and progression of liberal democratic societies, we should feel motivated to formulate responses in situations where one conception of freedom comes into conflict with another.

The seemingly inevitable inaccuracies embedded within the concept of freedom, and the difficulties inherent in the project of discriminating between the emancipatory and the oppressive, remain evident across recent political rhetoric and subsequent public response. From 2020, many politicians began using the term freedom to emphasise the need to make present sacrifices for future gains, and to secure safety for vulnerable populations by curtailing the spread of COVID-19. For some, this placed politicians in the camp of the emancipators, working to defend our freedom against a looming viral force. In contrast, those who opposed measures such as lockdowns took the relevant enforcers to be oppressors, acting in total opposition to a treasured freedom of movement and individuated determination.

Both those in favour of and those opposed to lockdowns were seen utilising the term freedom in the public arena, yet freedom for the former necessarily required a certain degree of political intervention, while freedom for the latter firmly required sovereignty from political reach. This self-oriented sovereignty is one in which freedom is experienced not in our relation to others, but individualistically, through the deployment of free will and a guarantee of political non-interference. While these two differing utilisations of freedom are broad and not immutable, they do provide a useful starting point from which to assess contemporary impasses.

Commitment to sovereignty as necessary for a commitment to freedom is not a new position, nor is it reliably misplaced. The individual who decries re-introduced mask mandates, or vaccines being made compulsory in workplaces, evidently takes these actions as being incompatible with the maintenance of freedom, and with a free society more broadly. Their sovereignty from political interference is necessary for their freedom to persist.

Both historically and today, many have seen it as essential to measure their own freedoms by the degree to which states did not unduly intervene in the realms of education, religion or health. We know of countless instances in which political reach has choked public freedoms to undesirable extents. Those opposed to vaccine mandates, for example, may take freedom to begin wherever politics ends, thinking it best to safeguard their liberties with one hand while fending off political reach with the other.

In contrast, politicians and individuals who deem actions such as vaccine mandates, lockdowns, and the like as necessary for the maintenance of long-term social freedoms uphold a competing notion of freedom. For this group, politics and freedom are more than compatible: they are deeply contingent upon one another. On this conception, emancipation from political reach would result in a breakdown of society in which, inevitably, some personal liberties would be infringed upon by others.

This position, that freedom and politics are compatible, is articulated and advocated by Arendt. Arendt argues that sovereignty itself must be surrendered if a society is ever to be comprehensively free. This is because we do not occupy the earth as individuals but as communities, indeed political communities, which have been formed and continue to be maintained through the freedom of our wills. “If men wish to be free,” she writes, “it is precisely sovereignty they must renounce.”

The point here is not to say that individual rights are of no importance to political systems, or to freedom more broadly. Rather, it suggests that a comprehensive freedom cannot flourish in systems in which individuals remain committed to sovereignty above all else. Freedom is not located within the individual, but rather in the systems, or community, within which an individual operates.

We do not envy the freedom a prisoner possesses to retreat into the recesses of their own mind; we envy the person who is free to leave their home, and is safe in doing so, because a system has been politically and socially established to make it so.

When debates are being waged over freedom, we must begin with the acknowledgement that we (as individuals) are only ever as free as the broader communities in which we operate. Our own freedoms are contingent upon the political systems that we exist in, actively engage with, and mutually construct.

Assessing the disagreement, or linguistic impasse, between those who take political action as central to securing freedom and those who take freedom to begin where politics ends has certainly not allowed us to fully realise the square circle to which Arendt alludes. But from here, we may be better equipped to engage in discourse when we inevitably find one conception of freedom pitted against another.

Luckily, we are not resigned to letting present linguistic impasses on the matter of freedom mark the end of meaningful discourse. Rather, they can mark a beginning, as we are able to make efforts to rectify impasses that turn on this single word. Importantly, we have more language and words at our disposal, and many methods by which to use them. It is vital that we do.


Social media is a moral trap

Rarely a day goes by without Twitter, Facebook or some other social media platform exploding in outrage at some perceived injustice or offence.

It could be aimed at a politician, celebrity or just some hapless individual who happened to make an off-colour joke or wear the wrong dress to their school formal.

These outbursts of outrage are not without cost for everyone involved. Many people, especially women and people from minority backgrounds, have received death threats for simply expressing themselves online. Many more people have chosen to censor themselves for fear of a backlash from an online mob. And when the mob does go after a perceived wrongdoer, all too often the punishment far exceeds the crime. Targeted individuals have been ostracised from their communities, sacked from their jobs, and in some cases taken their own lives.

How did we get here?

Social media was supposed to unite us. It was supposed to forge stronger social connections. It was meant to bridge conventional barriers like wealth, class, ethnicity or geography. It was supposed to be a platform where we could express ourselves freely. Where did it all go so horribly wrong?

It’s tempting to say that something must be broken, either the social media platforms or ourselves. But in fact, both are working as intended.

When it comes to the social media platforms, they and their owners thrive on the traffic generated by viral outrage. The feedback mechanisms – ‘like’ buttons, comments and sharing – only serve to amplify it. Studies have shown that posts expressing anger or outrage are shared at a significantly higher rate than more measured posts.

In short, outrage generates engagement, and for social media companies, engagement equals profit.

When it comes to us, it turns out that our minds are working as intended too. At least, working as evolution intended.

Our minds are wired for outrage.

It was one of the moral emotions that evolution furnished us with to keep people behaving nicely tens of thousands of years ago, along with other emotions like empathy, guilt and disgust.

We may not think of outrage as being a ‘moral’ emotion, but that’s just what it is. Outrage is effectively a special kind of anger that we feel when someone does something wrong to us or someone else, and it motivates us to punish the wrongdoer. It’s what we feel when someone pushes in front of us in a queue or defrauds an elderly couple of their life savings. It’s also what we feel just about any time we log on to Twitter and look at the hashtags that are doing the rounds.

Well before the advent of social media, outrage served our ancestors well. It helped to keep the peace in small-scale hunter-gatherer societies. When someone stole, cheated or bullied other members of their band, outrage inspired the victims and onlookers to respond. Its contagious nature spread word of the wrongdoing throughout the band, creating a coalition of the outraged that could confront the miscreant and punish them if necessary.

Outrage wasn’t just something that individuals experienced. It was built to be shared. It inspired ‘strategic action’, where a number of people – possibly the whole band – would threaten or punish the wrongdoer. A common punishment was social isolation or ostracism, which was often tantamount to a death sentence in a world where everyone depended on everyone else for their survival. The modern equivalent would be ‘cancelling’.

But take this tendency to get fired up at wrongdoing and drop it on social media, and you have a recipe for misery.

All our minds needed was a platform that could feed them a steady stream of injustice and offence and they quickly became addicted to it.

Another problem with social media is that many of the injustices we witness are far removed from us, and we have little or no power to prevent them or to reform the wrongdoers directly. But that doesn’t stop us trying. Because we are rarely able to engage with the wrongdoer face-to-face, we resort to more indirect means of punishment, such as getting them fired or cancelling them.

In small-scale societies, the intense interdependence of each individual on everyone else in the band meant there were good reasons to limit punishment to only what was necessary to keep people in line. Actually, following through with ostracism could remove a valuable member of the community. Often just the threat of ostracism was enough to prevent harmful behaviour.

Not so on social media. The online target is usually so far removed from the outraged mob that there is little or no cost for the mob if the target is excised from their lives. The cost is low for the punishers but not necessarily for the punished.

Social media outrage isn’t only bad for the targets of the mob’s ire – it’s bad for the mob too. Unlike ancestral times, we now have access to an entire world of injustice about which to get outraged. We even have a word for the new tendency to seek out and overconsume bad news: doomscrolling. This can leave us with an impression that the world is filled with evil, corrupt and bad actors when, in fact, most people are genuinely decent.

And the mismatch between the unlimited scope of our perception and our limited ability to genuinely effect change can inspire despondency. This, in turn, can motivate us to seek out some way to recapture a sense of agency, even if that is limited to calling someone out on Twitter or sharing a dumb quote from a despised politician on Facebook. But what have we actually achieved, except to spread the outrage further?

The good news is that while we’re wired for outrage, and social media is built to exploit it, we are not slaves to our nature. We also evolved the ability to unshackle ourselves from the psychological baggage of our ancestors. That makes it possible for us to avoid the outrage trap.

If we care about making the world a better place – and saving ourselves and others from being constantly tired, angry and miserable – we can change the way we use social media. And this means we must change some of our habits.

It’s hard to resist outrage when we see it, just as it’s hard to resist the cookie you left on the kitchen counter. So put the cookie away. This doesn’t mean getting off social media entirely. But it does mean being careful about whom you follow. If the people you follow post nothing but outrage porn, you can choose to unfollow them. Follow people who share genuinely new or useful information instead. Replace the cookie with a piece of fruit.

And if you do come across something outrageous, you can decide what to do about it. Think about whether sharing it is going to actually fix the problem or whether you’re just seeking validation for your own feelings of fury.

Sometimes there are things we can share that will do good – there’s a role for increasing awareness of certain systemic injustices, as we’ve seen with the #metoo and Black Lives Matter movements. But if it’s just a tasteless joke, a political columnist trolling the opposition or someone who refuses to wear a mask at Bunnings, you can decide whether spreading it further is going to actually make things better. If not, don’t share it.

It’s not easy to inoculate ourselves against viral outrage. Our evolved moral minds have a powerful grip on our hearts. But if we want to genuinely combat injustice and harm, we need to take ownership of our behaviour and push back against outrage.