To see no longer means to believe: The harms and benefits of deepfake

BY Mehhma Malhi 18 FEB 2022
The use of deepfake technology is increasing as more companies devise different models.
Deepfake technology lets a user upload an image and synthetically alter a video of a real person, or generate a picture of a person who does not exist. Many people have raised concerns about the harmful possibilities of these technologies. Yet the notion of deception at the core of this technology is not entirely new. History is filled with examples of fraud, identity theft, and counterfeit artworks, all of which rest on imitation or on assuming a person’s likeness.
In 1846, the oldest gallery in the US, The Knoedler, opened its doors. By supplying art to some of the most famous galleries and collectors worldwide, it gained a reputation as a trusted source of expensive artwork, such as Rothkos and Pollocks. Unlike many other galleries, The Knoedler also allowed the public to purchase the pieces on display. Shockingly, in 2009, Ann Freedman, who had been appointed gallery director a decade earlier, was famously accused of knowingly selling fake artworks. The forgeries came to light after several buyers sought authentication and valuation of their purchases for insurance purposes. The scandal was sensational, not only because of the sheer number of artworks involved in a deception that lasted years, but also because millions of dollars were scammed from New York’s elite.
One of New York’s grand art institutions fell as the gallery lost its credibility and eventually shut down. Although the forgeries were near-exact replicas, almost indistinguishable from the originals, they lacked the artist’s emotion and originality, and with that the understanding and meaning of the works was lost. As a result, the artworks lost both their sentimental and their monetary value.
Yet this betrayal is not as immoral as stealing someone’s identity or committing fraud by forging their signature. Unlike with artwork, when someone’s identity is stolen, the thief gains the power to define how that person is perceived. Catfishing online, for example, allows a person to misrepresent not only themselves but also the identity of the person they are impersonating, because they ascribe particular values and activities to that person’s being and change how they are represented online.
Similarly, deepfakes allow people to create entirely fictional personas or take the likeness of a person and distort how they represent themselves online. Online self-representations are already augmented to some degree by the person. For instance, most individuals on Instagram present a highly curated version of themselves that is tailored specifically to garner attention and draw particular opinions.
But when that persona is out of the person’s control, it can spur rumours that become embedded as fact due to the nature of the internet. Celebrity tabloids are an example: celebrities’ love lives are continually speculated about, and these rumours spread and harden until the celebrity comes out to deny the claims. Even then, the story has to some degree damaged their reputation, since those tabloids will never be removed from the internet.
Maintaining control of one’s online image is paramount, because it preserves a person’s autonomy and ability to consent. When a deepfake is created of an existing person, it strips away both.
Before delving further into the ethical concerns, it helps to understand how the technology is developed, as this sheds light on some of the issues it raises.
The technology is derived from deep learning, a type of artificial intelligence based on neural networks: layered models that learn mappings from inputs to outputs. A deepfake model is built from two competing networks, known as the generator and the discriminator. The generator creates fake content, and the discriminator must judge whether each piece of content is authentic or fabricated. Each of the discriminator’s judgments feeds back into training: the discriminator is rewarded for catching fakes, while the generator is rewarded for producing fakes that slip past it. Together, this arrangement is known as a generative adversarial network (GAN). Over many rounds, the generator learns the patterns of real images well enough to fabricate convincing fake ones.
With this type of model, if the discriminator becomes too accurate too quickly, it gives the generator almost no useful feedback to improve from. Conversely, the generator can collapse into producing a narrow range of outputs that happen to fool the discriminator, and training gets stuck. Beyond these technical difficulties, however, the technology gives rise to several serious ethical concerns.
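The adversarial loop described above can be sketched in miniature. The toy below trains a GAN on a one-dimensional task: the “real” data are numbers drawn from a normal distribution centred at 4.0, the generator is a simple linear function, and the discriminator is a logistic classifier. The 1-D setup, parameter names, and learning rates are all illustrative assumptions; real deepfake systems use deep convolutional networks over images, but the generator/discriminator feedback structure is the same.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_gan(steps=2000, batch=32, lr=0.05, seed=0):
    rng = random.Random(seed)
    a, b = 1.0, 0.0   # generator: g(z) = a*z + b, starts far from the real data
    w, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(w*x + c), P(x is real)
    for _ in range(steps):
        reals = [rng.gauss(4.0, 1.0) for _ in range(batch)]
        zs = [rng.gauss(0.0, 1.0) for _ in range(batch)]
        fakes = [a * z + b for z in zs]

        # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
        gw = gc = 0.0
        for x in reals:
            err = sigmoid(w * x + c) - 1.0   # label 1 for real
            gw += err * x
            gc += err
        for x in fakes:
            err = sigmoid(w * x + c)         # label 0 for fake
            gw += err * x
            gc += err
        w -= lr * gw / (2 * batch)
        c -= lr * gc / (2 * batch)

        # Generator step: adjust (a, b) so the discriminator scores fakes as real.
        ga = gb = 0.0
        for z in zs:
            x = a * z + b
            err = sigmoid(w * x + c) - 1.0   # generator wants label 1
            ga += err * w * z
            gb += err * w
        a -= lr * ga / batch
        b -= lr * gb / batch
    return a, b, w, c

a, b, w, c = train_gan()
fake_mean = b  # E[a*z + b] = b, since z has mean 0
```

After training, the generator’s outputs cluster near the real data’s mean even though it never compares itself to the real samples directly; it learns only from the discriminator’s feedback, which is exactly the dynamic described above.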
Firstly, there have been concerns about political safety and women’s safety. Deepfake technology has advanced to the point where it can compile many synthesised frames into a convincing video. At first this seemed harmless, as many early adopters began using the technology in 2019 to make videos of politicians and celebrities appearing to sing along to popular songs. However, it has also been used to create videos of politicians saying provocative things.
Unlike Photoshop and other editing apps, which demand considerable skill or time to alter images, deepfake technology is far more straightforward, as it is attuned to mimicking a person’s voice and movements. Couple the technology’s capacity for realistic imagery with the vastness of the internet, and these videos risk entering echo chambers and epistemic bubbles where people may not realise they are fake. One primary concern, therefore, is that deepfake videos can be used to assert or consolidate dangerous thinking.
These tools could be used to edit photos or create videos that damage a person’s online reputation, and even if they are later refuted or proven fake, the images and their effects remain. Recently, countries such as the UK have faced demands for legislation that limits deepfake technology and the violence against women it enables. In particular, a slew of apps can “nudify” any individual, and they have been used predominantly against women; all a user needs to do is upload an image of a person. One version of this website received over 35 million hits in a few days. Used in this manner, deepfakes create non-consensual pornography that can be used to manipulate women, and the UK has accordingly called for stronger criminal laws on harassment and assault. As people’s lives increasingly merge with the virtual world, regulating these technologies becomes ever more important to protect individuals.
However, as with any technology, there are also positive uses. Deepfake technology can be used in medicine and education to create learning tools, and as an accessibility feature within other technology. It can recreate figures from history and has applications in gaming and the arts. In medicine, it can render fake patients whose synthetic data can be used in research, protecting real patients’ information and autonomy while still providing researchers with relevant data. Deepfake tech has also been used in marketing, helping small businesses promote their products by pairing them with celebrities.
Deepfake technology was first used by academics but popularised by online forums. It was not initially used to benefit people: its first widespread use was to visualise how certain celebrities would look in compromising positions. Only after the basis of the technology had been developed did various tech groups conceive of its genuine benefits.
Such technology often comes to fruition simply because a developer wills it and, given the lack of regulation, is released online with little oversight.
While there are extensive benefits to such technology, stricter regulations are needed, and people who abuse its scope ought to be held accountable. As our present reality merges with virtual spaces, a person’s online presence will only grow in importance, and stronger regulations must be put in place to protect people’s online personas.
Users should be held accountable for manipulating people’s likenesses and stripping away their autonomy; more specifically, developers must be held responsible when they use their expertise to build deepfake apps that actively harm.
To avoid a fallout like Knoedler’s, where distrust, skepticism, and hesitancy took root in the art community, individuals must be alerted whenever deepfake technology is employed; even where the use is positive, it should be disclosed. Some websites teach users how to differentiate between real and fake, and others process images to determine their validity.
Overall, this technology can help individuals gain agency; however, it can also limit another person’s right to autonomy and privacy. This type of AI makes the need for balance in technology uniquely clear.
BY Mehhma Malhi
Mehhma recently graduated from NYU, having majored in Philosophy and minored in Politics, Bioethics, and Art. She is now continuing her studies at Columbia University, pursuing a Master of Science in Bioethics. She is interested in refocusing the news to discuss why and how people form their personal opinions.
Big Thinker: Jean-Paul Sartre

Jean-Paul Sartre (1905–1980) is one of the best known philosophers of the 20th century, and one of few who became a household name. But he wasn’t only a philosopher – he was also a provocative novelist, playwright and political activist.
Sartre was born in Paris in 1905, and lived in France throughout his entire life. He was conscripted during the war, but was spared the front line due to his exotropia, a condition that caused his right eye to wander. Instead, he served as a meteorologist, but was captured by German forces as they invaded France in 1940. He spent several months in a prisoner of war camp, making the most of the time by writing, and then returned to occupied Paris, where he remained throughout the war.
Before, during and after the war, he and his lifelong partner, the philosopher and novelist Simone de Beauvoir, were frequent patrons of the coffee houses around Saint-Germain-des-Prés in Paris. There, they and other leading thinkers of the time, like Albert Camus and Maurice Merleau-Ponty, cemented the cliché of bohemian thinkers smoking cigarettes and debating the nature of existence, freedom and oppression.
Sartre started writing his most popular philosophical work, Being and Nothingness, while still in captivity during the war, and published it in 1943. In it, he elaborated on one of his core themes: phenomenology, the study of experience and consciousness.
Learning from experience
Many philosophers who came before Sartre were sceptical about our ability to get to the truth about reality. Philosophers from Plato through to René Descartes and Immanuel Kant believed that appearances were deceiving, and what we experience of the world might not truly reflect the world as it really is. For this reason, these thinkers tended to dismiss our experience as being unreliable, and thus fairly uninteresting.
But Sartre disagreed. He built on the work of the German phenomenologist Edmund Husserl to focus attention on experience itself. He argued that there was something “true” about our experience that is worthy of examination – something that tells us about how we interact with the world, how we find meaning and how we relate to other people.
The other branch of Sartre’s philosophy was existentialism, which looks at what it means to be beings that exist in the way we do. He said that we exist in two somewhat contradictory states at the same time.
First, we exist as objects in the world, just as any other object, like a tree or chair. He calls this our “facticity” – simply, the sum total of the facts about us.
The second way is as subjects. As conscious beings, we have the freedom and power to change what we are – to go beyond our facticity and become something else. He calls this our “transcendence,” as we’re capable of transcending our facticity.
However, these two states of being don’t sit easily with one another. It’s hard to think of ourselves as both objects and subjects at the same time, and when we do, it can be an unsettling experience. This experience inspires a central scene in Sartre’s most famous novel, Nausea (1938).
Freedom and responsibility
But Sartre thought we could escape the nausea of existence. We do this by acknowledging our status as objects, but also embracing our freedom and working to transcend what we are by pursuing “projects.”
Sartre thought this was essential to making our lives meaningful because he believed there was no almighty creator that could tell us how we ought to live our lives. Rather, it’s up to us to decide how we should live, and who we should be.
“Man is nothing else but what he makes of himself.”
This does place a tremendous burden on us, though. Sartre famously admitted that we’re “condemned to be free.” He wrote that “man” was “condemned, because he did not create himself, yet is nevertheless at liberty, and from the moment that he is thrown into this world he is responsible for everything he does.”
This radical freedom also means we are responsible for our own behaviour, and ethics to Sartre amounted to behaving in a way that didn’t oppress the ability of others to express their freedom.
Later in life, Sartre became a vocal political activist, particularly railing against the structural forces that limited our freedom, such as capitalism, colonialism and racism. He embraced many of Marx’s ideas and promoted communism for a while, but eventually became disillusioned with communism and distanced himself from the movement.
He continued to reinforce the power and the freedom that we all have, particularly encouraging the oppressed to fight for their freedom.
By the end of his life in 1980, he was a household name not only for his insightful and witty novels and plays, but also for his existentialist phenomenology, which is not just an abstract philosophy, but a philosophy built for living.
BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
Ethics Explainer: Beauty

Research shows that physical appearance can affect everything from the grades of students to the sentencing of convicted criminals – are looks and morality somehow related?
Ancient philosophers spoke of beauty as a supreme value, akin to goodness and truth. The word itself alluded to far more than aesthetic appeal, implying nobility and honour – its counterpart, ugliness, was made all the more shameful in comparison.
From the writings of Heraclitus to those of Plato, beautiful things were argued to be vital links between finite humans and the infinite divine. Indeed, across various cultures and epochs, beauty was praised as a virtue in and of itself; to be beautiful was to be good, and to be good was to be beautiful.
When people first began to ask, ‘what makes something (or someone) beautiful?’, they came up with some weird ideas – think Pythagorean triangles and golden ratios as opposed to pretty colours and chiselled abs. Such aesthetic ideals of order and harmony contrasted with the chaos of the time and are present throughout art history.

Leonardo da Vinci, Vitruvian Man, c.1490
These days, a more artificial understanding of beauty as a mere observable quality shared by supermodels and idyllic sunsets reigns supreme.
This is because the rise of modern science necessitated a reappraisal of many important philosophical concepts. Beauty lost relevance as a supreme value of moral significance in a time when empirical knowledge and reason triumphed over religion and emotion.
Yet, as the emergence of a unique branch of philosophy, aesthetics, revealed, many still wondered what made something beautiful to look at – even if, in the modern sense, beauty is only skin deep.
Beauty: in the eye of the beholder?
In the ancient and medieval era, it was widely understood that certain things were beautiful not because of how they were perceived, but rather because of an independent quality that appealed universally and was unequivocally good. According to thinkers such as Aristotle and Thomas Aquinas, this was determined by forces beyond human control and understanding.
Over time, this idea of beauty as entirely objective became demonstrably flawed. After all, if this truly were the case, then controversy wouldn’t exist over whether things are beautiful or not. For instance, to some, the Mona Lisa is a truly wonderful piece of art – to others, evidence that Da Vinci urgently needed an eye check.

Consequently, definitions of beauty that accounted for these differences in opinion began to gain credence. David Hume famously quipped that beauty “exists merely in the mind which contemplates”. To him and many others, the enjoyable experience associated with the consumption of beautiful things was derived from personal taste, making the concept inherently subjective.
This idea of beauty as a fundamentally pleasurable emotional response is perhaps the closest thing we have to a consensus among philosophers with otherwise divergent understandings of the concept.
Returning to the debate at hand: if beauty is not at least somewhat universal, then why do hundreds of thousands of people every year visit art galleries and cosmetic surgeons in pursuit of it? How can advertising companies sell us products on the premise that they will make us more beautiful if everyone has a different idea of what that looks like? Neither subjectivist nor objectivist accounts of the concept seem to adequately explain reality.
According to philosophers such as Immanuel Kant and Francis Hutcheson, the answer must lie somewhere in the middle. Essentially, they argue that a mind that can distance itself from its own individual beliefs can also recognize if something is beautiful in a general, objective sense. Hume suggests that this seemingly universal standard of beauty arises when the tastes of multiple, credible experts align. And yet, whether or not this so-called beautiful thing evokes feelings of pleasure is ultimately contingent upon the subjective interpretation of the viewer themselves.
Looking good vs being good
If this seemingly endless debate has only reinforced your belief that beauty is a trivial concern, then you are not alone! During modernity and postmodernity, philosophers largely abandoned the concept in pursuit of more pressing matters – read: nuclear bombs and existential dread. Artists also expressed their disdain for beauty, perceived as a largely inaccessible relic of tired ways of thinking, through an expression of the anti-aesthetic.

Nevertheless, we should not dismiss the important role beauty plays in our day-to-day life. Whilst its association with morality has long been out of vogue among philosophers, this is not true of broader society. Psychological studies continually observe a ‘halo effect’ around beautiful people and things that sees us interpret them in a more favourable light, leading attractive people to be paid higher wages and offered better loans than their less attractive peers.
Social media makes it easy to feel that we are not good enough, particularly when it comes to looks. Perhaps uncoincidentally, we are, on average, increasing our relative spending on cosmetics, clothing, and other beauty-related goods and services.
Turning to philosophy may help us avoid getting caught in a hamster wheel of constant comparison. From a classical perspective, the best way to achieve beauty is to be a good person. Or maybe you side with the subjectivists, who tell us that being beautiful is meaningless anyway. Irrespective, beauty is complicated, ever-important, and wonderful – so long as we do not let it unfairly cloud our judgements.
Step through the mirror and examine what makes someone (or something) beautiful and how this impacts all our lives. Join us for the Ethics of Beauty on Thur 29 Feb 2024 at 6:30pm. Tickets available here.
Why morality must evolve

If you read the news or spend any time on social media, then you’d be forgiven for thinking that there’s a lack of morality in the world today.
There is certainly no shortage of serious social and moral problems in the world. People are discriminated against just because of the colour of their skin. Many women don’t feel safe in their own home or workplace. Over 450 million children around the world lack access to clean water. There are whole industries that cause untold suffering to animals. New technologies like artificial intelligence are being used to create autonomous weapons that might slip from our control. And people receive death threats simply for expressing themselves online.
It’s easy to think that if only morality featured more heavily in people’s thinking, then the world would be a better place. But I’m not convinced. This might sound strange coming from a moral philosopher, but I have come to believe that the problem we face isn’t a lack of morality, it’s that there’s often too much. Specifically, too much moral certainty.
The most dangerous people in the world are not those with weak moral views – they are those with unwavering moral convictions. They are the ones who see the world in black and white, as if they are in a war between good and evil. They are the ones willing to kill or die to bring about their vision of utopia.
That said, I’m not quite ready to give up on morality yet. It sits at the heart of ethics and guides how we decide on what is good and bad. It’s still central to how we live our lives. But it also has a dark side, particularly in its power to inspire rigid black and white thinking. And it’s not just the extremists who think this way. We are all susceptible to it.
To show you what I mean, let me ask you what will hopefully be an easy question:
Is it wrong to murder someone, just because you don’t like the look of their face?
I’m hoping you will say it is wrong, and I’m going to agree with you, but when we look at what we mean when we respond this way, it can help us understand how we think about right and wrong.
When we say that something like this is wrong, we’re usually not just stating a personal preference, like “I simply prefer not to murder people, but I don’t mind if you do so”. Typically, we’re saying that murder for such petty reasons is wrong for everyone, always.
Statements like these seem to be different from expressions of subjective opinion, like whether I prefer chocolate to strawberry ice cream. It seems like there’s something objective about the fact that it’s wrong to murder someone because you don’t like the look of their face. And if someone suggests that it’s just a subjective opinion – that allowing murder is a matter of personal taste – then we’re inclined to say that they’re just plain wrong. Should they defend their position, we might even be tempted to say they’re not only mistaken about some basic moral truth, but that they’re somehow morally corrupt because they cannot appreciate that truth.
Murdering someone because you don’t like the look of their face is just wrong. It’s black and white.
This view might be intuitively appealing, and it might be emotionally satisfying to feel as if we have moral truth on our side, but it has two fatal flaws. First, morality is not black and white, as I’ll explain below. Second, it stifles our ability to engage with views other than our own, which we are bound to do in a large and diverse world.
So instead of solutions, we get more conflict: left versus right, science versus anti-vaxxers, a right to life versus a right to choose, free speech versus cancel culture. The list goes on.
Now, more than ever, we need to get away from this black and white thinking so we can engage with a complex moral landscape, and be flexible enough to adapt our moral views to solve the very real problems we face today.
The thing is, it’s not easy to change the way we think about morality. It turns out that it’s in our very nature to think about it in black and white terms.
As a philosopher, I’ve been fascinated by the story of where morality came from, and how we evolved from being a relatively anti-social species of ape a few million years ago to being the massively social species we are today.
Evolution plays a leading role in this story. It didn’t just shape our bodies, like giving us opposable thumbs and an upright stance. It also shaped our minds: it made us smarter, it gave us language, and it gave us psychological and social tools to help us live and cooperate together relatively peacefully. We evolved a greater capacity to feel empathy for others, to want to punish wrongdoers, and to create and follow behavioural rules that are set by our community. In short: we evolved to be moral creatures, and this is what has enabled our species to work together and spread to every corner of the globe.
But evolution often takes shortcuts. It often only makes things ‘good enough’ rather than optimising them. I mentioned we evolved an upright stance, but even that came at a cost. Just ask anyone over the age of 40 about their knees or lower back.
Evolution’s ‘good enough’ solution for making us nice to each other tens of thousands of years ago was to appropriate the way we evolved to perceive the world. For example, when you look at a ripe strawberry, what do you see? I’m guessing that most of you – assuming you are not colour blind – see it as being red. And when you bite into it, what do you taste? I’m guessing that you experience it as being sweet.
However, this is just a trick that our mind plays on us. There really is no ‘redness’ or ‘sweetness’ built into the strawberry. A strawberry is just a bunch of chemicals arranged in a particular way. It is our eyes and our taste buds that interpret these chemicals as ‘red’ or ‘sweet’. And it is our minds that trick us into believing these are properties of the strawberry rather than something that came from us.
The Scottish philosopher David Hume called this “projectivism”, because we project our subjective experience onto the world, mistaking it for being an objective feature of the world.
We do this in all sorts of contexts, not just with morality. This can help explain why we sometimes mistake our subjective opinions for being objective facts. Consider debates you may have had around the merits of a particular artist or musician, or that vexed question of whether pineapple belongs on pizza. It can feel like someone who hates your favourite musician is failing to appreciate some inherently good property of their music. But, at the end of the day, we will probably acknowledge that our music tastes are subjective, and it’s us who are projecting the property of “awesomeness” onto the sounds of our favourite song.
It’s not that different with morality. As the American psychologist, Joshua Greene, puts it: “we see the world in moral colours”. We absorb the moral values of our community when we are young, and we internalise them to the point where we see the world through their lens.
As with colours, we project our sense of right and wrong onto the world so that it looks like it was there all along, and this makes it difficult for us to imagine that other people might see the world differently.
In studying the story of human physical, psychological and cultural evolution, I learnt something else. While this is how evolution shaped our minds to see right and wrong, it’s not how morality has actually developed. Even though we’re wired to see our particular version of morality as being built into the world, the moral rules that we live by are really a human invention. They’re not black and white, but come in many different shades of grey.
You can think of these rules as being a cultural tool kit that sits on top of our evolved social nature. These tools are something that our ancestors created to help them live and thrive together peacefully. They helped to solve many of the inevitable problems that emerge from living alongside one another, like how to stop bullies from exploiting the weak, or how to distribute food and other resources so everyone gets a fair share.
But, crucially, different societies had different problems to solve. Some societies were small and roamed across resource-starved areas surrounded by friendly bands. Their problems were very different from those of a settled society defending its farmlands from hostile raiders. And their problems differed even more from those of a massive post-industrial state with a highly diverse population. Each had their own set of challenges to solve, and each came up with different solutions to suit their circumstances.
Those solutions also changed and evolved as their societies did. As social, environmental, technological and economic circumstances changed, societies faced new problems, such as rising inequality, conflict between diverse cultural groups or how to prevent industry from damaging the environment. So they had to come up with new solutions.
For an example of moral evolution, consider how attitudes towards punishing wrongdoing have varied among different societies and changed over time. Let’s start with a small-scale hunter-gatherer society, like that of the !Kung, dwelling in the Kalahari desert a little over a century ago.
If one member of the band started pushing others around, perhaps turning to violence to get their way, there were no police or law courts to keep them in line. Instead, it was left to individuals to keep their own justice. That’s why if a bully murdered a family member, it was not only permitted, but it was a moral duty for the family to kill the murderer. Revenge – and the threat thereof – was an important and effective tool in the !Kung moral toolkit.
Revenge played a similar role in many moral systems around the world and throughout history. Countless tales demonstrate its ubiquity in great works like the Iliad, the Mahabharata and Beowulf. In the Old Testament, God gives Moses the famous injunction allowing his people to take an eye for an eye and a tooth for a tooth.
But as societies changed, as they expanded, as people started interacting with more strangers, it turned out that revenge caused more problems than it solved. While it could be managed and contained in small-scale societies like the !Kung, in larger societies it could lead to feuds where extended family groups might get stuck in a cycle of counter-retaliation for generations, all started by a single regrettable event.
As societies changed, they created new moral tools to solve the new problems they faced, and they often discarded tools that no longer worked. That’s why the New Testament advises people to reject revenge and “turn the other cheek” instead.
Modern societies have the resources and institutions to outsource punishment to a specialised class of individuals in the form of police and law courts. When these institutions are well run and can be trusted, they have proven to be a highly effective means of keeping the peace, to the point that revenge and vigilante justice are now frowned upon.
This is moral evolution. This is how different societies have adapted to new ways of living, solving the new social problems that emerge as their societies and circumstances change.
(I must stress that this does not make !Kung morality inferior or less evolved than that of other societies. Just as all creatures alive today are equally evolved, so too are all extant moral systems. My point is not that there is a linear progression from small-scale to large-scale societies, from primitive to civilised; it’s that any moral system needs to fit the circumstances that the society finds itself in and change as those circumstances change.)
But there’s a catch: moral evolution has typically moved painfully slowly, not least because our innate tendency towards black and white thinking has stifled moral innovation.
This wasn’t such a problem 10,000 years ago, when living conditions would have remained relatively static for generations. In this case, there was less pressure to evolve and adapt the moral toolkit. But the world today is not like this. It is changing faster than ever before, and so we are forced to adapt faster than our minds might be comfortable with.
This means pushing back on our black and white tendencies and looking at morality through a new lens. Instead of seeing it as something that is fixed, we can look at it as a toolkit that we adapt to the problems at hand.
It also means acknowledging that many of the social and moral problems we face today have no single perfect solution. Many come in shades of grey, like deciding if free speech should give people a right to offend, or to what extent we should tolerate intolerance, or under what circumstances we should allow people to end their own lives. There is almost certainly no single set of moral rules that will solve these problems in every circumstance without also causing undesirable consequences.
On the other hand, we should also acknowledge that there are many social and moral problems that have more than one good solution. Consider one of the questions that sits at the centre of ethics: what constitutes a good life? There are likely going to be many good answers to that question. This remains the case even if some answers come into conflict with others, such as one perspective stressing individual freedom while another stresses greater community and interpersonal obligations.
This full-spectrum evolutionary perspective on morality can also help explain why there is such a tremendous diversity of conflicting moral viewpoints in the world today. For a start, many cultures are still wielding tools that were made to solve problems from a different time and have carried them into today’s world, such as tools that prioritise in-group loyalty at the cost of breeding suspicion of outsiders. Some conservative cultures are reluctant to give these tools up, even if they are obsolete.
Other tools were never very good at their job, or they were co-opted by an elite for their own benefit to the detriment of others, such as tools that subjugated women or disenfranchised certain minorities.
Other tools are responses to different conceptions of the good life. Some represent the trade-off that is built into many moral problems. And there is constant production of new and experimental tools that have yet to be tested. Some will prove to work well and may be kept, while others will fall short, or cause more problems than they solve, and will be dropped.
One thing is clear: the world we are living in today is unlike anything our distant ancestors faced. It is larger, more complex, more diverse and more interconnected than it has ever been, and we are hearing from voices that once were silenced. The world is changing faster than ever before.
This might be taxing for our slow-moving black and white minds – and we should forgive ourselves and others for being human – but we must adapt our moral views to the world of today, and not rely on the old solutions of yesterday.
This calls for each of us to be mindful of how we think about morality, our black and white tendencies, and whether the views we inherited from our forebears are the best ones to solve the serious problems we face today. It also means we must rethink morality as being a human invention, a toolkit that can be adapted as the world changes, with many new problems and many tools that can solve them.
What matters today is not clinging to the moral views we were raised with, but looking at each problem, listening to each other, and working together to find the best solution. What we need now is genuine moral evolution.
BY Dr Tim Dean
Dr Tim Dean is Philosopher in Residence at The Ethics Centre and author of How We Became Human: And Why We Need to Change.
Why we should be teaching our kids to protest
BY Dr Luke Zaphir 3 FEB 2022
When the Prime Minister says classrooms shouldn’t be political and students should stay in school, that’s an implicit argument about what kinds of citizen he thinks we should have.
It’s not unreasonable. The type of citizen who has not gone out to protest will have certain habits and dispositions that are desirable. Hard-working, diligent, focused. However, the question of what it means to be a citizen and how to become one is complicated, and no one person holds the whole truth about it.
Let’s go back to basics though. What’s the point of education? It’s to prepare children for life. Many would claim it’s to get children ready for work, but if that were the case we would put them in training facilities rather than schools. Our education systems have many tasks – to make children work-ready, to be sure, but also to develop their personhood, to allow them to engage in society, to help them flourish. Every part of the curriculum, from the General Capabilities like critical and creative thinking to discipline-specific areas like technologies, is designed to provide young people with the skills, knowledge and dispositions necessary for being 21st century citizens.
What many don’t realise is that learning what it means to be a citizen isn’t localised to the curriculum. Interactions with parents, teachers, with each other, with news and social media – all of these contribute to the definition of a citizen.
Every time a politician says that children should be seen and not heard, that’s an indication of the type of citizen they want.
Most politicians don’t want kids out protesting, after all – not only is it disruptive to whatever is being taught in school that day, it looks bad for them on the news. Protests are bad news for politicians in general, and if children are involved, there’s no good way to spin it.
But we do want children to learn how to protest. We want them to be able to see corruption, have discussions and heated debates, and embrace complexity. Everyone in a democracy should have the ability to say their piece and be heard. We have already recognised this: persuasion has been a major part of education for years.
However, when we talk about this, we need to recognise that we aren’t just talking about skills or knowledge. This isn’t putting together a pithy response or clever tweet. It’s about being capable of contributing to public discourse, and for that, we need children to hold certain intellectual virtues and values.
An intellectual virtue refers to the way we approach inquiry. An intellectually virtuous citizen is someone who approaches problems and perspectives with open-mindedness, curiosity, honesty and resilience; they wish to know more, seek the truth, and are unafraid of where it may lead.
If virtues are about the willingness to engage in inquiry, intellectual values are the cognitive tools needed to do so effectively. It’s essential in conversation to be able to speak with coherence; an argument that doesn’t meaningfully connect ideas is one that is confusing at best, and manipulative at worst. If we’re not able to share our thoughts and display them clearly, we’re just shouting at each other.
Values and virtues are difficult to teach though. You can’t hold up flash cards and point to “fallibility” and say “okay, now remember that you can always be wrong”. We have cognitive biases that stand between us and accepting a virtue like “resilience to our own fallibility” – it feels bad to be wrong. The way we learn these habits of mind is through practice, through acceptance and agreement. Teachers, parents and adults can all develop these habits explicitly through classroom activities, and implicitly by modelling these behaviours themselves.
If a student can share their ideas without fear of being shut down by authority, they’ll develop greater clarity and coherence. They’ll be more open-minded about the ideas of others knowing they don’t have to defensively guard their beliefs.
To the original question of what it means to be a good citizen in a global context: we want our children to develop into conscientious adults. A good citizen is able to communicate their ideas and perspectives, and listen to the same from others. A good citizen can discern policy from platitude, and dilemmas from demagoguery.
But it takes practice and time. It takes new challenges and new contexts and new ideas to train these habits. We don’t have to teach students the logistics of organising a revolution or how to get elected.
But if we’re not teaching them when or why they should protest, we’re not teaching them to be good citizens at all.
BY Dr Luke Zaphir
Luke is a researcher for the University of Queensland's Critical Thinking Project. He completed a PhD in philosophy in 2017, writing about non-electoral alternatives to democracy. Democracy without elections is a difficult goal to achieve, though, requiring a much greater level of education from citizens and more deliberate forms of engagement. Thus he's also a practising high school teacher in Queensland, where he teaches critical thinking and philosophy.
The tyranny of righteous indignation

“It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth.”
I have been reflecting on this quotation from C.S. Lewis. It seems to contain a warning for our time, when people are inclined to gorge themselves on the nectar of “righteous indignation”. Intoxicated with the sensation of apparent virtue, the “righteous” then set upon anyone who falls beyond their moral pale.
Indignation has no natural political or ideological home. It can be found in equal measure among so-called “progressives” and “conservatives”. Each group has its own ranks of “shock troops” ready to do the most terrible things in the firm belief that they are justified by the rightness of their cause. They are like the sans-culottes of the French Revolution or the Nazi “Brown Shirts” of the last century — convinced that callous indifference should rule the fate of those whom they bully until their targets either retreat or break. Any sense of common decency, based in a recognition of a common humanity, is dissolved in the acid of resentment.
Fortunately, in Australia, righteous indignation rarely gives rise to violence on the streets. Instead, it is enacted, for the most part, in an online environment made all the more vicious by the fact that one can cause harm at a distance (and often cloaked in anonymity) without ever having to confront the awful reality of what is done.
My colleague, Tim Dean, has written about the ethics of outrage, which touches on a number of the matters that I point to above. My intention, here, is to look at a particular philosophical problem with righteous indignation — namely, its tendency to destroy (rather than express) a capacity for virtue. This is not simply a matter of philosophical interest.
This argument also has practical implications, especially for those who are truly committed to changing the world for the better.
In my experience, most of those who ultimately embrace the excesses of righteous indignation start from an entirely reasonable and defensible position. This is typically grounded in some form of ethical insight — usually relating to a genuine instance of wrong-doing, often involving the oppression of one group of people by another. For example, the ancien régime swept away by the French Revolution was corrupt. The Black Lives Matter (BLM) movement begins with a true insight that some lives (black lives) are being discounted as if they do not matter either at all or as much as the lives of others. Which is to say, BLM is based on valuing all lives. And so it is with most movements: they begin with a genuine and justifiable grievance, which risks being converted into something far less subtle and far more dangerous.
Robespierre undermined the integrity of the French Revolution by harnessing “The Terror” in service of his ideal — the creation of a “Republic of Virtue”. He unleashed the mob to ravage those who differed from them even to the slightest degree of privilege. To be “privileged” was to be doomed — as Robespierre himself discovered when the mob eventually turned on him and despatched him to the guillotine.
Unfortunately, every movement has its “Robespierres”. They amplify the general sense of there being a wrong that needs to be righted. They exploit the sentiments of those with a sincere desire to do good in the world. They fashion an “index of commitment” where you are judged by the extremity of your action — the more extreme, the more committed you are shown to be. Excess of zeal becomes a badge of merit, a token of sincerity.
So, what might this have to do with what I’ve called the destruction of virtue? The answer lies in the implications of “excess”. For Aristotle, the phronimos (the person of practical wisdom, or phronēsis) attains virtue when they rid themselves of the distorting lenses of vice so as to see (and act according to) the “golden mean” — a point between two vicious extremes. For example, the virtue of courage lies between the poles of “reckless indifference to the risk of harm” at one end and “hiding away from danger” at the other. That is, a courageous person has a clear appreciation of danger and takes a measured decision to overcome their fear and stand fast all the same.
Righteous indignation disavows the golden mean. Instead, it creates a counterfeit version of virtue in which the extreme is presented as the ideal.
This distortion leads otherwise good people, with good motives, in the service of a good cause, to do abominable things.
Worse still, those who act abominably are encouraged to think that their conduct is excused because done “in good conscience”. Yet another counterfeit.
It is easy enough to justify all manner of wrongdoing by an appeal to “conscience”. That is why one of the greatest exponents of conscience, St. Thomas Aquinas, insisted that we are only bound to act in conformance with a “well-informed conscience” — and as my friend Father Frank Brennan, SJ, would add, a “well-formed conscience”.
I think it sad to see so many people being sucked into a world of “righteous indignation” that has little, if any, relationship to a conscientious life of virtue. People of virtue exercise ethical restraint. They are not wantonly cruel. They do not index the intrinsic dignity of persons according to the colour of their skin, their culture and beliefs, their sex and gender or their socio-economic status. They know that “two wrongs do not make a right”.
Instead of tormenting others for their own good — and, perhaps, for the good of the world — the virtuous will seek to engage and persuade, exemplifying (rather than subverting) the ideals they seek to promote. If ever there is to be a “Republic of Virtue”, it will have no place for the righteously indignant.
This article originally appeared on ABC Religion & Ethics.
BY Simon Longstaff
Simon Longstaff began his working life on Groote Eylandt in the Northern Territory of Australia. He is proud of his kinship ties to the Anindilyakwa people. After a period studying law in Sydney and teaching in Tasmania, he pursued postgraduate studies as a Member of Magdalene College, Cambridge. In 1991, Simon commenced his work as the first Executive Director of The Ethics Centre. In 2013, he was made an officer of the Order of Australia (AO) for “distinguished service to the community through the promotion of ethical standards in governance and business, to improving corporate responsibility, and to philosophy.” Simon is an Adjunct Professor of the Australian Graduate School of Management at UNSW, a Fellow of CPA Australia, the Royal Society of NSW and the Australian Risk Policy Institute.
Big Thinker: Kate Manne

Kate Manne (1983 – present) is an Australian philosopher who works at the intersection of feminist philosophy, metaethics, and moral psychology.
While Manne is an academic philosopher by training and practice, she is best known for her contributions to public philosophy. Her work draws upon the methodology of analytic philosophy to dissect the interrelated phenomena of misogyny and masculine entitlement.
What is misogyny?
Manne’s debut book, Down Girl: The Logic of Misogyny (2018), develops and defends a robust definition of misogyny that allows us to better analyse the prevalence of violence and discrimination against women in contemporary society. Contrary to popular belief, Manne argues, misogyny is not a “deep-seated psychological hatred” of women, most often exhibited by men. Instead, she conceives of misogyny in structural terms, arguing that it is the “law enforcement” branch of patriarchy (male-dominated society and government), which exists to police the behaviour of women and girls through gendered norms and expectations.
Manne distinguishes misogyny from sexism by suggesting that the latter is more concerned with justifying and naturalising patriarchy through the spread of ideas about the relationship between biology, gender and social roles.
While the two concepts are closely related, Manne believes that people are capable of being misogynistic without consciously holding sexist beliefs. This is because misogyny, much like racism, is systemic and capable of flourishing regardless of someone’s psychological beliefs.
One of the most distinctive features of Manne’s philosophical work is that she interweaves case studies from public and political life into her writing to powerfully motivate her theoretical claims.
For instance, in Down Girl, Manne offers up the example of Julia Gillard’s famous misogyny speech from October 2012 as evidence of the distinction between sexism and misogyny in Australian politics. She contends that Gillard’s characterisation of then Opposition Leader Tony Abbott’s behaviour toward her as both sexist and misogynistic is entirely apt. His comments about the suitability of women to politics and characterisation of female voters as immersed in housework display sexist values, while his endorsement of statements like “Ditch the witch” and “man’s bitch” are designed to shame and belittle Gillard in accordance with misogyny.
Himpathy and herasure
One of the key concepts coined by Kate Manne is “himpathy”. She defines himpathy as “the disproportionate or inappropriate sympathy extended to a male perpetrator over his similarly, or less privileged, female targets in cases of sexual assault, harassment, and other misogynistic behaviour.”
According to Manne, himpathy operates in concert with misogyny. While misogyny seeks to discredit the testimony of women in cases of gendered violence, himpathy shields the perpetrators of that misogynistic behaviour from harm to their reputation by positioning them as “good guys” who are the victims of “witch hunts”. Consequently, the traumatic experiences of those women and their motivations for seeking justice are unfairly scrutinised and often disbelieved. Manne terms the impact of this social phenomenon upon women, “herasure.”
Manne’s book Entitled: How Male Privilege Hurts Women (2020) illustrates the potency of himpathy by analysing the treatment of Brett Kavanaugh during the Senate Judiciary Committee’s investigation into allegations of sexual assault levelled against Kavanaugh by Professor Christine Blasey Ford. Manne points to the public’s praise of Kavanaugh as a brilliant jurist being unfairly defamed by a woman seeking to derail his appointment to the Supreme Court of the United States as an example of himpathy in action.
She also suggests that the public scrutiny of Ford’s testimony and the conservative media’s attack on her character functioned to diminish her credibility in the eyes of the law and erase her experiences. The Senate’s ultimate endorsement of Justice Kavanaugh’s appointment to the Supreme Court proved Manne’s thesis – that male entitlement to positions of power is a product of patriarchy and serves to further entrench misogyny.
Evidently, Kate Manne is a philosopher who doesn’t shy away from thorny social debates. Manne’s decision to enliven her philosophical work with empirical evidence allows her to reach a broader audience and to increase the accessibility of philosophy for the public. She represents a new generation of female philosophers – brave, bold, and unapologetically political.
BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
Freedom and disagreement: How we move forward
BY Georgia Fagan 17 JAN 2022
As it stands, the term “freedom” is being utilised as though it means the same thing across a variety of communities.
In the absence of a commitment to expand discourse between disagreeing parties, we may regrettably find ourselves occupying an increasingly polarised society, riven by groups that treat communication with one another as a definitively hopeless exercise.
Freedom and its nuances have come sharply into focus over the past two and a half years. Ongoing deliberation about pandemic rules and regulations has seen the notion employed in myriad ways.
For some, freedom, and one’s right to it, has meant demanding that particular methods of curtailing viral spread remain optional and never be mandated. For others, the freedom to retain access to secure healthcare systems and avoid acquiring illness has meant calling for preventative methods to be enforced, heavily monitored, and in some cases made mandatory. Across most perspectives, individual freedoms are taken as having been directly impacted to degrees not previously experienced.
The concept of freedom, for better or worse, reliably takes centre stage in much political debate. The ways we conceive of it and deliberate on it impact our evaluation of governmental action. It appears, however, that the term often comes into conflict with itself, forming what may be characterised as a linguistic impasse, where one usage of freedom is evidently incompatible with another.
Canonical political philosopher Hannah Arendt addresses the largely elusive (albeit valorised) term in her paper, ‘What is Freedom?’. In it, she emphasises its inherent confusion: “…it becomes as impossible to conceive of freedom or its opposite as it is to realize the notion of a square circle.” Despite the linguistic and conceptual red flags which freedom bears, Arendt, and many others, persist in grappling with the topic in their work.
I am also sympathetic to the commitment to ongoing deliberation on the matter of freedom. Devoting time to understanding visible impasses which arise in its usage appears vital. Doing so aids in encouraging productive discussions on matters of how states should act, and to what extent a populace should comply with political directives.
As these are discussions central to the maintenance and progression of liberal democratic societies, we should feel motivated to formulate responses in situations where one conception of freedom comes into conflict with another.
The seemingly inevitable inaccuracies embedded within the concept of freedom, and the difficulty of discriminating between the emancipatory and the oppressive, remain evident across recent political rhetoric and the public response to it. From 2020, many politicians began using the term freedom to emphasise the need to make present sacrifices for future gain, and to secure safety for vulnerable populations by curtailing the spread of COVID-19. For some, this placed politicians in the camp of the emancipators, working to defend our freedom against a looming viral force. In contrast, those who opposed measures such as lockdowns took the relevant enforcers to be oppressors, acting in total opposition to a treasured freedom of movement and individual self-determination.
Both those in favour of and those opposed to lockdowns were seen utilising the term freedom in the public arena, yet freedom for the former necessarily required a certain degree of political intervention, while freedom for the latter firmly required sovereignty from political reach. This self-oriented sovereignty is one in which freedom is experienced not in our relation to others but individualistically, through the exercise of free will and a guarantee of political non-interference. While these two differing usages of freedom are broad and not immutable, they provide a useful starting point from which to assess contemporary impasses.
Commitment to sovereignty as necessary for a commitment to freedom is not a new position, nor is it reliably misplaced. The individual who decries re-introduced mask mandates, or vaccines being made compulsory in workplaces evidently takes these actions as being incompatible with the maintenance of freedom, and a free society more broadly. Their sovereignty from political interference is necessary for their freedom to persist.
Both historically and contemporarily, many have seen it as essential to measure their own freedoms by the degree to which states did not unduly intervene in the realms of education, religion, or health. We know of countless instances where political reach has choked public freedoms to undesirable extents. Those in opposition to vaccine mandates, for example, may take freedom to begin wherever politics ends, thinking it best to safeguard their liberties with one hand while defending against political reach with the other.
In contrast, politicians and individuals who deem actions such as vaccine mandates, lockdowns, and the like necessary for the maintenance of long-term social freedoms uphold a competing notion of freedom. For this group, politics and freedom are more than compatible; they are deeply contingent upon one another. On this conception, emancipation from political reach would result in a breakdown of society, where some personal liberties would inevitably be infringed upon by others.
This position on the compatibility of freedom and politics is articulated and advocated by Arendt, who argues that sovereignty itself must be surrendered if a society is ever to be comprehensively free. This is because we do not occupy the earth as individuals but as communities – political communities, moreover – which have been formed and continue to be maintained through the freedom of our wills. “If men wish to be free,” she writes, “it is precisely sovereignty they must renounce.”
The point here is not to say that individual rights are of no importance to political systems, or to freedom more broadly. Rather, it suggests that a comprehensive freedom cannot flourish in systems in which individuals remain committed to sovereignty above all else. Freedom is not located within the individual, but rather in the systems, or community, within which an individual operates.
We do not envy the freedom a prisoner possesses to retreat into the recesses of their own mind, we envy the person who is free to leave their home, and is safe in doing so, because a system has been politically and socially established to make it as such.
When debates are being waged over freedom, we must begin with the acknowledgement that we (as individuals) are only ever as free as the broader communities in which we operate. Our own freedoms are contingent upon the political systems that we exist in, actively engage with, and mutually construct.
Assessing the disagreement, or linguistic impasse, between those who take political action as central to securing freedom and those who take freedom to begin where politics ends has certainly not fully allowed us to realise the notion of a square circle to which Arendt alludes. Though from here, we may be better equipped to engage in discourse when we inevitably find one conception of freedom being pitted against another.
We are luckily not resigned to let present linguistic impasses on the matter of freedom mark the end of meaningful discourse. Rather, they can mark the beginning, as we are able to make efforts to rectify impasses that turn on this single word. Importantly, we have more language and words at our disposal, and many methods by which to use them. It is vital that we do.
BY Georgia Fagan
Georgia has an academic and professional background in applied ethics, feminism and humanitarian aid. They are currently completing a Masters of Philosophy at the University of Sydney on the topic of gender equality and pragmatic feminist ethics. Georgia also holds a degree in Psychology and undertakes research on cross-cultural feminist initiatives in Bangladeshi refugee camps.
Social media is a moral trap

Rarely a day goes by without Twitter, Facebook or some other social media platform exploding in outrage at some perceived injustice or offence.
It could be aimed at a politician, celebrity or just some hapless individual who happened to make an off-colour joke or wear the wrong dress to their school formal.
These outbursts of outrage are not without cost for everyone involved. Many people, especially women and people from minority backgrounds, have received death threats for simply expressing themselves online. Many more people have chosen to censor themselves for fear of a backlash from an online mob. And when the mob does go after a perceived wrongdoer, all too often the punishment far exceeds the crime. Targeted individuals have been ostracised from their communities, sacked from their jobs, and in some cases taken their own lives.
How did we get here?
Social media was supposed to unite us. It was supposed to forge stronger social connections. It was meant to bridge conventional barriers like wealth, class, ethnicity or geography. It was supposed to be a platform where we could express ourselves freely. Where did it all go so horribly wrong?
It’s tempting to say that something must be broken, either the social media platforms or ourselves. But in fact, both are working as intended.
When it comes to the social media platforms, they and their owners thrive on the traffic generated by viral outrage. The feedback mechanisms – ‘like’ buttons, comments and sharing – only serve to amplify it. Studies have shown that posts expressing anger or outrage are shared at a significantly higher rate than more measured posts.
In short, outrage generates engagement, and for social media companies, engagement equals profit.
When it comes to us, it turns out that our minds are working as intended too. At least, working as evolution intended.
Our minds are wired for outrage.
It was one of the moral emotions that evolution furnished us with to keep people behaving nicely tens of thousands of years ago, along with other emotions like empathy, guilt and disgust.
We may not think of outrage as being a ‘moral’ emotion, but that’s just what it is. Outrage is effectively a special kind of anger that we feel when someone does something wrong to us or someone else, and it motivates us to punish the wrongdoer. It’s what we feel when someone pushes in front of us in a queue or defrauds an elderly couple of their life savings. It’s also what we feel just about any time we log on to Twitter and look at the hashtags that are doing the rounds.
Well before the advent of social media, outrage served our ancestors well. It helped to keep the peace in small-scale hunter-gatherer societies. When someone stole, cheated or bullied other members of their band, outrage inspired the victims and onlookers to respond. Its contagious nature spread word of the wrongdoing throughout the band, creating a coalition of the outraged that could confront the miscreant and punish them if necessary.
Outrage wasn’t just something that individuals experienced. It was built to be shared. It inspired ‘strategic action’, where a number of people – possibly the whole band – would threaten or punish the wrongdoer. A common punishment was social isolation or ostracism, which was often tantamount to a death sentence in a world where everyone depended on everyone else for their survival. The modern equivalent would be ‘cancelling’.
But take this tendency to get fired up at wrongdoing and drop it on social media, and you have a recipe for misery.
All our minds needed was a platform that could feed them a steady stream of injustice and offence and they quickly became addicted to it.
Another problem with social media is that many of the injustices we witness are far removed from us, and we have little or no power to prevent them or to reform the wrongdoers directly. But that doesn’t stop us trying. Because we are rarely able to engage with the wrongdoer face-to-face, we resort to more indirect means of punishment, such as getting them fired or cancelling them.
In small-scale societies, the intense interdependence of each individual on everyone else in the band meant there were good reasons to limit punishment to only what was necessary to keep people in line. Actually following through with ostracism could remove a valuable member of the community. Often, just the threat of ostracism was enough to prevent harmful behaviour.
Not so on social media. The online target is usually so far removed from the outraged mob that there is little or no cost for the mob if the target is extricated from their lives. The cost is low for the punishers but not necessarily for the punished.
Social media outrage isn’t only bad for the targets of the mob’s ire – it’s bad for the mob too. Unlike ancestral times, we now have access to an entire world of injustice about which to get outraged. We even have a word for the new tendency to seek out and overconsume bad news: doomscrolling. This can leave us with an impression that the world is filled with evil, corrupt and bad actors when, in fact, most people are genuinely decent.
And the mismatch between the unlimited scope of our perception, and the limited ability for us to genuinely effect change, can inspire despondency. This, in turn, can motivate us to seek out some way for us to recapture a sense of agency, even if that is limited to calling someone out on Twitter or sharing a dumb quote from a despised politician on Facebook. But what have we actually achieved, except to spread the outrage further?
The good news is that while we’re wired for outrage, and social media is built to exploit it, we are not slaves to our nature. We also evolved the ability to unshackle ourselves from the psychological baggage of our ancestors. That makes it possible for us to avoid the outrage trap.
If we care about making the world a better place – and saving ourselves and others from being constantly tired, angry and miserable – we can change the way we use social media. And this means we must change some of our habits.
It’s hard to resist outrage when we see it, like it’s hard to resist that cookie that you left on the kitchen counter. So put the cookie away. This doesn’t mean getting off social media entirely. But it does mean being careful about whom you follow. If the people you follow are only posting outrage porn, then you can choose to unfollow them. Follow people who share genuinely new or useful information instead. Replace the cookie with a piece of fruit.
And if you do come across something outrageous, you can decide what to do about it. Think about whether sharing it is going to actually fix the problem or whether you’re just seeking validation for your own feelings of fury.
Sometimes there are things we can share that will do good – there’s a role for increasing awareness of certain systemic injustices, as we’ve seen with the #metoo and Black Lives Matter movements. But if it’s just a tasteless joke, a political columnist trolling the opposition or someone who refuses to wear a mask at Bunnings, you can decide whether spreading it further is going to actually make things better. If not, don’t share it.
It’s not easy to inoculate ourselves against viral outrage. Our evolved moral minds have a powerful grip on our hearts. But if we want to genuinely combat injustice and harm, we need to take ownership of our behaviour and push back against outrage.
BY Dr Tim Dean
Dr Tim Dean is Philosopher in Residence at The Ethics Centre and author of How We Became Human: And Why We Need to Change.
Ethics Explainer: Pragmatism

Pragmatism is a philosophical school of thought that, broadly, is interested in the effects and usefulness of theories and claims.
Pragmatism is a distinct school of philosophical thought that began at Harvard University in the late 19th century. Charles Sanders Peirce and William James were members of the university’s ‘Metaphysical Club’, and both came to believe that many of the disputes taking place between its members were empty concerns. In response, the two began to form a ‘Pragmatic Method’ that aimed to dissolve seemingly endless metaphysical disputes by revealing that there was nothing to argue about in the first place.
How it came to be
Pragmatism is best understood as a school of thought born from a rejection of metaphysical thinking and the traditional philosophical pursuits of truth and objectivity. The Socratic and Platonic theories that form the basis of a large portion of Western philosophical thought aim to find and explain the “essences” of reality and uncover truths that are believed to be obscured from our immediate senses.
This Platonic aim for objectivity, in which knowledge is taken to be an uncovering of truth, is one which would have been shared by many members of Peirce and James’ ‘Metaphysical Club’. In one of his lectures, James offers an example of a metaphysical dispute:
A squirrel is situated on one side of a tree trunk, while a person stands on the other. The person quickly circles the tree hoping to catch sight of the squirrel, but the squirrel also circles the tree at an equal pace, such that the two never enter one another’s sight. The grand metaphysical question that follows? Does the man go round the squirrel or not?
Seeing his friends ferociously arguing for their distinct positions led James to suggest that the correctness of any position simply turns on what someone practically means when they say ‘go round’. In this way, the answer to the question has no essential, objectively correct response. Instead, the correctness of the response is contingent on how we understand the relevant features of the question.
Truth and reality
Metaphysics often talks about truth as a correspondence to or reflection of a particular feature of “reality”. In this way, the metaphysical philosopher takes truth to be a process of uncovering (through philosophical debate or scientific enquiry) the relevant feature of reality.
On the other hand, pragmatism is more interested in how useful any given truth is. Instead of thinking of truth as an ultimately achievable end where the facts perfectly mirror some external objective reality, pragmatism regards truth as functional or instrumental (James), or as the goal of inquiry on which communal understanding converges (Peirce).
Take gravity, for example. Pragmatism doesn’t view it as true because it’s the ‘perfect’ understanding and explanation for the phenomenon, but it does view it as true insofar as it lets us make extremely reliable predictions and it is where vast communal understanding has landed. It’s still useful and pragmatic to view gravity as a true scientific concept even if in some external, objective, all-knowing sense it isn’t the perfect explanation or representation of what’s going on.
In this sense, and unlike on traditional views, truth is capable of changing and is contextually contingent. Pragmatism argues that what is considered ‘true’ may shift or multiply when new groups come along with new vocabularies and new ways of seeing the world.
To reconcile these constantly changing states of language and belief, Peirce constructed a ‘Pragmatic Maxim’ to act as a method by which thinkers can clarify the meaning of the concepts embedded in particular hypotheses. One formulation of the maxim is:
Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object.
In other words, Peirce is saying that the disagreement in any conceptual dispute should be describable in a way which impacts the practical consequences of what is being debated. Pragmatic conceptions of truth take this commitment to practicality seriously. Richard Rorty, who is considered a neopragmatist, writes extensively on a particular pragmatic conception of truth.
Rorty argues that the concept of ‘truth’ is not dissimilar to the concept of ‘God’, in the way that there is very little one can say definitively about God. Rorty suggests that rather than aiming to uncover truths of the world, communities should instead attempt to garner as much intersubjective agreement as possible on matters they agree are important.
Rorty wants us to stop asking questions like, ‘Do human beings have inalienable human rights?’, and begin asking questions like, ‘Should we work towards obtaining equal standards of living for all humans?’. The first question is at risk of leading us down the garden path of metaphysical disputes in ways the second is not. As the pragmatist is concerned with practical outcomes, questions which deal in ‘shoulds’ are more aligned with positing future directed action than those which get stuck in metaphysical mud.
Perhaps the pragmatists simply want us to ask ourselves: Is the question we’re asking, or hypothesis that we’re posing, going to make a useful difference to addressing the problem at hand? Useful, as Rorty puts it, is simply that which gets us more of what we want, and less of what we don’t want. If what we want is collective understanding and successful communication, we can get it by testing whether the questions we are asking get us closer to that goal, not further away.