Ethics Explainer: Shame

Flushed cheeks, lowered gaze and an incessant voice in your head criticising your very being. 

Imagine you’re invited to two different events by different friends. You decide to go to one over the other, but instead of telling your friend the truth, you pretend you’re sick. At first, you might be struck with a bit of guilt for lying to your friend. Then, afterwards, the friend you lied to sees photos of you at the other event and confronts you about it.  

In situations like this, something other than guilt might creep in. You might start to think it’s more than just a mistake; that this lie is a symptom of a larger problem: that you’re a bad, disrespectful person who doesn’t deserve to be invited to these things in the first place. This is the moral emotion of shame. 

Guilt says, “I did something bad”, while shame whispers, “I am bad”.

Shame is a complicated emotion. It’s most often characterised by feelings of inadequacy, humiliation and self-consciousness in relation to ourselves, others or social and cultural standards, sometimes resulting in a sense of exposure or vulnerability, although many philosophers disagree about which of these are necessary aspects of shame. 

One approach to understanding shame is through the lens of self-evaluation, which says that shame arises from a discrepancy between self-perception and societal norms or personal standards. According to this view, shame emerges when we perceive ourselves as falling short of our own expectations or the expectations of others – though it’s unclear to what extent internal expectations can be separated from social expectations or the process of socialisation. 

Other approaches lean more heavily on our appraisal of social expectations and our perception of how we are viewed by others, even imaginary others. These approaches focus on the arguably unavoidably interpersonal nature of shame, viewing it as a response to social rejection or disapproval.  

This social aspect is such a strong part of shame that it can persist even when we’re alone. One way to illustrate this is to draw a parallel between shame and embarrassment. Imagine you’re on an empty street and you trip over, sprawling onto the path. If you’re not immediately overcome with annoyance or rage, you’ll probably be embarrassed. 

But there’s no one around to see you, so why? 

Similarly, taking the example we began with, imagine instead that no one ever found out that you lied about being sick. It’s possible you might still feel ashamed. 

In both of these cases, you’re usually reacting to an imagined audience – you might be momentarily imagining what it would feel like if someone had witnessed what you did, or you might have a moment of viewing yourself from the outside, a second of heightened self-awareness. 

Many philosophers who take this social position also see shame as a means of social control – notably among them is Martha Nussbaum, known for her academic career highlighting the importance of emotions in philosophy and life.  

Nussbaum argues that shame is very often ‘normatively distorted’: because shame is reactive to social norms, we often end up internalising societal prejudices or unjust beliefs, leading us to feel ashamed of things that should not be a source of shame. For example, people often feel ashamed of their race, gender, sexual orientation, or disability due to societal stigma and discrimination. 

Where shame can go wrong

The idea of shame as a prohibitive and often unjust feeling is a sentiment shared by many who work with domestic violence and sexual assault survivors, who note that this distortive nature of shame is what prevents many women from coming forward to report.   

Even in cases where shame seems to be an appropriate response, it often still causes damage. At World Without Rape, a session at the 2022 Festival of Dangerous Ideas, panellist and journalist Jess Hill described an advertisement she once saw: 

“…a group of male friends call out their mate who was talking to his wife aggressively on the phone. The way in which they called him out came from a place of shame, and then the men went back to having their beers like nothing happened.” Hill encourages us to think: where will the man in the ad take his shame with him at the end of the night? It will likely go home with him, perpetuating a cycle of violence. 

Likewise, co-panellist and historian Joanna Bourke noted something similar: “rapists have extremely high levels of abuse and drug addictions because they actually do feel shame”. Neither of these situations seem ‘normatively distorted’ in Nussbaum’s sense, and yet they still seem to go wrong. Bourke and other panellists suggested that what is happening here is not necessarily a failing of shame, but a failing of the social processes surrounding it.  

Shame opens us to vulnerability, but to sit with vulnerability and reflect requires us to be open to very difficult conversations.

If the social framework for these conversations isn’t set up, we end up either with unjust shame, or with just shame that, left unsupported, still ultimately manifests in further destruction. 

However, this nuance is far from intuitive. While people are saddened by the idea of victims feeling shame, they often feel righteous in their assertions that perpetrators of crimes or transgressors of social norms should feel shame, and that a lack of shame is what causes the shameful behaviour in the first place. 

Shame certainly has the potential to be a force for good, whether it reminds us of moral standards or motivates us to abide by them as we try to avoid it. But it’s important to remain aware that shame alone is often not enough to define and maintain the ethical playing field. 


Ethics Explainer: Moral hazards

When individuals are able to avoid bearing the costs of their decisions, they can be inclined towards more risky and unethical behaviour.

Sailing across the open sea in a tall ship laden with trade goods is a risky business. All manner of misfortune can strike, from foul weather to uncharted shoals to piracy. Shipping businesses in the 19th century knew this only too well, so when the budding insurance industry started offering their services to underwrite the ships and cargo, and cover the costs should they experience misadventure, they jumped at the opportunity. 

But the insurance companies started to notice something peculiar: insured ships were more likely to meet with misfortune than ships that were uninsured. And it didn’t seem to be mere coincidence. Instead, it turned out that shipping companies covered by insurance tended to invest less in safety and were more inclined to make risky decisions, such as sailing into more dangerous waters to save time. After all, they had the safety net of insurance to bail them out should anything go awry. 

Naturally, the insurance companies were not impressed, and they soon coined a term for this phenomenon: “moral hazard”.  

Risky business

Moral hazard is usually defined as the propensity for the insured to take greater risks than they might otherwise take. So the owners of a building insured against fire damage might be less inclined to spend money on smoke alarms and extinguishers. Or an individual who insures their car against theft might be less inclined to invest in a more reliable car alarm. 

But it’s a concept that has applications beyond just insurance. 

Consider the banks that were bailed out following the 2008 collapse of the subprime mortgage market in the United States. Many were considered “too big to fail”, and it seems they knew it. Their belief that the government would bail them out rather than let them collapse gave the banks’ executives a greater incentive to take riskier bets. And when those bets didn’t pay off, it was the public that had to foot much of the bill for their reckless behaviour. 

There is also evidence that the existence of government emergency disaster relief, which helps cover the costs of things like floods or bushfires, might encourage people to build their homes in more risky locations, such as in overgrown bushland or coastal areas prone to cyclone or flood. 

What makes moral hazards “moral” is that they allow people to avoid taking responsibility for their actions. If they had to bear the full cost of their actions, then they would be more likely to act with greater caution. Things like insurance, disaster relief and bank bailouts all serve to shift the costs of a risky decision from the shoulders of the decision-maker onto others – sometimes placing the burden of that individual’s decision on the wider public.  
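The incentive shift can be made concrete with a little expected-cost arithmetic. Here’s a minimal sketch in the spirit of the shipping example above; every figure in it is an invented assumption, not data:

```python
# Toy expected-cost comparison for a risky shortcut. All figures are
# invented for illustration.
p_loss_safe, p_loss_risky = 0.01, 0.05   # chance of losing ship and cargo
cargo_value = 100_000                    # value of ship and cargo
time_saved_value = 3_000                 # what the shortcut is worth

def expected_loss(p_loss: float, insured: bool) -> float:
    """Expected loss borne by the shipping company itself."""
    loss_borne = 0 if insured else cargo_value  # the insurer absorbs the loss
    return p_loss * loss_borne

# Uninsured: the shortcut adds $4,000 of expected loss for $3,000 of benefit.
extra = expected_loss(p_loss_risky, False) - expected_loss(p_loss_safe, False)
print(extra > time_saved_value)  # True: caution pays

# Insured: the extra expected loss lands on the insurer, so the shortcut "pays".
extra = expected_loss(p_loss_risky, True) - expected_loss(p_loss_safe, True)
print(extra > time_saved_value)  # False: the risk now looks attractive
```

Note that the risk itself hasn’t changed; only who bears its cost has, and that alone is enough to flip the decision.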

Perverse incentives

While the term “moral hazard” is typically restricted to examples involving insurance, there is a general principle at work that applies across many domains of life. If we put people into a situation where they are able to offload the costs of their decisions onto others, then they are more inclined to entertain risks that they would otherwise avoid, or to engage in unethical behaviour. 

Consider a salesperson working for a business they know will be closing in the near future. They might be more inclined to sell an inferior or faulty product to a customer, knowing that they won’t have to worry about dealing with warranty claims.  

This means there’s a double edge to moral hazards. One edge is borne by the individual, who has to resist the opportunity to shirk their personal responsibility. The other is borne by those who create the circumstances that give rise to the moral hazard in the first place. 

Consider a business that has a policy saying the last security guard to check whether the back door is locked is held responsible if there is a theft. That might give security guards an incentive to not check the back door as often, thus decreasing the chance that they are the last one to check it, but increasing the chance of theft. 

Insurance companies, governments and other decision-makers need to ensure that the policies and systems they put in place don’t create perverse incentives that steer people towards reckless or unethical behaviour. And if they are unable to eliminate moral hazards, they need to put in place other policies that provide oversight and accountability for decision making, and punish those who act unethically. 

Few systems or processes will be perfect, and we always require individuals to exercise their ethical judgement when acting within them. But the more we can avoid creating the conditions for moral hazards, the fewer incentives we’ll create for people to act unethically. 


Ethics Explainer: Altruism

Amelia notices her elderly neighbour struggling with their shopping and lends them a hand. Mo decides to start volunteering for a local animal shelter after seeing a ‘help wanted’ ad. Alexis has been donating blood twice a year since they heard it was in such short supply.  

These are all examples of behaviours that put the well-being of others first – otherwise known as altruism.  

Altruism is a principle and practice that concerns the motivation and desire to positively affect another being for their own sake. Amelia’s act is altruistic because she wishes to alleviate some suffering from her neighbour, Mo’s because he wishes to do the same for the animals and shelter workers, and Alexis’ because they wish to do the same for dozens of strangers. 

Crucially, motivation is key to altruism.  

If Alexis only donates blood because they really want the free food, then they’re not acting altruistically. Even though the blood is still being donated, even though lives are still being saved, even though the act itself is still good. If their motivation comes from self-interest alone, then the act lacks the other-directedness or selflessness of altruism. Likewise, if Mo’s motivation actually comes from wanting to look good to his partner, or if Amelia’s motivation comes from wanting to be put in her neighbour’s will, their actions are no longer altruistic. 

This is because altruism is characterised as the opposite of selfishness. Rather than prioritising themself, the altruist will be concerned with the well-being of others. However, actions do remain altruistic even if there are mixed motives.  

Consider Amelia again. She might truly care for her elderly neighbour. Maybe it’s even a relative or a good family friend. Nevertheless, part of her motivation for helping might also be the potential to gain an inheritance. While this self-interest seems at odds with altruism, so long as her altruistic motive (genuine care and compassion) also remains then the act can still be considered altruistic, though it is sometimes referred to as “weak” altruism. 

Altruism can (and should) also be understood separately from self-sacrifice. Altruism needn’t be self-sacrificial, though it is often thought of in that way. Altruistic behaviours can often involve little or no effort and still benefit others, like someone giving away their concert ticket because they can no longer attend.  

How much is enough? 

There is a general idea that everyone should be altruistic in some ways at some times; though it’s unclear to what extent this is a moral responsibility. 

Aristotle, in his discussions of eudaimonia, speaks of loving others for their own sake. So, it could be argued that in pursuit of eudaimonia, we have a responsibility to be altruistic at least to the extent that we embody the virtues of care and compassion.  

Another more common idea is the Golden Rule: treat others as you would like to be treated. Although this maxim, or variations of it, is often related to Christianity, it actually dates at least as far back as Ancient Egypt and has arisen in countless different societies and cultures throughout history. While there is a hint of self-interest in the reciprocity, the Golden Rule ultimately encourages us to be altruistic by appealing to empathy. 

We can find this kind of reasoning in other everyday examples as well. If someone gives up their seat for a pregnant person on a train, it’s likely that they’re being altruistic. Part of their reasoning might be similar to the Golden Rule: if they were pregnant, they’d want someone to give up a seat for them to rest.  

Common altruistic acts often occur because, consciously or unconsciously, we empathise with the position of others. 

Effective Altruism 

So far, we have been describing altruism and some of the concepts that steer us toward it. However, there is one ethical theory with particularly strong things to say about our altruistic obligations: consequentialism, which is concerned with the outcomes of our actions.  

This focus on outcomes can lead to arguments that altruism is a moral obligation in many circumstances, especially when the actions come at little or no cost to us, since the outcomes are inherently positive.  

For example, Australian philosopher Peter Singer has written extensively on our ethical obligations to donate to charity. He argues that most people should help others because most people are in a position where they can do a lot for significantly less fortunate people with relatively little effort. This might look different for different people – it could be donating clothes, giving to charity, volunteering, signing petitions. Whatever it is, the type of help isn’t necessarily demanding (donating clothes) and can be proportional (donating relative to your income).  

One philosophical and social movement that heavily emphasises this consequentialist outlook is effective altruism, co-founded by Singer and philosophers Toby Ord and Will MacAskill. 

The effective altruist’s argument is that it’s not good enough just to be altruistic; we must also make efforts to ensure that our good deeds are as impactful as possible through evidence-based research and reasoning. 

Stemming from this empirical foundation, the movement takes a seemingly radical stance on impartiality and the extent of our ethical obligations to help others. Much of this reasoning mirrors a principle outlined by Singer in his 1972 article, “Famine, Affluence, and Morality”:  

“If it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do it.” 

This seems like a reasonable statement to many people, but effective altruists argue that what follows from it is much more than our day-to-day incidental kindness. What is morally required of us is much stronger, given most people’s relative position to the world’s worst-off. For example, Toby Ord uses this kind of reasoning to encourage people to commit to donating at least 10% of their income to charity through his organisation “Giving What We Can”.  

Effective altruists generally also encourage prioritising the interests of future generations and other sentient beings, like non-human animals, as well as emphasising the need to prioritise charity in efficient ways, which often means donating to causes that seem distant or removed from the individual’s own life. 

While reasons for and extent of altruistic behaviour can vary, ethics tells us that it’s something we should be concerned with. Whether you’re a Platonist who values kindness, or a consequentialist who cares about the greater good, ethics encourages us to think about the role of altruism in our lives and consider when and how we can help others.  


Ethics Explainer: Cultural Pluralism

Imagine a large, cosmopolitan city, where people from uncountable backgrounds and with numerous beliefs all thrive together. People embrace different cultural traditions, speak varying languages, enjoy countless cuisines, and educate their children on diverse histories and practices.

This is the kind of pluralism that most people are familiar with, but a diverse and culturally integrated area like this is specifically an example of cultural pluralism.

Pluralism in a general sense says there can be multiple perspectives or truths that exist simultaneously, even if some of those perspectives are contradictory. It’s contrasted with monism, which says only one kind of thing exists; dualism, which says there are only two kinds of things (for example, mind and body); and nihilism, which says that no things exist.

So, while pluralism more broadly refers to a diversity of views, perspectives or truths, cultural pluralism refers specifically to a diversity of cultures that co-exist – ideally harmoniously and constructively – while maintaining their unique cultural identities.

Sometimes an entire country can be considered culturally pluralistic, and in other places there may be culturally pluralistic hubs (like states or suburbs where there is a thriving multicultural community within a larger more broadly homogenous area).

On the other end of the spectrum is cultural monism, the idea that a certain area or population should have only one culture. Culturally monistic places (for example, Japan or North Korea) rely on an implicit or explicit pressure for others to assimilate. Whereas assimilation involves the homogenisation of culture, pluralism encourages diversity, inviting people of different ethnic groups, backgrounds, religions, practices and beliefs to come together and share in their differences.

A pluralistic society is more welcoming and supportive of minority cultures because people don’t feel pressured to hide or change their identities. Instead, diverse experiences are recognised as opportunities for learning and celebration. This invites travel and immigration, and translates into better mental health for migrants, the promotion of harmony and acceptance of others, and enhanced creativity, as people are exposed to perspectives and experiences outside of their usual remit.

We also know what the alternative is in many cases. Australia has a dark history of assimilation practices, a symptom of racist, colonial perspectives that saw the decimation of First Nations people and their cultures. Cultural pluralism is one response to this sort of cultural domination that has been damaging throughout history and remains so in many places today.

However, there are plenty of ethical complications that arise in the pursuit of cultural plurality.

For example, sociologist Robert D. Putnam published research in 2007 on the negative short-to-medium-term effects of ethnically diverse neighbourhoods. He found that, on average, trust, altruism and community cooperation were lower in these neighbourhoods, even between those of the same or similar ethnicities.

While Putnam denied that his findings were anti-multicultural, and argued that there are several positive long-term effects of diverse societies, the research does indicate some of the risks associated with cultural pluralism. It can take a great deal of effort and social infrastructure to build and maintain diverse communities, and if this fails or is done poorly, it can cause fragmentation of cultural communities.

This also accords with an argument made by journalist David Goodhart that people are generally divided into “Anywheres” (people with a mobile identity) and “Somewheres” (people, usually outside of urban areas, who have marginalised, long-term, location-based identities). This incongruity, he says, accounts for things like Brexit and the election of Donald Trump, because they speak to the Somewheres who are threatened by changes to their status quo. Pluralism, Goodhart notes, risks overlooking the discomfort these communities face if they are not properly supported and informed.

Other issues with pluralism include the prioritisation of competing cultural values and traditions. What if one person’s culture is fundamentally intolerant of another person’s culture? This is something we see especially with cultures organised around or heavily influenced by religion. For example, Christianity and Islam are often at odds with many different cultures around issues of sexual preference and gendered rights and responsibilities.

If we are to imagine a truly culturally pluralistic society, how do we ethically integrate people who are intolerant of others?

Pluralism as a cultural ideal also has direct implications for things like politics and law, raising the age-old question about the relationship between morality and the law. If we want a pluralistic society generally, how do the variations in beliefs, values and principles translate into law? Is it better to have a centralised legal system or do we want a legal plurality that reflects the diversity of the area?

This does already exist in some capacity – many countries have Islamic courts that enforce Sharia law for their communities in addition to the overarching governmental law. Parallel legal recognition also exists in some colonised countries, where parts of Indigenous law have been recognised – in Australia, for example, with the Mabo decision.

Another feature of genuine cultural pluralism that has huge ethical implications and considerations is diversity of media. This is the idea that there should be diverse ownership of media (that is, a media system that is not monopolised) and diverse representation in media (that is, media that presents varying perspectives and analyses).

Firstly, this ensures that media, especially news media, stays accountable through comparison and competition, rather than a select powerful few being able to widely disseminate their opinions unchecked. Secondly, it fosters a greater sense of understanding and acceptance by exposing people to perspectives, experiences and opinions that they might otherwise be ignorant or reflexively wary of. Thirdly, as a result, it reduces the risk that media, as a powerful disseminator of culture, could end up creating or reinforcing a monoculture.

While cultural pluralism is often seen as an obviously good thing in western liberal societies, it isn’t without substantial challenges. In the pursuit of tolerance, acceptance and harmony, we must be wary of fragmenting cultures and ensure that diverse communities have adequate social supports to thrive.


Ethics Explainer: Normativity

Have you ever spoken to someone and realised that they’re standing a little too close for comfort?

Personal space isn’t something we tend to actively think about; it’s usually an invisible and subconscious expectation or preference. However, when someone violates our expectations, those expectations suddenly become very clear. If someone stands too close to you while talking, you might become uncomfortable or irritated. If a stranger sits right next to you in a public place when there are plenty of other seats, you might feel annoyed or confused.

That’s because personal space is an example of a norm. Norms are communal expectations, taken up by various populations and usually serving shared values or principles, that direct us towards certain behaviours. For example, the norm of personal space is an expectation that looks different depending on where you are.

In some countries, the norm is to keep your distance when talking to strangers but to stand very close when talking to close friends, family or partners. In other countries, everyone can be relatively close, and in others still, not even close relationships should invade your personal space. This is an example of a norm that we follow subconsciously.

We don’t tend to notice what our expectation even is until someone breaks it, at which point we might think they’re disrespecting personal or social boundaries.

Norms are an embodiment of a phenomenon called normativity, which refers to the tendency of humans and societies to regulate or evaluate human conduct. Normativity pervades our daily lives, influencing our decisions, behaviours and societal structures. It encompasses a range of principles, standards and values that guide human actions and shape our understanding of what’s considered right or wrong, good or bad.

Norms can be explicit or implicit, originating from various sources like cultural traditions, social institutions, religious beliefs, or philosophical frameworks. Often norms are implicit because they are unspoken expectations that people absorb as they experience the world around them.

Take, for example, the norms of handshakes, kisses, hugs, bows, and other forms of greeting. Depending on your country, time period, culture, age, and many other factors, some of these will be more common and expected than others. Regardless, though, each of them has a purpose or function, like showing respect, affection or familiarity.

While these might seem like trivial examples, norms have historically played a large role in more significant things, like oppression. Norms are effectively social pressures, so conformity is important to their effect – especially in places or times where the flouting of norms results in some kind of public or social rebuke.

So, norms can sometimes be to the detriment of people who don’t feel their preferences or values reflected in them, especially when conformity itself is a norm. One of the major changes in western liberal society has been the loosening of norms – the ability for people to live more authentically as themselves.

Normative Ethics

Normativity is also an important aspect of ethical philosophy. Normative ethics is the philosophical inquiry into the nature of moral judgments and the principles that should govern human actions. It seeks to answer fundamental questions like “What should I do?”, “How should I live?” and “Which norms should I follow?”. Normative ethical theories provide frameworks for evaluating the morality of specific actions or ethical dilemmas.

Some normative ethical theories include:

  • Consequentialism, which says we should determine moral value based on the consequences of actions.
  • Deontology, which says we should determine moral value by looking at an action’s conformity to consistent duties or obligations.
  • Virtue ethics, which focuses on alignment with various virtues (like honesty, courage, compassion, respect, etc.) with an emphasis on developing dispositions that cultivate these virtues.
  • Contractualism, informed by the idea of the social contract, which says we should act in ways and for reasons that would be agreed to by all reasonable people in the same circumstances.
  • Feminist ethics, or the ethics of care, which says that we should understand and challenge the way that gender has operated to inform historical ethical beliefs and how it still affects our moral practices today.

Normativity extends beyond individual actions and plays a significant role in shaping societal norms, as we saw earlier, but also laws and policies. Norms influence social expectations, moral codes and legal frameworks, guiding collective behaviour and fostering social cohesion. Sometimes, as in the case of traffic laws, social norms and laws work in a circular way, reinforcing each other.

However, our normative views aren’t static or unchangeable.

Over time, societal norms and values evolve, reflecting shifts in normative perspectives (cultural, social and philosophical). Often, we see shifting social norms culminate in the changing of outdated laws that accurately reflected the normative views of their time, but no longer do.

While it’s ethically significant that norms shift over time and adapt to their context, it’s important to note that these changes often happen slowly. Eventually, changes in norms influence changes in laws, and this can often happen even more slowly, as we have seen with homosexuality laws around the world.


Ethics Explainer: Nihilism

“If nothing matters, then all the pain and guilt you feel for making nothing of your life goes away.” – Jobu Tupaki, Everything Everywhere All At Once 

Do our lives matter? 

Nihilism is a school of philosophical thought proposing that our existence fundamentally lacks inherent meaning. It rejects various aspects of human existence that are generally accepted and considered fundamental, like objective truth, moral truth and the value and purpose of life. Its origin is the Latin word ‘nihil’, which means ‘nothing’.  

The most common branches of nihilism are existential and moral nihilism, though there are many others, including epistemological, political, metaphysical and medical nihilism. 

Existential nihilism  

In popular use, nihilism usually refers to existential nihilism, a precursor to existentialist thought. This is the idea that life has no inherent meaning, value or purpose, and because of this it’s often linked with feelings of despair or apathy. Nihilists in media are usually portrayed as moody, brooding or radical types who have decided that we are insignificant specks floating around an infinite universe, and that therefore nothing matters.  

Nihilist ideas date as far back as the Buddha, though nihilism’s rise in western literature began in the early 19th century. This shift was largely a response to the diminishing moral authority of the church (and religion at large) and the rise of secularism and rationalism. This rejection led to the view that the universe had no grand design or purpose, that we are all simply cogs in the machine of existence. 

Though he wasn’t a nihilist himself, Friedrich Nietzsche is the poster-child for much of contemporary nihilism, especially in pop culture and online circles. Nietzsche wrote extensively on it in the late 19th century, speaking of the crisis we find ourselves in when we realise that the world lacks the intrinsic meaning or value that we want or believed it to have. This is ultimately something that he wanted us to overcome.  

He saw humans responding to this crisis in two ways: passive or active nihilism.  

For Nietzsche, passive nihilists are those who resign themselves to the meaninglessness of life, slowly separating themselves from their own will or desires to minimise the suffering they face from the random chaos of the world. 

In media, this kind of pessimistic nihilism is sometimes embodied by characters who act on it in a destructive way. For example, the antagonist Jobu Tupaki in Everything Everywhere All At Once comes to this realisation through her multi-dimensional awareness, which convinces her that, because of the infinite nature of reality, none of her choices matter, and so she attempts to destroy herself to escape the insignificance and meaninglessness she feels. 

Jobu Tupaki, Everything Everywhere All At Once (2022)

Active nihilists instead see nihilism as a freeing condition, revealing a world where they are emboldened to create something new on top of the destruction of the old values and ways of thinking.  

Nietzsche’s idea of the active nihilist is the Übermensch (“superman”), a person who overcomes the struggle of nihilism by working to create their own meaning in the face of meaninglessness. They see the absurdity of life as something to be embraced, giving them the ability to live in a way that enforces their own values and “levels the playing field” of past values.  

Moral nihilism

Existential nihilism often gives way to moral nihilism: the idea that morality doesn’t exist, that no moral choice is preferable to any other. After all, if our lives don’t have intrinsic meaning, if objective values don’t exist, then by what standard can we call actions right or wrong? We normally see this kind of nihilism embodied by anarchic characters in media. 

An infamous example is the Joker from the Batman franchise. Especially in renditions like The Dark Knight (2008) and Joker (2019), the Joker is portrayed as someone whose expectations of the world have failed him, whose torturous existence has led him to believe that nothing matters, the world doesn’t care, and that in the face of that, we shouldn’t care about anything or anyone either. In his words, “everything burns” in the end, so he sees no problem in hastening that destruction and ultimately the destruction of himself. 

The Joker, 2019

“Now comes the part where I relieve you, the little people, of the burden of your useless lives.”

The Joker epitomises the populist understanding of nihilism and one of the primary ethical risks of this philosophical world view. For some people, viewing their lives as lacking inherent meaning or value causes a psychological spiral into apathy.  

This spiral can cause people to become self-destructive, reclusive, suicidal and otherwise hasten towards “nothingness”. In others, it can cause outwardly destructive actions because of their perception that since nothing matters in some kind of objective sense, they can do whatever they want (think American Psycho).  

Nihilism has particularly flourished in many online subcultures, fuelling the apathy of edgelords towards the plights of marginalised populations and often resulting in a tendency towards verbal and physical violence. One of the major challenges of nihilism, historically and today, is that it’s not obviously false. This is where we rely on philosophy to be able to justify why any morality should exist at all. 

Where to go from here

A common thread runs through many nihilist and existentialist writers on what we should do in the face of inherent meaninglessness: create meaning ourselves. 

Existentialists like Simone de Beauvoir and Jean-Paul Sartre talk about the importance of recognising the freedom that this kind of perspective gives us. And, equally, the importance of making sure that we make meaning for ourselves and for others through our lives. 

For some people, that might be a return to religion. But there are plenty of other ways to create meaning in life: focusing on what’s subjectively meaningful to you or those you care about and fully embracing those things. Existence doesn’t need to have intrinsic meaning for us to care. 


Thought experiment: "Chinese room" argument

If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?

Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.

Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.

But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.

“The Chinese room”

Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.

Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you will be the book of instructions.

Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.

You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.

This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.
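In computational terms, the room is just a lookup procedure: match the incoming symbols against the rules and emit whatever the rules dictate. Here’s a toy sketch of that procedure; the two-entry rule book is invented for illustration, and a rule book that could actually pass as fluent would be astronomically larger:

```python
# A toy version of the room: symbols in, symbols out, no understanding.
# This two-entry dictionary stands in for the filing cabinets and the
# book of instructions; the entries are invented for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm well, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's lovely."
}

def operator(symbols: str) -> str:
    """Follow the instructions: find the matching rule, return its output."""
    # The symbols are opaque tokens to the operator; nothing is interpreted.
    return RULE_BOOK.get(symbols, "？")  # no matching rule: slide back a "?"

print(operator("你好吗？"))  # a fluent-looking reply, produced without understanding
```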

Functionalism and Strong AI

Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.

Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does.

This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.

Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.

Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.

ChatGPT room

While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can see it made real whenever we log into ChatGPT. Large language models like ChatGPT are incredibly sophisticated versions of the room: the filing cabinets correspond to the corpus of text upon which they’re trained, and the book of instructions to the probabilities used to decide which character or word to display next. 
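To make “the probabilities used to decide which word to display next” concrete, here’s a minimal sketch. The probability table is invented; a real LLM computes such distributions on the fly from billions of learned parameters rather than storing them in a table:

```python
import random

# A toy next-word picker: given the preceding words (the "context"),
# sample the next word in proportion to its assigned probability.
# These numbers are made up for illustration.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
}

def pick_next_word(context: tuple) -> str:
    """Sample the next word from the context's probability distribution."""
    candidates = NEXT_WORD_PROBS[context]
    words, weights = list(candidates), list(candidates.values())
    return random.choices(words, weights=weights)[0]

print(pick_next_word(("the", "cat")))  # e.g. "sat" - selected, never understood
```

Like the person in the room, the procedure produces plausible output while nothing in it “knows” what a cat is.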

So however much it feels like ChatGPT – or a future, more capable LLM – understands what it’s saying, if we believe that the person in the Chinese room doesn’t understand Chinese, and that LLMs operate in much the same way as the room, then we must conclude that they don’t really understand either. 

This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty – then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things. 

An LLM might still be able to express ethical statements and follow prescribed ethical guidelines imposed by its creators – as has been the case with the creators of ChatGPT limiting its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room. 


Ethics Explainer: Moral injury

Moral injury occurs when we are forced to violate our deepest ethical values and it can have a serious impact on our wellbeing.

In the 1980s, the American psychiatrist Jonathan Shay was helping veterans of the war in Vietnam deal with the traumas they had experienced. He noticed that many of his patients were experiencing high levels of despair accompanied by feelings of guilt and shame, along with a decline of trust in themselves and others. This led to them disengaging from their friends, family and society at large, accompanied by episodes of suicidality and interpersonal violence. 

Shay realised that this was not posttraumatic stress disorder (PTSD); this was something different. Shay saw that these veterans were not just traumatised by what had happened to them, they were ‘wounded’ by what they had done to others. He called this new condition “moral injury,” describing it as a “soul wound inflicted by doing something that violates one’s own ethics, ideals, or attachments”. 

The “injury” is to our very self-conception as ethical beings, which is a core aspect of our identity. As Shay stated about his patients, moral injury “deteriorates their character; their ideals, ambitions, and attachments begin to change and shrink.”  

Moral injury is, at its heart, an ethical issue. It is caused when we are faced with decisions or directives that force us to challenge or violate our most deeply held ethical values, like if a soldier is forced to endanger civilians or a nurse feels they can’t offer each of their patients the care they deserve due to staff shortages.  

Sometimes this ethical compromise can be caused by the circumstances people are placed in, like working in an organisation that is chronically under-resourced. Sometimes it can be caused by management expecting them to do something that goes against their values, like overlooking inappropriate behaviour among colleagues in the workplace in order to protect high performers or revenue generators. 

Symptoms

There are several common symptoms of moral injury. The first is guilt. This manifests as intense discomfort and hyper-sensitivity towards how others regard us, and can lead to irritability, denial or projection of negative feelings, such as anger, onto others. 

Guilt can tip over into shame, which is a form of intense negative self-evaluation or self-disgust. This is why shame sometimes manifests as stomach pains or digestive issues. Shame can be debilitating and demotivating, causing a negative spiral into despondency. 

Excessive guilt and shame can lead to anxiety, which is a feeling of fear that doesn’t have an obvious cause. Anxiety can cause distraction, irritability, fatigue, insomnia as well as body and muscle aches. 

Moral injury also challenges our self-image as ethical beings, sometimes leading us to lose trust in our own ability to do what is right. This can rob us of a sense of agency, causing us to feel powerless, become passive and despondent, and feel resigned to the forces that act upon us. It can also erode our own moral compass and cause us to question the moral character of others, which can further shake our sense that other people and society at large are guided by the ethical principles we value. 

The negative emotions and self-assessment that accompany moral injury can also cause us to withdraw from social or emotional engagement with others. This can involve a reluctance to interact socially as well as empathy fatigue, where we have difficulty or lack the desire to share in others’ emotions. 

Distinctions

Moral injury is often mistaken for PTSD or burnout, but they are different issues. Burnout is a response to chronic stress caused by unreasonable demands, such as relentless workloads, long hours and chronic under-resourcing. It can lead to emotional exhaustion and, in extreme cases, depersonalisation, where people feel detached from their lives and just continue on autopilot. But it’s possible to suffer from burnout even if you are not compromising your deepest ethical values; you might feel burnt out but still agree that the work you’re doing is worthwhile. 

PTSD is a response to witnessing or experiencing intense trauma or threat, especially mortal danger. It can be amplified if the individual survived the danger while those around them, especially close friends or colleagues, did not survive. This could be experienced following a round of poorly managed redundancies, where those who keep their jobs have survivor guilt. Thus, PTSD is typically a response to something that you have witnessed or experienced, whereas moral injury is related to something that you have done (or not been able to do) to others.  

Moral injury affects a wide range of industries and professions, from the military to healthcare to government and corporate organisations, and its impacts can be easily overlooked or mistaken for other issues. But with a greater awareness of moral injury and its causes, we’ll be better equipped to prevent and treat it. 

 

If you or someone you know is suffering from moral injury you can contact Ethi-call, a free and independent helpline provided by The Ethics Centre. Trained counsellors will talk you through the ethical dimension of your situation and provide resources to help understand it and to decide on the best course of action. To book a call visit www.ethi-call.com 

The Ethics Centre is a thought leader in assessing organisational cultural health and building leadership capability to make good ethical decisions. We have helped organisations across many industries deal with moral injury, burnout and PTSD. To arrange a confidential conversation contact the team at consulting@ethics.org.au. Or visit our consulting page to learn more. 


Ethics Explainer: Longtermism

Longtermism argues that we should prioritise the interests of the vast number of people who might live in the distant future over those of the relatively few people alive today.

Do we have a responsibility to care for the welfare of people in future generations? Given the tremendous efforts people are making to prevent dangerous climate change today, it seems that many people do feel some responsibility to consider how their actions impact those who are yet to be born. 

But if you take this responsibility seriously, it could have profound implications. These implications are maximally embraced by an ethical stance called ‘longtermism,’ which argues we must consider how our actions affect the long-term future of humanity and that we should prioritise actions that will have the greatest positive impact on future generations, even if they come at a high cost today. 

Longtermism is a view that emerged from the effective altruism movement, which seeks to find the best ways to make a positive impact on the world. But where effective altruism focuses on making the current or near-future world as good as it can be, longtermism takes a much broader perspective. 

Billions and billions

The longtermist argument starts by asserting that the welfare of someone living a thousand years from now is no less important than the welfare of someone living today. This is similar to Peter Singer’s argument that the welfare of someone living on the other side of the world is no less ethically important than the welfare of your family, friends or local community. We might have a stronger emotional connection to those nearer to us, but we have no reason to preference their welfare over that of people more spatially or temporally removed from us. 

Longtermists then urge us to consider that there will likely be many more people in the future than there are alive today. Indeed, humanity might persist for many thousands or even millions of years, perhaps even colonising other planets. This means there could be hundreds of billions of people, not to mention other sentient species or artificial intelligences that also experience pain or happiness, throughout the lifetime of the universe.  

The numbers escalate quickly: if there’s even a 0.1% chance that our species colonises the galaxy and persists for a billion years, then the expected number of future people could be in the hundreds of trillions.  
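As a rough illustration of that expected-value arithmetic (the population figure below is an assumption chosen only to reproduce the “hundreds of trillions” above, not a number the argument depends on):

```python
# Expected value = probability of the scenario x people who would exist in it.
p_scenario = 0.001             # a 0.1% chance of galactic colonisation
people_in_scenario = 10 ** 17  # assumed population over a billion years

expected_future_people = p_scenario * people_in_scenario
print(f"{expected_future_people:,.0f}")  # 100,000,000,000,000 - 100 trillion
```

Even heavily discounted by uncertainty, the sheer scale of the multiplier is what gives the argument its force.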

The longtermist argument concludes that if we believe we have some responsibility to future people, and if there are many times more future people than there are people alive today, then we ought to prioritise the interests of future generations over the interests of those alive today.  

This is no trivial conclusion. It implies that we should make whatever sacrifices are necessary today to benefit those who might live many thousands of years in the future. This means doing everything we can to eliminate existential threats that might snuff out humanity, which would not only be a tragedy for those who die as a result of that event but also a tragedy for the many more people who were denied an opportunity to be born. It also means we should invest everything we can in developing technology and infrastructure to benefit future generations, even if that means our own welfare is diminished today. 

Not without cost

Longtermism has captured the attention and support of some very wealthy and influential individuals, such as Skype co-founder Jaan Tallinn and Facebook co-founder Dustin Moskovitz. Organisations such as 80,000 Hours also use longtermism as a framework to help guide career decisions for people looking to do the most good over their lifetime.  

However, it also has its detractors, who warn about it distracting us from present and near-term suffering and threats like climate change, or accelerating the development of technologies that could end up being more harmful than beneficial, like superintelligent AI.  

Even supporters of longtermism have debated its plausibility as an ethical theory. Some argue that it might promote ‘fanaticism,’ where we end up prioritising actions that have a very low chance of benefiting a very high number of people in the distant future rather than focusing on achievable actions that could reliably benefit people living today. 

Others question the idea that we can reliably predict the impacts of our actions on the distant future. It might be that even our most ardent efforts today ‘wash out’ into historical insignificance only a few centuries from now and have no impact on people living a thousand or a million years hence. Thus, we ought to focus on the near-term rather than the long-term. 

Longtermism is an ethical theory with real impact. It redirects our attention from those alive today to those who might live in the distant future. Some of the implications are relatively uncontroversial, such as suggesting we should work hard to prevent existential threats. But its bolder conclusions might be cold comfort for those who see suffering and injustice in the world today and would rather focus on correcting that than helping build a world for people who may or may not live a million years from now. 


Ethics Explainer: Truth & Honesty

How do we know we’re telling the truth? If someone asks you for the time, do you ever consider the accuracy of your response? 

In everyday life, truth is often thought of as a simple concept. Something is factual, false, or unknown. Similarly, honesty is usually seen as the difference between ‘telling the truth’ and lying (with some grey areas like white lies or equivocations in between). ‘Telling the truth’ is somewhat of a misnomer, though. Since honesty is mostly about sincerity, people can be honest without being accurate about the truth. 

In philosophy, truth is anything but simple and weaves itself into a host of other areas. In epistemology, for example, philosophers interrogate the nature of truth by looking at it through the lens of knowledge.  

After all, if we want to be truthful, we need to know what is true. 

Figuring that out can be hard, not just practically, but metaphysically.  

Theories of Truth

There are several competing theories that attempt to explain what truth is, the most popular of which is the correspondence theory. Correspondence refers to the way our minds relate to reality. On this view, a truth is a belief or statement that corresponds to how the world ‘really’ is, independent of our minds or perceptions of it. As popular as this theory is, it does prompt the question: how do we know what the world is like outside of our experience of it? 

Many people, especially scientists and philosophers, have to grapple with the idea that we are limited in our ability to understand reality. For every new discovery, there seems to be another question left unanswered. This being the case, the correspondence theory leads us to a problem of not being able to speak about things being true because we don’t have an accurate understanding of reality. 

Another theory of truth is the coherence theory. This states that truth is a matter of coherence within and between systems of beliefs. Rather than the truth of our beliefs relying on a relation to the external world, it relies on their consistency with other beliefs within a system.  

The strength of this theory is that it doesn’t depend on us having an accurate understanding of reality in order to speak about something being true. The weakness is that we can imagine several different comprehensive and cohesive systems of beliefs, and thus different people holding different ‘true’ beliefs that are impossible to adjudicate between. 

Yet another theory of truth is pragmatist, although there are a couple of varieties, as with pragmatism in general. Broadly, we can think of pragmatist truth as a more lenient and practical correspondence theory.  

For pragmatists, what the world is ‘really’ like only matters as far as it impacts the usefulness of our beliefs in practice.  

So, pragmatist truth is in a sense malleable; like the scientific method it’s closely linked with, it sees truth as a useful tool for understanding the world, but recognises that with new information and experimentation the ‘truth’ will change. 

Ethical aspects of truth and honesty 

Regardless of the theory of truth that you subscribe to, there are practical applications of truth that have a significant impact on how we behave ethically. One of these applications is honesty.  

Honesty, in a simple sense, is speaking what we wholeheartedly believe to be true.  

Honesty comes up a lot in classical ethical frameworks and, as with lots of ethical concepts, isn’t as straightforward as it seems. 

In Aristotelian virtue ethics, honesty permeates many other virtues, like friendship, but is also a virtue in itself that lies between habitual lying and boastfulness or callousness. So, a virtue ethicist might say a severe lack of honesty would result in someone who is untrustworthy or misleading, while too much honesty might result in someone who says unnecessary truthful things at the expense of people’s feelings. 

A classic example is a friend who asks you for your opinion on what they’re wearing. Let’s say you don’t think what they’re wearing is nice or flattering. You could be overly honest and hurt their feelings, you could lie and potentially embarrass them, or you could frame your honesty in a way that is moderate and constructive, like “I think this other colour/fit suits you better”.  

This middle ground is also often where consequentialism lands on these kinds of interpersonal truth dynamics because of its focus on consequences. Broadly, the truth is important for social cohesion, but consequentialism might tell us to act with a bit more or a bit less honesty depending on the individual situations and outcomes, like if the truth would cause significant harm. 

Deontology, on the other hand, following in the footsteps of Immanuel Kant, holds honesty as an absolute moral obligation. Kant was known to say that honesty was imperative even if a murderer was at your door asking where your friend was! 

Outside of the general moral frameworks, there are some interesting ethical questions we can ask about the nature of our obligations to truth. Do certain people or relations have a stronger right to the truth? For example, many people find it acceptable and even useful to lie to children, especially when they’re young. Does this imply age or maturity has an impact on our right to the truth? If the answer to this is that it’s okay in a paternalistic capacity, then why doesn’t that usually fly with adults?  

What about if we compare strangers to friends and family? Why do we intuitively feel that our close friends or family ‘deserve’ the truth from us, while someone off the street doesn’t?  

If we do have a moral obligation towards the truth, does this also imply an obligation to keep ourselves well-informed so that we can be truthful in a meaningful way? 

The nature of truth remains elusive, yet the way we treat it in our interpersonal lives is still as relevant as ever. Honesty is a useful and easier way of framing lots of conversations about truth, although it has its own complexities to be aware of, like the limits of its virtue.