Ethics explainer: Nihilism

“If nothing matters, then all the pain and guilt you feel for making nothing of your life goes away.” – Jobu Tupaki, Everything Everywhere All At Once 

Do our lives matter? 

Nihilism is a school of philosophical thought proposing that our existence fundamentally lacks inherent meaning. It rejects various aspects of human existence that are generally accepted and considered fundamental, like objective truth, moral truth and the value and purpose of life. Its origin is the Latin word ‘nihil’, which means ‘nothing’.  

The most common branches of nihilism are existential and moral nihilism, though there are many others, including epistemological, political, metaphysical and medical nihilism. 

Existential nihilism  

In popular use, nihilism usually refers to existential nihilism, a precursor to existentialist thought. This is the idea that life has no inherent meaning, value or purpose and, because of this, it is often linked with feelings of despair or apathy. Nihilists in media are usually portrayed as moody, brooding or radical types who have decided that we are insignificant specks floating around an infinite universe, and that therefore nothing matters.  

Nihilist ideas date back as far as the Buddha, though nihilism's rise in Western literature began in the early 19th century. This shift was largely a response to the diminishing moral authority of the church (and religion at large) and the rise of secularism and rationalism. This rejection led to the view that the universe had no grand design or purpose, and that we are all simply cogs in the machine of existence. 

Though he wasn't a nihilist himself, Friedrich Nietzsche is the poster-child for much of contemporary nihilism, especially in pop culture and online circles. Nietzsche wrote extensively on it in the late 19th century, speaking of the crisis we find ourselves in when we realise that the world lacks the intrinsic meaning or value that we want or believe it to have. This crisis is ultimately something that he wanted us to overcome.  

He saw humans responding to this crisis in two ways: passive or active nihilism.  

For Nietzsche, passive nihilists are those who resign themselves to the meaninglessness of life, slowly separating themselves from their own will or desires to minimise the suffering they face from the random chaos of the world. 

In media, this kind of pessimistic nihilism is sometimes embodied by characters who then act on it in a destructive way. For example, the antagonist Jobu Tupaki in Everything Everywhere All At Once comes to this realisation through her multi-dimensional awareness, which convinces her that, because of the infinite nature of reality, none of her choices matter. She then attempts to destroy herself to escape the insignificance and meaninglessness she feels. 

Jobu Tupaki, Everything Everywhere All At Once (2022)

Active nihilists instead see nihilism as a freeing condition, revealing a world where they are emboldened to create something new on top of the destruction of the old values and ways of thinking.  

Nietzsche’s idea of the active nihilist is the Übermensch (“superman”), a person who overcomes the struggle of nihilism by working to create their own meaning in the face of meaninglessness. They see the absurdity of life as something to be embraced, giving them the ability to live in a way that enforces their own values and “levels the playing field” of past values.  

Moral nihilism

Existential nihilism often gives way to moral nihilism, the idea that morality doesn't exist and that no moral choice is preferable to another. After all, if our lives don't have intrinsic meaning, if objective values don't exist, then by what standard can we call actions right or wrong? We normally see this kind of nihilism embodied by anarchic characters in media. 

An infamous example is the Joker from the Batman franchise. Especially in renditions like The Dark Knight (2008) and Joker (2019), the Joker is portrayed as someone whose expectations of the world have failed him, whose torturous existence has led him to believe that nothing matters and the world doesn't care, and that in the face of this we shouldn't care about anything or anyone either. In his words, "everything burns" in the end, so he sees no problem in hastening that destruction, and ultimately his own. 

The Joker, 2019

“Now comes the part where I relieve you, the little people, of the burden of your useless lives.”

The Joker epitomises the popular understanding of nihilism and one of the primary ethical risks of this philosophical worldview. For some people, viewing their lives as lacking inherent meaning or value causes a psychological spiral into apathy.  

This spiral can cause people to become self-destructive, reclusive, suicidal and otherwise hasten towards “nothingness”. In others, it can cause outwardly destructive actions because of their perception that since nothing matters in some kind of objective sense, they can do whatever they want (think American Psycho).  

Nihilism has particularly flourished in many online subcultures, fuelling the apathy of edgelords towards the plights of marginalised populations and often resulting in a tendency towards verbal and physical violence. One of the major challenges of nihilism, historically and today, is that it's not obviously false. This is where we rely on philosophy to justify why morality should exist at all. 

Where to go from here

A common thread runs through many of the nihilist and existentialist writers on what we should do in the face of inherent meaninglessness: create meaning ourselves. 

Existentialists like Simone de Beauvoir and Jean-Paul Sartre talk about the importance of recognising the freedom that this kind of perspective gives us and, equally, the importance of making meaning for ourselves and for others through our lives. 

For some people, that might be a return to religion. But there are plenty of other ways to create meaning in life: focusing on what’s subjectively meaningful to you or those you care about and fully embracing those things. Existence doesn’t need to have intrinsic meaning for us to care. 


Thought experiment: "Chinese room" argument

If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?

Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.

Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.

But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.

“The Chinese room”

Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.

Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you will be the book of instructions.

Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.

You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.

This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.
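The rote rule-following at the heart of the experiment can be sketched in a few lines of code, with a lookup table standing in for the filing cabinets and instruction book. The phrases and rules below are invented purely for illustration; the point is that the program matches and returns symbols without any grasp of what they mean.

```python
# A toy "Chinese room": rote lookup maps incoming symbols to outgoing ones.
# Nothing in this program represents what the symbols mean.

# Hypothetical "rule book": each entry pairs an incoming string of characters
# with the string to slide back under the door.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def room_operator(message: str) -> str:
    """Follow the instructions: find the matching rule and return its output.
    The operator never needs to know what either string means."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# From outside, the reply looks like understanding.
print(room_operator("你好吗？"))
```

To the questioner outside, the room answers fluently; inside, there is only pattern matching.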

Functionalism and Strong AI

Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.

Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does.

This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.

Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.

Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.

ChatGPT room

While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can all see the experiment made real whenever we log into ChatGPT. Large language models like ChatGPT are the Chinese room argument made real. They combine an incredibly sophisticated version of the filing cabinets (the corpus of text upon which they're trained) with the book of instructions (the probabilities used to decide which character or word to display next).
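That "probabilities for the next word" idea can be sketched with a toy bigram model: count which word follows which in a tiny corpus, then generate text by sampling from those counts. This is only a loose illustration, not how LLMs are actually built (they use neural networks trained on vast corpora), but the generate-a-likely-next-token loop is the same in spirit, and like the room's operator, the code manipulates tokens without understanding them.

```python
import random

# The "filing cabinet": for each word in a tiny corpus, record the words
# observed to follow it.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 6) -> str:
    """The "instructions": repeatedly pick a next word by observed frequency."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # plausible-looking text, zero understanding
```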

So even if we feel that ChatGPT – or a future, more capable LLM – understands what it's saying, the logic runs the other way: if we believe that the person in the Chinese room doesn't understand Chinese, and that LLMs operate in much the same way as the Chinese room, then we must conclude that ChatGPT doesn't really understand what it's saying either. 

This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty, then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things. 

An LLM might still be able to express ethical statements and follow prescribed ethical guidelines imposed by its creators – as when the creators of ChatGPT limit its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room. 


Ethics Explainer: Moral injury

Moral injury occurs when we are forced to violate our deepest ethical values and it can have a serious impact on our wellbeing.

In the 1980s, the American psychiatrist Jonathan Shay was helping veterans of the war in Vietnam deal with the traumas they had experienced. He noticed that many of his patients were experiencing high levels of despair accompanied by feelings of guilt and shame, along with a decline of trust in themselves and others. This led to them disengaging from their friends, family and society at large, accompanied by episodes of suicidality and interpersonal violence. 

Shay realised that this was not post-traumatic stress disorder (PTSD); it was something different. Shay saw that these veterans were not just traumatised by what had happened to them, they were 'wounded' by what they had done to others. He called this new condition "moral injury," describing it as a "soul wound inflicted by doing something that violates one's own ethics, ideals, or attachments". 

The “injury” is to our very self-conception as ethical beings, which is a core aspect of our identity. As Shay stated about his patients, moral injury “deteriorates their character; their ideals, ambitions, and attachments begin to change and shrink.”  

Moral injury is, at its heart, an ethical issue. It is caused when we are faced with decisions or directives that force us to challenge or violate our most deeply held ethical values, like if a soldier is forced to endanger civilians or a nurse feels they can’t offer each of their patients the care they deserve due to staff shortages.  

Sometimes this ethical compromise can be caused by the circumstances people are placed in, like working in an organisation that is chronically under-resourced. Sometimes it can be caused by management expecting them to do something that goes against their values, like overlooking inappropriate behaviour among colleagues in the workplace in order to protect high performers or revenue generators. 

Symptoms

There are several common symptoms of moral injury. The first is guilt. This manifests as intense discomfort and hyper-sensitivity towards how others regard us, and can lead to irritability, denial or projection of negative feelings, such as anger, onto others. 

Guilt can tip over into shame, which is a form of intense negative self-evaluation or self-disgust. Shame can even manifest physically, as stomach pains or digestive issues, and it can be debilitating and demotivating, causing a negative spiral into despondency. 

Excessive guilt and shame can lead to anxiety, which is a feeling of fear that doesn’t have an obvious cause. Anxiety can cause distraction, irritability, fatigue, insomnia as well as body and muscle aches. 

Moral injury also challenges our self-image as ethical beings, sometimes leading to us losing trust in our own ability to do what is right. This can rob us of a sense of agency, causing us to feel powerless, becoming passive, despondent and feeling resigned to the forces that act upon us. It can also erode our own moral compass and cause us to question the moral character of others, which can further shake our feeling that other people and society at large are guided by ethical principles that we value. 

The negative emotions and self-assessment that accompany moral injury can also cause us to withdraw from social or emotional engagement with others. This can involve a reluctance to interact socially as well as empathy fatigue, where we have difficulty or lack the desire to share in others’ emotions. 

Distinctions

Moral injury is often mistaken for PTSD or burnout, but they are different issues. Burnout is a response to chronic stress due to unreasonable demands, such as relentless workloads, long hours and chronic under-resourcing. It can lead to emotional exhaustion and, in extreme cases, depersonalisation, where people feel detached from their lives and just continue on autopilot. But it's possible to suffer from burnout even if you are not compromising your deepest ethical values; you might feel burnt out but still agree that the work you're doing is worthwhile. 

PTSD is a response to witnessing or experiencing intense trauma or threat, especially mortal danger. It can be amplified if the individual survived the danger while those around them, especially close friends or colleagues, did not survive. This could be experienced following a round of poorly managed redundancies, where those who keep their jobs have survivor guilt. Thus, PTSD is typically a response to something that you have witnessed or experienced, whereas moral injury is related to something that you have done (or not been able to do) to others.  

Moral injury affects a wide range of industries and professions, from the military to healthcare to government and corporate organisations, and its impacts can be easily overlooked or mistaken for other issues. But with a greater awareness of moral injury and its causes, we’ll be better equipped to prevent and treat it. 

 

If you or someone you know is suffering from moral injury you can contact Ethi-call, a free and independent helpline provided by The Ethics Centre. Trained counsellors will talk you through the ethical dimension of your situation and provide resources to help understand it and to decide on the best course of action. To book a call visit www.ethi-call.com 

The Ethics Centre is a thought leader in assessing organisational cultural health and building leadership capability to make good ethical decisions. We have helped a number of organisations across a number of industries deal with moral injury, burnout and PTSD. To arrange a confidential conversation contact the team at consulting@ethics.org.au. Or visit our consulting page to learn more. 


Ethics Explainer: Longtermism

Longtermism argues that we should prioritise the interests of the vast number of people who might live in the distant future rather than the relatively few people alive today.

Do we have a responsibility to care for the welfare of people in future generations? Given the tremendous efforts people are making to prevent dangerous climate change today, it seems that many people do feel some responsibility to consider how their actions impact those who are yet to be born. 

But if you take this responsibility seriously, it could have profound implications. These implications are maximally embraced by an ethical stance called ‘longtermism,’ which argues we must consider how our actions affect the long-term future of humanity and that we should prioritise actions that will have the greatest positive impact on future generations, even if they come at a high cost today. 

Longtermism is a view that emerged from the effective altruism movement, which seeks to find the best ways to make a positive impact on the world. But where effective altruism focuses on making the current or near-future world as good as it can be, longtermism takes a much broader perspective. 

Billions and billions

The longtermist argument starts by asserting that the welfare of someone living a thousand years from now is no less important than the welfare of someone living today. This is similar to Peter Singer’s argument that the welfare of someone living on the other side of the world is no less ethically important than the welfare of your family, friends or local community. We might have a stronger emotional connection to those nearer to us, but we have no reason to preference their welfare over that of people more spatially or temporally removed from us. 

Longtermists then urge us to consider that there will likely be many more people in the future than there are alive today. Indeed, humanity might persist for many thousands or even millions of years, perhaps even colonising other planets. This means there could be hundreds of billions of people throughout the lifetime of the universe, not to mention other sentient species or artificial intelligences that also experience pain or happiness.  

The numbers escalate quickly: if there's even a 0.1% chance that our species colonises the galaxy and persists for a billion years, then the expected number of future people could be in the hundreds of trillions.  
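The expected-value arithmetic behind that figure is easy to check. The population number below is an assumption chosen for illustration (a galaxy-spanning, billion-year future containing on the order of 10^17 people), not a prediction:

```python
# Back-of-envelope expected value: probability of the scenario multiplied by
# the number of people who would live in it.
chance_of_galactic_future = 0.001      # "even a 0.1% chance"
people_in_that_future = 10 ** 17       # assumed population over a billion years

expected_future_people = chance_of_galactic_future * people_in_that_future
print(f"{expected_future_people:,.0f}")  # prints 100,000,000,000,000 (one hundred trillion)
```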

The longtermist argument concludes that if we believe we have some responsibility to future people, and if there are many times more future people than there are people alive today, then we ought to prioritise the interests of future generations over the interests of those alive today.  

This is no trivial conclusion. It implies that we should make whatever sacrifices are necessary today to benefit those who might live many thousands of years in the future. This means doing everything we can to eliminate existential threats that might snuff out humanity, which would not only be a tragedy for those who die as a result of that event but also a tragedy for the many more people who were denied an opportunity to be born. It also means we should invest everything we can in developing technology and infrastructure to benefit future generations, even if that means our own welfare is diminished today. 

Not without cost

Longtermism has captured the attention and support of some very wealthy and influential individuals, such as Skype co-founder Jaan Tallinn and Facebook co-founder Dustin Moskovitz. Organisations such as 80,000 Hours also use longtermism as a framework to help guide career decisions for people looking to do the most good over their lifetime.  

However, it also has its detractors, who warn about it distracting us from present and near-term suffering and threats like climate change, or accelerating the development of technologies that could end up being more harmful than beneficial, like superintelligent AI.  

Even supporters of longtermism have debated its plausibility as an ethical theory. Some argue that it might promote ‘fanaticism,’ where we end up prioritising actions that have a very low chance of benefiting a very high number of people in the distant future rather than focusing on achievable actions that could reliably benefit people living today. 

Others question the idea that we can reliably predict the impacts of our actions on the distant future. It might be that even our most ardent efforts today ‘wash out’ into historical insignificance only a few centuries from now and have no impact on people living a thousand or a million years hence. Thus, we ought to focus on the near-term rather than the long-term. 

Longtermism is an ethical theory with real impact. It redirects our attention from those alive today to those who might live in the distant future. Some of the implications are relatively uncontroversial, such as suggesting we should work hard to prevent existential threats. But its bolder conclusions might be cold comfort for those who see suffering and injustice in the world today and would rather focus on correcting that than helping build a world for people who may or may not live a million years from now. 


Ethics Explainer: Truth & Honesty

How do we know we’re telling the truth? If someone asks you for the time, do you ever consider the accuracy of your response? 

In everyday life, truth is often thought of as a simple concept. Something is factual, false, or unknown. Similarly, honesty is usually seen as the difference between 'telling the truth' and lying (with some grey areas like white lies or equivocations in between). 'Telling the truth' is something of a misnomer, though. Since honesty is mostly about sincerity, people can be honest without being accurate about the truth. 

In philosophy, truth is anything but simple and weaves itself into a host of other areas. In epistemology, for example, philosophers interrogate the nature of truth by looking at it through the lens of knowledge.  

After all, if we want to be truthful, we need to know what is true. 

Figuring that out can be hard, not just practically, but metaphysically.  

Theories of Truth

There are several competing theories that attempt to explain what truth is, the most popular of which is the correspondence theory. Correspondence refers to the way our minds relate to reality: on this view, a belief or statement is true if it corresponds to how the world 'really' is, independent of our minds or perceptions of it. As popular as this theory is, it does prompt the question: how do we know what the world is like outside of our experience of it? 

Many people, especially scientists and philosophers, have to grapple with the idea that we are limited in our ability to understand reality. For every new discovery, there seems to be another question left unanswered. This being the case, the correspondence theory leaves us unable to say that anything is true, because we don't have an accurate understanding of the reality it is meant to correspond to. 

Another theory of truth is the coherence theory. This states that truth is a matter of coherence within and between systems of beliefs. Rather than the truth of our beliefs relying on a relation to the external world, it relies on their consistency with other beliefs within a system.  

The strength of this theory is that it doesn't depend on us having an accurate understanding of reality in order for us to speak about something being true. The weakness is that we can imagine several different comprehensive and coherent systems of beliefs, and thus different people holding different 'true' beliefs that are impossible to adjudicate between. 

Yet another theory of truth is pragmatist, although there are a couple of varieties, as with pragmatism in general. Broadly, we can think of pragmatist truth as a more lenient and practical correspondence theory.  

For pragmatists, what the world is ‘really’ like only matters as far as it impacts the usefulness of our beliefs in practice.  

So, pragmatist truth is in a sense malleable; like the scientific method it's closely linked with, it treats truth as a useful tool for understanding the world, but recognises that the 'truth' will change with new information and experimentation. 

Ethical aspects of truth and honesty 

Regardless of the theory of truth that you subscribe to, there are practical applications of truth that have a significant impact on how to behave ethically. One of these applications is honesty.  

Honesty, in a simple sense, is speaking what we wholeheartedly believe to be true.  

Honesty comes up a lot in classical ethical frameworks and, as with lots of ethical concepts, isn’t as straightforward as it seems. 

In Aristotelian virtue ethics, honesty permeates many other virtues, like friendship, but is also a virtue in itself, lying between the deficiency of habitual deception and the excess of boastful or callous truth-telling. So, a virtue ethicist might say a severe lack of honesty would result in someone who is untrustworthy or misleading, while too much honesty might result in someone who says unnecessary truthful things at the expense of people's feelings. 

A classic example is a friend who asks you for your opinion on what they’re wearing. Let’s say you don’t think what they’re wearing is nice or flattering. You could be overly honest and hurt their feelings, you could lie and potentially embarrass them, or you could frame your honesty in a way that is moderate and constructive, like “I think this other colour/fit suits you better”.  

This middle ground is also often where consequentialism lands on these kinds of interpersonal truth dynamics because of its focus on consequences. Broadly, the truth is important for social cohesion, but consequentialism might tell us to act with a bit more or a bit less honesty depending on the individual situations and outcomes, like if the truth would cause significant harm. 

Deontology, on the other hand, following in the footsteps of Immanuel Kant, holds honesty as an absolute moral obligation. Kant famously argued that honesty was imperative even if a murderer was at your door asking where your friend was! 

Outside of the general moral frameworks, there are some interesting ethical questions we can ask about the nature of our obligations to truth. Do certain people or relations have a stronger right to the truth? For example, many people find it acceptable and even useful to lie to children, especially when they’re young. Does this imply age or maturity has an impact on our right to the truth? If the answer to this is that it’s okay in a paternalistic capacity, then why doesn’t that usually fly with adults?  

What about if we compare strangers to friends and family? Why do we intuitively feel that our close friends or family ‘deserve’ the truth from us, while someone off the street doesn’t?  

If we do have a moral obligation towards the truth, does this also imply an obligation to keep ourselves well-informed so that we can be truthful in a meaningful way? 

The nature of truth remains elusive, yet the way we treat it in our interpersonal lives is still as relevant as ever. Honesty is a useful and easier way of framing lots of conversations about truth, although it has its own complexities to be aware of, like the limits of its virtue. 


Ethics Explainer: Critical Race Theory

Critical Race Theory (CRT) seeks to explain the multitude of ways that race and racism have become embedded in modern societies. The core idea is that we need to look beyond individual acts of racism and make structural changes to prevent and remedy racial discrimination.

History

Despite debates about Critical Race Theory hitting the headlines relatively recently, the theory has been around for over 30 years. It was originally developed in the 1980s by Derrick Bell, a prominent civil rights activist and legal scholar. Bell argued that racial discrimination didn’t just occur because of individual prejudices but also because of systemic forces, including discriminatory laws, regulations and institutional biases in education, welfare and healthcare.  

During the 1950s and 1960s in America, there were many legal changes that moved the country towards racial equality. Some of the most significant include the Supreme Court's decision in Brown v. Board of Education, which banned racial segregation in American public schools, the Civil Rights Act of 1964 and the Voting Rights Act of 1965.

These rulings and laws formally outlawed segregation and reduced the restrictions on access to the ballot box that had been commonplace in many parts of America since the 1870s, while the Supreme Court's Loving v. Virginia decision (1967) legalised interracial marriage. There was also a concerted effort across education and the media to combat racially discriminatory beliefs and attitudes.

However, legal scholars noticed that even in spite of these prominent efforts, racism persisted throughout the country. How could racial equality be legislated by the highest court in America, and yet racial discrimination still occur every day?  

Overview

Critical race theory, often shortened to CRT, is an academic framework that developed out of legal scholarship seeking to explain how institutions like the law perpetuate racial discrimination. The theory evolved to have an additional focus on how to change structures and institutions to produce a more equitable world. Today, CRT is mostly confined to academia, and while some elements of CRT may inform parts of primary and secondary education, very few schools teach CRT in its full form.  

Some of the foundational principles of CRT are:  

  1. CRT asserts that race is socially constructed. This means that the social and behavioural differences we see between different racialised groups are products of the society that they live in, not something biological or “natural.”  

There is a long history of people using science to attempt to prove that there were significant social and psychological differences among people of different racial groups. They claimed these differences justified the poor treatment of people of supposedly 'inferior races', or the 'breeding out' of certain races. This is how white Australians justified the atrocities of the Stolen Generations, such as the attempted 'breeding out' of Aboriginal people.  

  2. Racism is systemic and institutional. Imagine if everyone in the world magically erased all their racial biases. Racism would still exist, because there are systems and institutions that uphold racial discrimination, even if the people within them aren’t necessarily racist.

There are many examples of systemic and institutional racism around the world. They become evident when a system has nothing explicitly racist or discriminatory about it, but there are still differences in who benefits from it. One example is the education system: it’s not explicitly racist, but students of different racial backgrounds have different educational outcomes and levels of attainment. In the US, this occurs because public schools are funded by local as well as state governments, with the local share drawn largely from property taxes, which means that children in lower socioeconomic areas attend schools that receive less funding. Statistically, people of colour are more likely to live in lower socioeconomic areas of America. So, even though the education system isn’t explicitly racist (i.e., treating students of one racial background differently from students of another), a student’s racial background still impacts their educational outcomes.

  3. There is often more than one part of identity that can impact a person’s interaction with systems and institutions in society. Race is just one of many parts of identity that influences how a person will interact with the world. Different identities, including race, gender, sexuality, socioeconomic status, religion and ability, intersect with each other and compound. This is an idea known as “intersectionality.”

Most of the time, it’s not just one part of a person’s identity that is impacting their experiences in the world. Someone who is a Black woman will experience racism differently from a Black man, because gender will impact experience, just like race. A wealthy Chinese-Australian person will have a different experience living in Australia than a working class Chinese-Australian person. Ultimately, CRT tells us that we need to look at race in conjunction with other facets of identity that impact a person’s experience.  

Critical Race Theory and racism in Australia

As Australians, it’s easy to point the finger at the US and think “well, at least we aren’t as bad as them.” However, this mentality of focusing only on the worst instances of racism means we often ignore what’s happening closer to home. A 2021 survey conducted by the ABC found that 76% of Australians from a non-European background reported experiencing racial discrimination. One-third of all Australians have experienced racism in the workplace and two-thirds of students from non-Anglo backgrounds have experienced racism in school.

In addition to frequent instances of racism, Australia’s history is fraught with racism that is predominantly left out of high school history textbooks. From our early colonial history to racial discrimination during the gold rush in the 1850s to anti-immigration rhetoric today, we don’t need to look far for examples of racial discrimination. A little-known part of Australian history is that, from 1901 until the 1960s, non-British immigrants were told that if they moved to Australia, they had to shed their languages and culture.

Even though CRT originates in the US, it is a useful framework for encouraging a closer analysis of Australia’s racist history and how this has caused the imbalances and inequalities we see today. And once we understand the systemic and institutional forces that promote or sustain racial injustice, we can take measures to correct them to produce more equitable outcomes for all. 

If you want to learn more about how race has impacted the world today, here are some good places to start:  

  • Nell Painter’s Soul Murder and Slavery – her work has focused on the generational psychological impact of the trauma of slavery. Here is an interview where Painter talks a little bit about her work.  
  • Nikole Hannah-Jones’ 1619 Project, with the New York Times – you can listen to the podcast on Spotify, which has six great episodes on some of the less reported ways that slavery has impacted the functioning of US society.   
  • Dear White People – a Netflix show that deals with some of the complications of race on a US college campus.  
  • Ladies in Black – a movie about Sydney c. 1950s, shows many instances of the casual racism towards refugees and immigrants from Europe.  


For a deeper dive on Critical Race Theory, Claire G. Coleman presents Words Are Weapons and Sisonke Msimang and Stan Grant present Precious White Lives as part of Festival of Dangerous Ideas 2022. Tickets on sale now.


Ethics Explainer: Gender

Gender is a complex social concept that broadly refers to characteristics, like roles, behaviours and norms, associated with masculinity and femininity.  

Historically, gender in Western cultures has been a simple thing to define because it was seen as an extension of biological sex: ‘women’ were human females and ‘men’ were human males, where female and male were understood as biological categories. 

This was due to the view that biology (i.e., sex) predetermines or limits a host of social, psychological and behavioural traits that are inherently different between men and women, a view often referred to as biological determinism. This is where we get stereotypes like “men are rational and unemotional” and “women are passive and caring”.

While most people reject biologically deterministic views today, many still don’t distinguish between sex and gender. However, the conversation is slowly beginning to shift as a result of decades of feminist literature.

Additionally, it’s worth noting that outside of Western traditions, gender has been a much more fluid and complex concept for thousands of years. Hundreds of traditional cultures around the world have conceptions of gender that extend beyond the binary of men and women. 

Feminist Gender Theory 

Feminism has had a long history of challenging assumptions about gender, especially since the late twentieth century. Alongside some psychologists at the time, feminists began differentiating between sex and gender to argue that many of the differences between men and women that people took to be intrinsic were really the result of social and cultural conditioning.  

Prior to this, sex and gender were thought to be essentially the same thing. This encouraged people to confer biological differences onto social and cultural expectations. Feminists argue that this is a self-fulfilling misconception that produces oppression in many different ways; for example, through socially and culturally limiting attitudes that prevent women from engaging in “masculine” activities and vice versa.

Really, they say, gender is social and sex is biological. Philosopher Simone de Beauvoir famously said: “One is not born, but rather becomes, a woman”.  

Gender being social means that it’s a concept that is constructed and shaped by our perceptions of masculinity and femininity, and that it can vary between societies and cultures. Sex being biological means that it’s scientifically observable (though the idea of binary sex is also being questioned given there are over 100 million intersex people all over the world). 

Philosophers like Simone de Beauvoir argued that gendered assumptions and expectations were so deeply ingrained in our lives that they began to appear biologically predetermined, lending credence to the idea that women were subservient by nature.

“Social discrimination produces in women moral and intellectual effects so profound that they appear to be caused by nature.” 

Gender and Identity 

Gender being socially constructed means that it is mutable. With this increasingly mainstream understanding, people whose gender identity differs from the one they were assigned at birth have been able to identify themselves in ways that more closely reflect their experiences and expressions. (Those who do identify with the gender they were assigned at birth are described as cisgender.)

For example, some people identify with a different gender than what they were assigned at birth based on their sex (transgender); some people don’t identify as either man or woman, and instead feel that they are somewhere in between, or that the binary conception of gender doesn’t fit their experience and identity at all (non-binary). In many non-western cultures, gender has never been a binary concept. 

Unfortunately, because gender is so inherently bound up with identity, a host of ethical issues arise, mostly in the form of discrimination.

Transgender people, for example, are often the target of discrimination. This can be in areas as simple as what bathrooms they use to more complicated areas like participation in elite sports. Notably, these examples of discrimination are almost always targeted at transfeminine people (those who identify as women after being assigned male at birth).

Additionally, there are ethical considerations that have to be taken into account when young people, particularly minors, make decisions about affirming their gender. Currently, it’s standard medical practice for people under 18 to be barred from making decisions about permanent medical procedures, though they may still, with professional medical guidance, take puberty blockers that help to mitigate the dysphoria of undergoing puberty in a gender they don’t identify with.

Gender stereotypes in general also have negative effects on all genders. Genderqueer people are often the targets of violence and discrimination. Women have historically been and are still oppressed in many ways because of systemic gender biases, like being discouraged from working in certain fields, being paid less for similar work or being harassed in various areas of their lives. Men also face the harmful effects of rigid gender norms, which often encourage risk-taking behaviour, the internalisation of mental health struggles, and violent or anti-social behaviour.

The Future of Gender 

This has been an overview of the most common views on gender. However, there are also many variations on the traditional feminist view that other feminists argue are more accurate depictions of reality.  

bell hooks criticised some theories of gender that revolved around sexuality because they did not properly account for the way that class, race and socio-economic status change the way a woman is viewed and expected to behave. For example, many views of gender are from the perspective of white, Western women and so fail to represent women in more marginalised circumstances.

Along similar lines, Judith Butler criticises the very idea of grouping people into genders, arguing that it is and will always be inherently normative and hence exclusionary. For Butler, gender is not simply about identity; it’s primarily about equality and justice.

Even some earlier gender theorists, like Gayle Rubin, argue for the eventual abolition of gender from society, in which people are free to express themselves in whatever individual way they desire, free from any norms or expectations based on their biology and subsequent socialisation.

“The dream I find most compelling is one of an androgynous and genderless (though not sexless) society, in which one’s sexual anatomy is irrelevant to who one is, what one does, and with whom one makes love.” 

Gender is currently a very active area of research and debate, not only in philosophy, but also in sociology, politics and LGBTQI+ education. While theories about identity often result in conflict due to their inherently personal nature, it’s promising to see such a clear area where work by philosophers has significantly influenced public discourse, with profound effects on many people’s lives.

 

For a deeper dive on gender, Alok Vaid-Menon presents Beyond the Gender Binary as part of Festival of Dangerous Ideas 2022. Tickets on sale now.


Ethics Explainer: Social philosophy

Social philosophy is concerned with anything and everything about society and the people who live in it.

What’s the difference between a house and a cave, or a garden and a field of wildflowers? There are some things that are built by people, such as houses and gardens, that wouldn’t exist without human intervention. Similarly, there are some things that are natural, such as caves and fields of wildflowers, that would continue to exist as they were without humans. However, there is a grey area in the middle that social philosophers study, including topics like gender, race, ethics, law, politics, and relationships. Social philosophers spend their time parsing what parts of the world are constructed by humans and what parts are natural. 

We can see the beginnings of the philosophical debate of social versus natural through Aristotle’s and Plato’s justifications for slavery. Aristotle believed that some people were incapable of being their own masters, and this was a natural difference between a slave and a free person. Plato, on the other hand, believed that anyone who was inferior to the Greeks could be enslaved, a difference that was made possible by the existence of Greek society. 

Through the Middle Ages and into the early modern period, attention turned to questioning religion and the divine right of monarchs. During this era, it was believed that monarchs were given their authority by God, which was why they had so much more power than the average person. British philosopher John Locke is well known for arguing that every man was created equal, and that everyone had an equal natural right to life, liberty and property. His conclusion was that these fundamental rights belonged naturally to everyone, which contradicted the social norms that gave almost unlimited power to monarchs. The idea that someone who worked the land naturally had the same fundamental rights as a monarch implied that the structure of society would have to change fundamentally.

During the 19th century, some philosophers began to question social categories and where they came from. Many people at the time held that social classes, or groups of people of the same socioeconomic status, were a result of biological, or natural, differences between people. Karl Marx, known for his 1848 pamphlet The Communist Manifesto, proposed his own theory of social classes. He argued that these socioeconomic differences were a result of the type of work that someone did, and that social classes were therefore socially, not biologically, constructed.

Today social philosophers are concerned with a variety of questions, including questions about race, gender, social change, and the institutions that contribute to inequality. One example of a social philosopher who studies gender and race is Sally Haslanger, who asks what the defining characteristics of gender and race are, and where these characteristics come from. In other cases, social philosophy is blended with cognitive psychology and behavioural studies, asking which of our behaviours are influenced by the society we live in and which are “natural,” or a product of our biology.

Social philosophy and ethics

Many of the questions social philosophers are concerned with are intertwined with ethics. Part of living in a society requires an (often unwritten) ethical code of conduct that ensures everything functions smoothly. 

Thomas Hobbes’ social contract theory spells out the connection between a society and ethics. Hobbes believed that instead of ethics being something that existed naturally, a code of ethics and morality would arise when a group of free, self-interested, and rational people lived together in a society. Ethics would arise because people would find that better things could come from working together and trusting each other than would arise from doing everything on their own. 

Today, much of how we act is determined by the societies we live in. The kinds of clothes we wear, the media we interact with, and how we talk to each other change depending on the norms of our society. This can complicate ethics: should we change our ethical code when we move to a different society with different norms? For example, one culture may say that it’s morally acceptable to eat meat, while a different culture may not. Should a person have to change the way they act moving from the meat-eating culture to the non-meat-eating culture? Moral relativists would say it is possible for both cultures to be morally right, and that we should act accordingly depending on which culture we are interacting with.

A significant reason that social philosophy is still such a nebulous field is that everyone has different life experiences and interacts with society differently. Additionally, different people feel like they owe different levels of commitment to the people around them. Ultimately, it’s a serious challenge for philosophers to come up with social theories that resonate with everyone the theory is supposed to include. 


Ethics Explainer: Trust

Trust forms the foundation for relationships, cooperation, social interaction and the development of societies, but it is as dangerous as it is important.

From hunter-gatherers to globalised societies, trust is the essential lubricant of social functioning at any scale.

Imagine something as simple as driving to get groceries. You leave your house, get in your car and drive to the shops. In doing so, you’re relying on several overlapping layers of trust that are engrained in our society. You trust that:

  • your neighbours won’t break into your house while you’re gone
    • the police will deal with them appropriately if they do
    • the insurance company will deal with you fairly if they do
  • the manufacturer of your car was responsible and competent
  • the other drivers on the road will all obey the traffic laws
  • your money has retained its value and that the shopkeeper will take it

and so on. Almost every aspect of our lives depends on these underlying relationships of trust with the people around us, and when they are betrayed – especially repeatedly – they can crumble and leave behind instability.

When we trust others, we’re depending on them to fulfil expectations that aren’t guaranteed to be met and that leaves us vulnerable or at risk.

Vulnerability is an important aspect of trust; it creates a tricky tension between our need to rely on others and our need to protect ourselves from risk and harm. To avoid vulnerability and guard against betrayal, we might decide not to trust anyone, but that leads to a miserable life: a life devoid of friendship or intimacy, and ultimately of convenience too, as we need to trust strangers on an almost daily basis, from taxi drivers to teachers to police.

So, we must trust others and learn how and when to be vulnerable to live a fulfilling life.

We trust people by giving them the space and freedom to do what they have been trusted to do, without necessary observation or oversight, with the expectation that they’re:

  1. Competent enough to do what they have been trusted to do and
  2. Willing to do it.

“Trust is an ability to rely on somebody to do what they have said they will do, even when no one is watching them” – Simon Longstaff AO

Trust and trustworthiness

An important distinction is the difference between trust and trustworthiness. Trust is an attitude that we have towards others (or sometimes ourselves!) that indicates our hope or expectation that the object of our trust is trustworthy.

Trustworthiness is a property or characteristic of others and in ideal situations has a reciprocal relationship with trust. That is, ideally, trust is an attitude towards trustworthy people, and trustworthy people will be trusted.

Of course, we have all experienced that we don’t live in an ideal world. Often untrustworthy people are perceived to be trustworthy because of lies, clever marketing or overwhelming charisma. Equally, trustworthy people can often be misrepresented to appear untrustworthy.

Interpersonal trust and institutional trust

There are two kinds of trust, the most common and intuitive kind being interpersonal trust – that is, trust between individuals. We might trust our friends with secrets or trust our family with babysitting or trust other drivers with our lives on the road.

A trickier kind of trust is trust in institutions and government, which are not as directly accessible as individual people are.

Nevertheless, we can and do trust institutions and governments to do as they say they will do, or what they are supposed to do: act in the interests of their people. This is a hallmark of society. But when this trust is eroded, we are left with an eroded society.

The ethics of trust

One of the first practical problems is knowing whom to trust. It’s easy to rely on perceived authority figures in our lives, and often we will simply need to trust that the people who are close to us are looking out for us, but in general we should be on the lookout for consistent moral behaviour.

Ultimately, there is no way to know with certainty whom to trust, but there are many indicators we can use to decide. Are they an honest person? Have they been reliable in the past? Are they self-centred, or do they concern themselves with the wellbeing of others? Answering questions like these can help to minimise the risk you take on if you choose to trust someone. Beyond that, trust raises many further ethical questions:

  • How do we rebuild trust?
  • How should we act if we don’t trust someone?
  • If someone breaks our trust, should we distrust them?
  • When or how often should we re-examine our trust of someone?
  • When is trust a good thing? When is it bad?
  • What’s so important about vulnerability?

Answering these questions involves a complex mix of knowing how trust works, knowing the habits, motives and values of others, and knowing ourselves.


Ethics Explainer: Teleology

Often, when we try to understand something, we ask questions like “What is it for?”. Knowing something’s purpose or end-goal is commonly seen as integral to comprehending or constructing it. This is the practice or viewpoint of teleology.

Teleology comes from two Greek words: telos, meaning “end, purpose or goal”, and logos, meaning “explanation or reason”.

From this, we get teleology: an explanation of something that refers to its end, purpose or goal.

For example, take a kitchen knife. We might ask why a knife takes the form and features that it does. If we referred to the past – to the process of its making, for example – that would be a causal (etiological) explanation. But a teleological explanation would be something that refers to its end, like: “Its purpose is to cut”.  Someone might then ask: “But what makes a good knife?”, and the answer would be: “A good knife is a knife that cuts well.” It’s this guiding principle – knowing and focusing on the purpose – that allows knife-makers to make confident decisions in the smithing process and know that their knife is good, even if it’s never used.

What once was an acorn…

In Western philosophy, teleology originated in the writings and ideas of Plato and then Aristotle. For the Ancient Greeks, telos was a bit more grounded in the inherent nature of things compared to the man-made example of a knife.

For example, a seed’s telos is to grow into an adult plant. An acorn’s telos is to grow into an oak tree. A chair’s telos is to be sat on. For Aristotle, a telos didn’t necessarily need to involve any deliberation, intention or intelligence.

However, this is where teleological explanations have caused issues.

Teleological explanations are sometimes used in evolutionary biology as a kind of shorthand, much to the dismay of many scientists. This is because the teleological phrasing of biological traits can falsely present the facts as supporting some kind of intelligent design.

For example, take the long neck of giraffes. A shorthand teleological explanation of this trait might be that “evolution gave giraffes long necks for the purpose of reaching less competitive food sources”. However, this explanation wrongly implies some kind of forward-looking purpose for evolved traits, or that there is some kind of intention baked into evolution.

Instead, evolutionary biology suggests that giraffes with short necks were less likely to survive, leaving the longer-necked giraffes to breed and pass on their long-neck genes, eventually increasing the average length of their necks.

Notice how the accurate explanation doesn’t refer to any purpose or goal. This kind of description is needed when talking about things like nature or people (at least, if you don’t believe in gods), though teleological explanations can still be useful elsewhere.

Ethics and decision-making

Teleology is more helpful and impactful in ethics, and in decision-making more generally.

Aristotle was a big proponent of human teleology, seen in the concept of eudaimonia (flourishing). He believed that human flourishing was the goal or purpose of each person, and that we could all strive towards this “life well-lived” by living in moderation, according to various virtues.

Teleology is also often compared or confused with consequentialism, but they are not the same. If you were to take a business that specialises in home security, for example, a consequentialist would tell you to look at the consequences of your service to see if it is effective and good. Sometimes, though, it will be hard to tell if the outcome (e.g., fewer break-ins or attempted break-ins) can be attributed to your business and not other factors, like changes in laws, policing, homelessness, etc., or you might not yet have any outcomes to analyse.

Instead, a teleological approach to business decision-making would have you focus on the purpose of your service, i.e. to prevent home intrusion and ensure security. With that purpose in mind, you could construct your services to meet these goals in a variety of ways, letting it guide hiring decisions, redundancy planning and so on, and be confident that your service would fulfil its purpose well (even if it is never needed!).

But how do we decide what a good purpose is?

Simply using a teleological lens doesn’t make us ethical. If we’re trying to be ethical, we want to make sure that our purpose itself is good. One option to do this is to find a purpose that is intrinsically good – things like justice, security, health and happiness, rather than things that are a means to an end, like profit or personal gain.

This viewpoint needn’t only apply to business. In trying to be better, more ethical people, we can employ these same teleological views and principles to inform our own decisions and actions. Rather than thinking about the consequences of our actions, we can instead think about what purpose we’re trying to achieve, and then form our decisions based on whether they align with that purpose.