Big Thinker: Karl Marx

Karl Marx (1818-1883) was a philosopher, economist, and revolutionary thinker whose critiques of capitalism and analyses of class struggle continue to influence contemporary thought about economic inequality and the worth of individual labour. 

He was not only a prominent figure in the world of philosophy but also a key player in economic and political theory. Marx’s life and work were deeply intertwined with the tumultuous historical backdrop of the 19th century, marked by the Industrial Revolution and the rise of capitalism. 

Born in Trier, Prussia (now in Germany), Marx began with a focus on law and philosophy at the University of Bonn and later at the University of Berlin. During his time in Berlin, he encountered the ideas of G.W.F. Hegel, whose methods significantly influenced Marx’s own philosophical approach.  

In collaboration with Friedrich Engels, Marx developed and refined his ideas, culminating in some of the most influential works in the history of political philosophy, most notably The Communist Manifesto (1848) and Das Kapital (1867, 1885, 1894). 

Historical materialism and class struggle

One of Marx’s central ideas was historical materialism, a theory that analyses the evolution of societies through the lens of economic systems. According to Marx, the structure of a society is primarily determined by its mode of production: the ways commodities and services are produced and distributed, and the social relations that organise these processes. In capitalist societies, the means of production are privately owned, leading to a class-based social structure that separates owners from workers. 

Marx’s analysis of class struggle underscores the ethical imperative of addressing economic inequality. He argued that under capitalism, the bourgeoisie (owners of the means of production) exploit the proletariat (the working class) for their own profit. This exploitation, he claimed, is the engine that drives the capitalist system: workers are paid less than the value of their labour while the bourgeoisie reap the profits. This exploitation also results in alienation, where workers are estranged from the products of their labour and, Marx argued, even from their own humanity. 

Marx’s arguments call for a reevaluation of the inherent fairness of such a system. He questions the morality of a society where wealth and power are concentrated in the hands of a few while the masses toil in poverty. This is an ethical challenge that continues to resonate in contemporary discussions about income inequality and social justice.  

Marx’s critique challenges us to consider whether a society that values profit and efficiency over the well-being and fulfillment of its members is ethically justifiable.

To address this concern, Marx envisioned a classless society, where the means of production would be collectively owned. This transition, he believed, would eliminate the inherent exploitation of capitalism and lead to a more just and equitable society. While the practical realisation of this vision has proven challenging, it remains a foundational ethical ideal for some, emphasising the need to confront economic disparities for the sake of human dignity and fairness. 

Critique of capitalism and commodification

Marx’s critique of capitalism extended beyond its class divisions. He also examined the profound impact of capitalism on human relationships and the commodification of virtually everything, including labour, under this system. For Marx, capitalism reduced individuals to mere commodities, bought and sold in the labour market.  

Marx’s critique of commodification highlights the importance of valuing individuals beyond their economic contributions. He argued that in a capitalist society, individuals are often reduced to their economic worth, which can erode their sense of self-worth and dignity. Addressing this ethical concern calls for recognising the intrinsic value of every person and fostering social structures that prioritise human well-being over profit. 

The communist vision

Marx’s ultimate vision was communism, a classless society where resources would be shared collectively. In such a society, the state as we know it would wither away, and individuals would contribute to the common good according to their abilities and receive according to their needs. 

This communist vision raises questions about the ethics of property and ownership. It challenges us to rethink the distribution of resources in society and consider alternative models that prioritise equity and communal well-being. While achieving a truly communist society might be complex or even out of reach, the aspiration of creating a world where everyone’s needs are met and individuals contribute to the best of their abilities is still a general ethical ideal many people intuitively strive for. 

Despite this, Marx’s ideas have faced much criticism. Critics argue that a classless society governed by a centralised power risks authoritarianism, that Marx’s economic planning lacked detail, that communism runs against the self-interested and competitive side of human nature, and that historical and contemporary communist systems have faced large practical challenges. 

In spite of, and sometimes because of, these challenges, Marx’s ideas continue to spark ethical discussions about economic inequality, commodification, and the nature of human relationships in contemporary society. His legacy serves as a reminder of the enduring importance of grappling with questions of justice, equality, and human dignity in our ever-evolving social and economic landscapes. 


Ethics explainer: Cultural Pluralism

Imagine a large, cosmopolitan city, where people from innumerable backgrounds and with numerous beliefs all thrive together. People embrace different cultural traditions, speak varying languages, enjoy countless cuisines, and educate their children on diverse histories and practices.

This is the kind of pluralism most people are familiar with; a diverse yet culturally integrated place like this is, specifically, an example of cultural pluralism.

Pluralism in a general sense says there can be multiple perspectives or truths that exist simultaneously, even if some of those perspectives are contradictory. It’s contrasted with monism, which says only one kind of thing exists; dualism, which says there are only two kinds of things (for example, mind and body); and nihilism, which says that no things exist.

So, while pluralism more broadly refers to a diversity of views, perspectives or truths, cultural pluralism refers specifically to a diversity of cultures that co-exist – ideally harmoniously and constructively – while maintaining their unique cultural identities.

Sometimes an entire country can be considered culturally pluralistic, and in other places there may be culturally pluralistic hubs (like states or suburbs where there is a thriving multicultural community within a larger more broadly homogenous area).

On the other end of the spectrum is cultural monism, the idea that a certain area or population should have only one culture. Culturally monistic places (for example, Japan or North Korea) rely on an implicit or explicit pressure for others to assimilate. Whereas assimilation involves the homogenisation of culture, pluralism encourages diversity, inviting people of different ethnic groups, backgrounds, religions, practices and beliefs to come together and share in their differences.

A pluralistic society is more welcoming and supportive of minority cultures because people don’t feel pressured to hide or change their identities. Instead, diverse experiences are recognised as opportunities for learning and celebration. This invites travel and immigration, translates into better mental health for migrants, promotes harmony and acceptance of others, and enhances creativity by exposing people to perspectives and experiences outside of their usual remit.

We also know what the alternative is in many cases. Australia has a dark history of assimilation practices, a symptom of racist, colonial perspectives that saw the decimation of First Nations people and their cultures. Cultural pluralism is one response to this sort of cultural domination that has been damaging throughout history and remains so in many places today.

However, there are plenty of ethical complications that arise in the pursuit of cultural plurality.

For example, sociologist Robert D. Putnam published research in 2007 on the negative short- to medium-term effects of ethnically diverse neighbourhoods. He found that, on average, trust, altruism and community cooperation were lower in these neighbourhoods, even between those of the same or similar ethnicities.

While Putnam denied that his findings were anti-multicultural and argued that diverse societies have several positive long-term effects, the research does indicate some of the risks associated with cultural pluralism. It can take a large amount of effort and social infrastructure to build and maintain diverse communities, and if this fails or is done poorly it can fragment cultural communities.

This also accords with an argument made by journalist David Goodhart: that people are generally divided into “Anywheres” (people with a mobile identity) and “Somewheres” (people, usually outside of urban areas, who have marginalised, long-term, location-based identities). This incongruity, he says, accounts for things like Brexit and the election of Donald Trump, because they speak to the Somewheres, who feel threatened by changes to their status quo. Pluralism, Goodhart notes, risks overlooking the discomfort these communities face if they are not properly supported and informed.

Other issues with pluralism include the prioritisation of competing cultural values and traditions. What if one person’s culture is fundamentally intolerant of another person’s culture? This is something we see especially with cultures organised around or heavily influenced by religion. For example, Christianity and Islam are often at odds with many different cultures around issues of sexual preference and gendered rights and responsibilities.

If we are to imagine a truly culturally pluralistic society, how do we ethically integrate people who are intolerant of others?

Pluralism as a cultural ideal also has direct implications for things like politics and law, raising the age-old question about the relationship between morality and the law. If we want a pluralistic society generally, how do the variations in beliefs, values and principles translate into law? Is it better to have a centralised legal system or do we want a legal plurality that reflects the diversity of the area?

This does already exist in some capacity – many countries have Islamic courts that enforce Sharia law for their communities in addition to the overarching governmental law. Parallel legal systems also exist in some colonised countries where parts of Indigenous law have been recognised – for example, in Australia, with the Mabo decision.

Another feature of genuine cultural pluralism that has huge ethical implications and considerations is diversity of media. This is the idea that there should be diversity of media ownership (that is, a media system that is not monopolised) and diverse representation in media (that is, media that presents varying perspectives and analyses).

Firstly, this ensures that media, especially news media, stays accountable through comparison and competition, rather than a select powerful few being able to widely disseminate their opinions unchecked. Secondly, it fosters a greater sense of understanding and acceptance by exposing people to perspectives, experiences and opinions that they might otherwise be ignorant or reflexively wary of. Thirdly, as a result, it reduces the risk that media, as a powerful disseminator of culture, could end up creating or reinforcing a monoculture.

While cultural pluralism is often seen as an obviously good thing in western liberal societies, it isn’t without substantial challenges. In the pursuit of tolerance, acceptance and harmony, we must be wary of fragmenting cultures and ensure that diverse communities have adequate social supports to thrive.


Ethics explainer: Normativity

Have you ever spoken to someone and realised that they’re standing a little too close for comfort?

Personal space isn’t something we tend to actively think about; it’s usually an invisible and subconscious expectation or preference. However, when someone violates that expectation, it suddenly becomes very clear. If someone stands too close to you while talking, you might become uncomfortable or irritated. If a stranger sits right next to you in a public place when there are plenty of other seats, you might feel annoyed or confused.

That’s because personal space is an example of a norm. Norms are communal expectations, taken up by various populations and usually serving shared values or principles, that direct us towards certain behaviours. The norm of personal space, for example, is an expectation that looks different depending on where you are.

In some countries, the norm is to keep distance when talking to strangers, but very close when talking to close friends, family or partners. In other countries, everyone can be relatively close, and in others still, not even close relationships should invade your personal space. This is an example of a norm that we follow subconsciously.

We don’t tend to notice what our expectation even is until someone breaks it, at which point we might think they’re disrespecting personal or social boundaries.

Norms are an embodiment of a phenomenon called normativity, which refers to the tendency of humans and societies to regulate or evaluate human conduct. Normativity pervades our daily lives, influencing our decisions, behaviours, and societal structures. It encompasses a range of principles, standards, and values that guide human actions and shape our understanding of what’s considered right or wrong, good or bad.

Norms can be explicit or implicit, originating from various sources like cultural traditions, social institutions, religious beliefs, or philosophical frameworks. Often norms are implicit because they are unspoken expectations that people absorb as they experience the world around them.

Take, for example, the norms of handshakes, kisses, hugs, bows, and other forms of greeting. Depending on your country, time period, culture, age, and many other factors, some of these will be more common and expected than others. Regardless, though, each of them has a purpose or function, like showing respect, affection or familiarity.

While these might seem like trivial examples, norms have historically played a large role in more significant things, like oppression. Norms are effectively social pressures, so conformity is important to their effect – especially in places or times where the flouting of norms results in some kind of public or social rebuke.

So, norms can sometimes work to the detriment of people who don’t feel their preferences or values reflected in them, especially when conformity itself is a norm. One of the major changes in western liberal society has been the loosening of norms – the growing ability for people to live more authentically as themselves.

Normative Ethics

Normativity is also an important aspect of ethical philosophy. Normative ethics is the philosophical inquiry into the nature of moral judgments and the principles that should govern human actions. It seeks to answer fundamental questions like “What should I do?”, “How should I live?” and “Which norms should I follow?”. Normative ethical theories provide frameworks for evaluating the morality of specific actions or ethical dilemmas.

Some normative ethical theories include:

  • Consequentialism, which says we should determine moral value based on the consequences of actions.
  • Deontology, which says we should determine moral value by looking at an action’s coherence with consistent duties or obligations.
  • Virtue ethics, which focuses on alignment with various virtues (like honesty, courage, compassion, respect, etc.) with an emphasis on developing dispositions that cultivate these virtues.
  • Contractualism, informed by the idea of the social contract, which says we should act in ways and for reasons that would be agreed to by all reasonable people in the same circumstances.
  • Feminist ethics, or the ethics of care, which says that we should understand and challenge the way that gender has operated to inform historical ethical beliefs and how it still affects our moral practices today.

Normativity extends beyond individual actions: it plays a significant role in shaping societal norms, as we saw earlier, but also laws and policies. Norms influence social expectations, moral codes, and legal frameworks, guiding collective behaviour and fostering social cohesion. Sometimes, as in the case of traffic laws, social norms and laws work in a circular way, reinforcing each other.

However, our normative views aren’t static or unchangeable.

Over time, societal norms and values evolve, reflecting shifts in cultural, social and philosophical perspectives. Often, we see changing social norms culminating in the reform of outdated laws – laws that once accurately reflected the normative views of their time, but no longer do.

While it’s ethically significant that norms shift over time and adapt to their context, it’s important to note that these changes often happen slowly. Eventually, changes in norms influence changes in laws, and this can often happen even more slowly, as we have seen with homosexuality laws around the world.


Ethics explainer: Nihilism

“If nothing matters, then all the pain and guilt you feel for making nothing of your life goes away.” – Jobu Tupaki, Everything Everywhere All At Once 

Do our lives matter? 

Nihilism is a school of philosophical thought proposing that our existence fundamentally lacks inherent meaning. It rejects various aspects of human existence that are generally accepted and considered fundamental, like objective truth, moral truth and the value and purpose of life. Its origin is the Latin word ‘nihil’, which means ‘nothing’.  

The most common branches of nihilism are existential and moral nihilism, though there are many others, including epistemological, political, metaphysical and medical nihilism. 

Existential nihilism  

In popular use, nihilism usually refers to existential nihilism, a precursor to existentialist thought. This is the idea that life has no inherent meaning, value or purpose and it’s also often (because of this) linked with feelings of despair or apathy. Nihilists in media are usually portrayed as moody, brooding or radical types who have decided that we are insignificant specks floating around an infinite universe, and that therefore nothing matters.  

Nihilist ideas date as far back as Buddha, though nihilism’s rise in western literature began in the early 19th century. This shift was largely a response to the diminishing moral authority of the church (and religion at large) and the rise of secularism and rationalism. This rejection led to the view that the universe had no grand design or purpose, that we are all simply cogs in the machine of existence. 

Though he wasn’t a nihilist himself, Friedrich Nietzsche is the poster-child for much of contemporary nihilism, especially in pop culture and online circles. Nietzsche wrote extensively on it in the late 19th century, speaking of the crisis we find ourselves in when we realise that the world lacks the intrinsic meaning or value that we want or believed it to have. This is ultimately something that he wanted us to overcome.  

He saw humans responding to this crisis in two ways: passive or active nihilism.  

For Nietzsche, passive nihilists are those who resign themselves to the meaninglessness of life, slowly separating themselves from their own will or desires to minimise the suffering they face from the random chaos of the world. 

In media, this kind of pessimistic nihilism is sometimes embodied by characters who then act on it in a destructive way. For example, the antagonist Jobu Tupaki in Everything Everywhere All At Once comes to this realisation through her multi-dimensional awareness, which convinces her that because of the infinite nature of reality, none of her choices matter, and so she attempts to destroy herself to escape the insignificance and meaninglessness she feels. 

Jobu Tupaki, Everything Everywhere All At Once (2022)

Active nihilists instead see nihilism as a freeing condition, revealing a world where they are emboldened to create something new on top of the destruction of the old values and ways of thinking.  

Nietzsche’s idea of the active nihilist is the Übermensch (“superman”), a person who overcomes the struggle of nihilism by working to create their own meaning in the face of meaninglessness. They see the absurdity of life as something to be embraced, giving them the ability to live in a way that enforces their own values and “levels the playing field” of past values.  

Moral nihilism

Existential nihilism often gives way to moral nihilism, the idea that morality doesn’t exist – that no moral choice is preferable to any other. After all, if our lives don’t have intrinsic meaning, if objective values don’t exist, then by what standard can we call actions right or wrong? We normally see this kind of nihilism embodied by anarchic characters in media. 

An infamous example is the Joker from the Batman franchise. Especially in renditions like The Dark Knight (2008) and Joker (2019), the Joker is portrayed as someone whose expectations of the world have failed him, whose torturous existence has led him to believe that nothing matters, the world doesn’t care, and that in the face of that, we shouldn’t care about anything or anyone either. In his words, “everything burns” in the end, so he sees no problem in hastening that destruction and ultimately the destruction of himself. 

The Joker, 2019

“Now comes the part where I relieve you, the little people, of the burden of your useless lives.”

The Joker epitomises the popular understanding of nihilism and one of the primary ethical risks of this philosophical worldview. For some people, viewing their lives as lacking inherent meaning or value causes a psychological spiral into apathy.  

This spiral can cause people to become self-destructive, reclusive, suicidal and otherwise hasten towards “nothingness”. In others, it can cause outwardly destructive actions because of their perception that since nothing matters in some kind of objective sense, they can do whatever they want (think American Psycho).  

Nihilism has particularly flourished in many online subcultures, fuelling the apathy of edgelords towards the plights of marginalised populations and often resulting in a tendency towards verbal and physical violence. One of the major challenges of nihilism, historically and today, is that it’s not obviously false. This is where we rely on philosophy to be able to justify why any morality should exist at all. 

Where to go from here

A common thread runs through many of the nihilist and existentialist writers on what we should do in the face of inherent meaninglessness: create meaning ourselves. 

Existentialists like Simone de Beauvoir and Jean-Paul Sartre talk about the importance of recognising the freedom that this kind of perspective gives us. And, equally, the importance of making sure that we make meaning for ourselves and for others through our life. 

For some people, that might be a return to religion. But there are plenty of other ways to create meaning in life: focusing on what’s subjectively meaningful to you or those you care about and fully embracing those things. Existence doesn’t need to have intrinsic meaning for us to care. 


Thought experiment: "Chinese room" argument

If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?

Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.

Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.

But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.

“The Chinese room”

Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.

Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you will be the book of instructions.

Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.

You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.

This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.
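To make the mechanics concrete, here is a minimal sketch of the room as pure symbol manipulation, written in Python. It is only an illustration: the rule book becomes a lookup table, and the entries and names (RULE_BOOK, operate_room) are invented placeholders, since Searle’s thought experiment specifies no actual rules.

    # A toy "Chinese room": the rule book is a lookup table mapping input
    # symbol sequences to prescribed output sequences. Whatever applies it
    # needs no grasp of what any symbol means. All entries are invented.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I'm well, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
    }

    def operate_room(slip: str) -> str:
        """Match the incoming characters against the rules and return the
        prescribed reply - pure syntax, with no semantics anywhere."""
        return RULE_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(operate_room("你好吗？"))  # a fluent reply the "operator" cannot read

Trivial as this program is, the observer outside sees only a question going in and a fluent answer coming out – which is exactly Searle’s point: producing the right symbols does not require understanding them.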

Functionalism and Strong AI

Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.

Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does.

This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.

Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.

Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.

ChatGPT room

While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can all see the experiment made real whenever we log into ChatGPT. Large language models like ChatGPT are the Chinese room argument made real. They are incredibly sophisticated versions of the filing cabinets (the corpus of text upon which they’re trained) and the instruction book (the probabilities used to decide which character or word to display next).
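As a rough sketch of that “instruction book”, consider the toy Python below. It is a stand-in only: real LLMs compute next-word probabilities with a neural network trained on a vast corpus, whereas the table and names here (NEXT_WORD_PROBS, next_word) are invented for illustration.

    import random

    # Toy next-word selection. An LLM's "rule book" is, in effect, a set of
    # conditional probabilities learned from its training data; here that is
    # faked with a tiny hand-written table.
    NEXT_WORD_PROBS = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    }

    def next_word(context):
        """Sample the next word from the distribution for this context."""
        probs = NEXT_WORD_PROBS[context]
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print(next_word(("the", "cat")))  # fluent-looking output, no understanding required

Everything in this process is table lookup and sampling; at no point does the program need to know what a cat is.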

So even if we feel that ChatGPT – or a future, more capable LLM – understands what it’s saying, if we believe that the person in the Chinese room doesn’t understand Chinese, and that LLMs operate in much the same way as the room, then we must conclude that they don’t really understand what they’re saying.

This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty – then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things.

An LLM might still be able to express ethical statements and follow prescribed ethical guidelines imposed by its creators – as has been the case with the creators of ChatGPT limiting its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room.


Ethics Explainer: Moral injury

Moral injury occurs when we are forced to violate our deepest ethical values and it can have a serious impact on our wellbeing.

In the 1980s, the American psychiatrist Jonathan Shay was helping veterans of the war in Vietnam deal with the traumas they had experienced. He noticed that many of his patients were experiencing high levels of despair accompanied by feelings of guilt and shame, along with a decline of trust in themselves and others. This led to them disengaging from their friends, family and society at large, accompanied by episodes of suicidality and interpersonal violence. 

Shay realised that this was not posttraumatic stress disorder (PTSD); this was something different. Shay saw that these veterans were not just traumatised by what had happened to them, they were ‘wounded’ by what they had done to others. He called this new condition “moral injury,” describing it as a “soul wound inflicted by doing something that violates one’s own ethics, ideals, or attachments”. 

The “injury” is to our very self-conception as ethical beings, which is a core aspect of our identity. As Shay stated about his patients, moral injury “deteriorates their character; their ideals, ambitions, and attachments begin to change and shrink.”  

Moral injury is, at its heart, an ethical issue. It is caused when we are faced with decisions or directives that force us to challenge or violate our most deeply held ethical values, like if a soldier is forced to endanger civilians or a nurse feels they can’t offer each of their patients the care they deserve due to staff shortages.  

Sometimes this ethical compromise can be caused by the circumstances people are placed in, like working in an organisation that is chronically under-resourced. Sometimes it can be caused by management expecting them to do something that goes against their values, like overlooking inappropriate behaviour among colleagues in the workplace in order to protect high performers or revenue generators. 

Symptoms

There are several common symptoms of moral injury. The first is guilt. This manifests as intense discomfort and hyper-sensitivity towards how others regard us, and can lead to irritability, denial or projection of negative feelings, such as anger, onto others. 

Guilt can tip over into shame, which is a form of intense negative self-evaluation or self-disgust. This is why shame sometimes manifests as stomach pains or digestive issues. Shame can be debilitating and demotivating, causing a negative spiral into despondency. 

Excessive guilt and shame can lead to anxiety, which is a feeling of fear that doesn’t have an obvious cause. Anxiety can cause distraction, irritability, fatigue, insomnia as well as body and muscle aches. 

Moral injury also challenges our self-image as ethical beings, sometimes leading to us losing trust in our own ability to do what is right. This can rob us of a sense of agency, causing us to feel powerless, becoming passive, despondent and feeling resigned to the forces that act upon us. It can also erode our own moral compass and cause us to question the moral character of others, which can further shake our feeling that other people and society at large are guided by ethical principles that we value. 

The negative emotions and self-assessment that accompany moral injury can also cause us to withdraw from social or emotional engagement with others. This can involve a reluctance to interact socially as well as empathy fatigue, where we have difficulty or lack the desire to share in others’ emotions. 

Distinctions

Moral injury is often mistaken for PTSD or burnout, but they are different issues. Burnout is a response to chronic stress due to unreasonable demands, such as relentless workloads, long hours and chronic under-resourcing. It can lead to emotional exhaustion and, in extreme cases, depersonalisation, where people feel detached from their lives and just continue on autopilot. But it’s possible to suffer from burnout even if you are not compromising your deepest ethical values; you might feel burnout but still agree that the work you’re doing is worthwhile. 

PTSD is a response to witnessing or experiencing intense trauma or threat, especially mortal danger. It can be amplified if the individual survived the danger while those around them, especially close friends or colleagues, did not survive. This could be experienced following a round of poorly managed redundancies, where those who keep their jobs have survivor guilt. Thus, PTSD is typically a response to something that you have witnessed or experienced, whereas moral injury is related to something that you have done (or not been able to do) to others.  

Moral injury affects a wide range of industries and professions, from the military to healthcare to government and corporate organisations, and its impacts can be easily overlooked or mistaken for other issues. But with a greater awareness of moral injury and its causes, we’ll be better equipped to prevent and treat it. 

 

If you or someone you know is suffering from moral injury you can contact Ethi-call, a free and independent helpline provided by The Ethics Centre. Trained counsellors will talk you through the ethical dimension of your situation and provide resources to help understand it and to decide on the best course of action. To book a call visit www.ethi-call.com 

The Ethics Centre is a thought leader in assessing organisational cultural health and building leadership capability to make good ethical decisions. We have helped a number of organisations across a number of industries deal with moral injury, burnout and PTSD. To arrange a confidential conversation contact the team at consulting@ethics.org.au. Or visit our consulting page to learn more. 


Ethics Explainer: Longtermism

Longtermism argues that we should prioritise the interests of the vast number of people who might live in the distant future rather than the relatively few people who live today.

Do we have a responsibility to care for the welfare of people in future generations? Given the tremendous efforts people are making to prevent dangerous climate change today, it seems that many people do feel some responsibility to consider how their actions impact those who are yet to be born. 

But if you take this responsibility seriously, it could have profound implications. These implications are maximally embraced by an ethical stance called ‘longtermism,’ which argues we must consider how our actions affect the long-term future of humanity and that we should prioritise actions that will have the greatest positive impact on future generations, even if they come at a high cost today. 

Longtermism is a view that emerged from the effective altruism movement, which seeks to find the best ways to make a positive impact on the world. But where effective altruism focuses on making the current or near-future world as good as it can be, longtermism takes a much broader perspective. 

Billions and billions

The longtermist argument starts by asserting that the welfare of someone living a thousand years from now is no less important than the welfare of someone living today. This is similar to Peter Singer’s argument that the welfare of someone living on the other side of the world is no less ethically important than the welfare of your family, friends or local community. We might have a stronger emotional connection to those nearer to us, but we have no reason to preference their welfare over that of people more spatially or temporally removed from us. 

Longtermists then urge us to consider that there will likely be many more people in the future than there are alive today. Indeed, humanity might persist for many thousands or even millions of years, perhaps even colonising other planets. This means there could be hundreds of billions of people, not to mention other sentient species or artificial intelligences that also experience pain or happiness, throughout the lifetime of the universe.  

The numbers escalate quickly: if there’s even a 0.1% chance that our species colonises the galaxy and persists for a billion years, the expected number of future people could be in the hundreds of trillions.  
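To see the expected-value arithmetic behind this claim, here is a minimal worked example (the population figure of 10^17 for a galaxy-spanning, billion-year future is an illustrative assumption, not a number the argument itself supplies):

    \[
    \mathbb{E}[\text{future people}] \;=\; \underbrace{0.001}_{\text{probability}} \times \underbrace{10^{17}}_{\text{people, if it happens}} \;=\; 10^{14}
    \]

That is, one hundred trillion expected future people, dwarfing the roughly eight billion alive today.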

The longtermist argument concludes that if we believe we have some responsibility to future people, and if there are many times more future people than there are people alive today, then we ought to prioritise the interests of future generations over the interests of those alive today.  

This is no trivial conclusion. It implies that we should make whatever sacrifices necessary today to benefit those who might live many thousands of years in the future. This means doing everything we can to eliminate existential threats that might snuff out humanity, which would not only be a tragedy for those who die as a result of that event but also a tragedy for the many more people who were denied an opportunity to be born. It also means we should invest everything we can in developing technology and infrastructure to benefit future generations, even if that means our own welfare is diminished today. 

Not without cost

Longtermism has captured the attention and support of some very wealthy and influential individuals, such as Skype co-founder Jaan Tallinn and Facebook co-founder Dustin Moskovitz. Organisations such as 80,000 Hours also use longtermism as a framework to help guide career decisions for people looking to do the most good over their lifetime.  

However, it also has its detractors, who warn about it distracting us from present and near-term suffering and threats like climate change, or accelerating the development of technologies that could end up being more harmful than beneficial, like superintelligent AI.  

Even supporters of longtermism have debated its plausibility as an ethical theory. Some argue that it might promote ‘fanaticism,’ where we end up prioritising actions that have a very low chance of benefiting a very high number of people in the distant future rather than focusing on achievable actions that could reliably benefit people living today. 

Others question the idea that we can reliably predict the impacts of our actions on the distant future. It might be that even our most ardent efforts today ‘wash out’ into historical insignificance only a few centuries from now and have no impact on people living a thousand or a million years hence. Thus, we ought to focus on the near-term rather than the long-term. 

Longtermism is an ethical theory with real impact. It redirects our attention from those alive today to those who might live in the distant future. Some of the implications are relatively uncontroversial, such as suggesting we should work hard to prevent existential threats. But its bolder conclusions might be cold comfort for those who see suffering and injustice in the world today and would rather focus on correcting that than helping build a world for people who may or may not live a million years from now. 


Ethics Explainer: Truth & Honesty

How do we know we’re telling the truth? If someone asks you for the time, do you ever consider the accuracy of your response? 

In everyday life, truth is often thought of as a simple concept. Something is factual, false, or unknown. Similarly, honesty is usually seen as the difference between ‘telling the truth’ and lying (with some grey areas like white lies or equivocations in between). ‘Telling the truth’ is somewhat of a misnomer, though. Since honesty is mostly about sincerity, people can be honest without being accurate about the truth. 

In philosophy, truth is anything but simple and weaves itself into a host of other areas. In epistemology, for example, philosophers interrogate the nature of truth by looking at it through the lens of knowledge.  

After all, if we want to be truthful, we need to know what is true. 

Figuring that out can be hard, not just practically, but metaphysically.  

Theories of Truth

There are several competing theories that attempt to explain what truth is, the most popular of which is the correspondence theory. Correspondence refers to the way our minds relate to reality. On this view, a belief or statement is true if it corresponds to how the world ‘really’ is, independent of our minds or perceptions of it. As popular as this theory is, it does prompt the question: how do we know what the world is like outside of our experience of it? 

Many people, especially scientists and philosophers, have to grapple with the idea that we are limited in our ability to understand reality. For every new discovery, there seems to be another question left unanswered. If that’s the case, the correspondence theory leaves us unable to say whether anything is true, because we don’t have an accurate understanding of reality to compare our beliefs against. 

Another theory of truth is the coherence theory. This states that truth is a matter of coherence within and between systems of beliefs. Rather than the truth of our beliefs relying on a relation to the external world, it relies on their consistency with other beliefs within a system.  

The strength of this theory is that it doesn’t depend on us having an accurate understanding of reality in order for us to speak about something being true. The weakness is that we can imagine several different comprehensive and cohesive systems of beliefs existing side by side, and thus different people holding different ‘true’ beliefs that are impossible to adjudicate between. 

Yet another theory of truth is the pragmatist theory, though it comes in a couple of varieties, as pragmatism in general does. Broadly, we can think of pragmatist truth as a more lenient and practical correspondence theory.  

For pragmatists, what the world is ‘really’ like only matters as far as it impacts the usefulness of our beliefs in practice.  

So, pragmatist truth is in a sense malleable; like the scientific method it’s closely linked with, it sees truth as a useful tool for understanding the world, but recognises that with new information and experimentation the ‘truth’ will change. 

Ethical aspects of truth and honesty 

Regardless of the theory of truth that you subscribe to, there are practical applications of truth that have a significant impact on how to behave ethically. One of these applications is honesty.  

Honesty, in a simple sense, is speaking what we wholeheartedly believe to be true.  

Honesty comes up a lot in classical ethical frameworks and, as with lots of ethical concepts, isn’t as straightforward as it seems. 

In Aristotelian virtue ethics, honesty permeates many other virtues, like friendship, but is also a virtue in itself that lies between habitual lying and boastfulness or callousness. So, a virtue ethicist might say a severe lack of honesty would result in someone who is untrustworthy or misleading, while too much honesty might result in someone who says unnecessary truthful things at the expense of people’s feelings. 

A classic example is a friend who asks you for your opinion on what they’re wearing. Let’s say you don’t think what they’re wearing is nice or flattering. You could be overly honest and hurt their feelings, you could lie and potentially embarrass them, or you could frame your honesty in a way that is moderate and constructive, like “I think this other colour/fit suits you better”.  

This middle ground is also often where consequentialism lands on these kinds of interpersonal truth dynamics because of its focus on consequences. Broadly, the truth is important for social cohesion, but consequentialism might tell us to act with a bit more or a bit less honesty depending on the individual situations and outcomes, like if the truth would cause significant harm. 

Deontology, on the other hand, following in the footsteps of Immanuel Kant, holds honesty as an absolute moral obligation. Kant was known to say that honesty was imperative even if a murderer was at your door asking where your friend was! 

Outside of the general moral frameworks, there are some interesting ethical questions we can ask about the nature of our obligations to truth. Do certain people or relations have a stronger right to the truth? For example, many people find it acceptable and even useful to lie to children, especially when they’re young. Does this imply age or maturity has an impact on our right to the truth? If the answer to this is that it’s okay in a paternalistic capacity, then why doesn’t that usually fly with adults?  

What about if we compare strangers to friends and family? Why do we intuitively feel that our close friends or family ‘deserve’ the truth from us, while someone off the street doesn’t?  

If we do have a moral obligation towards the truth, does this also imply an obligation to keep ourselves well-informed so that we can be truthful in a meaningful way? 

The nature of truth remains elusive, yet the way we treat it in our interpersonal lives is still as relevant as ever. Honesty is a useful and easier way of framing lots of conversations about truth, although it has its own complexities to be aware of, like the limits of its virtue. 


Ethics Explainer: Critical Race Theory

Critical Race Theory (CRT) seeks to explain the multitude of ways that race and racism have become embedded in modern societies. The core idea is that we need to look beyond individual acts of racism and make structural changes to prevent and remedy racial discrimination.

History

Despite debates about Critical Race Theory hitting the headlines relatively recently, the theory has been around for over 30 years. It was originally developed in the 1980s by Derrick Bell, a prominent civil rights activist and legal scholar. Bell argued that racial discrimination didn’t just occur because of individual prejudices but also because of systemic forces, including discriminatory laws, regulations and institutional biases in education, welfare and healthcare.  

During the 1950s and 1960s in America, there were many legal changes that moved the country towards racial equality. Some of the most significant include the Supreme Court’s decision in Brown v. Board of Education, which banned racial segregation in American public schools, the Civil Rights Act of 1964 and the Voting Rights Act of 1965.

These rulings and laws formally outlawed segregation, legalised interracial marriage and reduced restrictions on access to the ballot box that had been commonplace in many parts of America since the 1870s. There was also a concerted effort across education and the media to combat racially discriminatory beliefs and attitudes.

However, legal scholars noticed that even in spite of these prominent efforts, racism persisted throughout the country. How could racial equality be legislated by the highest court in America, and yet racial discrimination still occur every day?  

Overview

Critical race theory, often shortened to CRT, is an academic framework that developed out of legal scholarship seeking to explain how institutions like the law perpetuate racial discrimination. The theory evolved to have an additional focus on how to change structures and institutions to produce a more equitable world. Today, CRT is mostly confined to academia, and while some elements of CRT may inform parts of primary and secondary education, very few schools teach CRT in its full form.  

Some of the foundational principles of CRT are:  

  1. CRT asserts that race is socially constructed. This means that the social and behavioural differences we see between different racialised groups are products of the society that they live in, not something biological or “natural.”  

There is a long history of people using science to attempt to prove that there were significant social and psychological differences among people of different racial groups. They claimed these differences justified the poor treatment of people of different ‘inferior races’, or the ‘breeding out’ of certain races. This is how white Australians justified the atrocities committed in the Stolen Generations, such as the attempted ‘breeding out’ of Aboriginal people.  

  2. Racism is systemic and institutional. Imagine if everyone in the world magically erased all their racial biases. Racism would still exist, because there are systems and institutions that uphold racial discrimination, even if the people within them aren’t necessarily racist.  

There are many examples of systemic and institutional racism around the world. They become evident when a system doesn’t have anything explicitly racist or discriminatory about it, but there are still differences in who benefits from that system. One example is the education system: it’s not explicitly racist, but students of different racial backgrounds have different educational outcomes and levels of attainment. In the US, this occurs because public schools are funded by both local and state governments, which means that children going to school in lower socioeconomic areas will be attending schools that receive less funding. Statistically, people of colour are more likely to live in lower socioeconomic areas of America. So, even though the education system isn’t explicitly racist (i.e., it doesn’t treat students of one racial background differently from students of a different racial background), students’ racial backgrounds still impact their educational outcomes.

  3. There is often more than one part of identity that can impact a person’s interaction with systems and institutions in society. Race is just one of many parts of identity that influences how a person will interact with the world. Different identities, including race, gender, sexuality, socioeconomic status, religion and ability, intersect with each other and compound. This is an idea known as “intersectionality.” 

Most of the time, it’s not just one part of a person’s identity that is impacting their experiences in the world. Someone who is a Black woman will experience racism differently from a Black man, because gender will impact experience, just like race. A wealthy Chinese-Australian person will have a different experience living in Australia than a working class Chinese-Australian person. Ultimately, CRT tells us that we need to look at race in conjunction with other facets of identity that impact a person’s experience.  

Critical Race Theory and racism in Australia

As Australians, it’s easy to point the finger at the US and think “well, at least we aren’t as bad as them.” However, this mentality of only focusing on the worst instances of racism means we often ignore the happenings closer to home. A 2021 survey conducted by the ABC found that 76% of Australians from a non-European background reported experiencing racial discrimination. One-third of all Australians have experienced racism in the workplace and two-thirds of students from non-Anglo backgrounds have experienced racism in school.  

In addition to frequent instances of racism, Australia’s history is fraught with racism that is predominantly left out of high school history textbooks. From our early colonial history to racial discrimination during the gold rush in the 1850s to anti-immigration rhetoric today, we don’t need to look far for examples of racial discrimination. A little-known part of Australian history is that non-British immigrants from 1901 until the 1960s were told that if they moved to Australia, they had to shed their languages and culture.  

Even though CRT originates in the US, it is a useful framework for encouraging a closer analysis of Australia’s racist history and how this has caused the imbalances and inequalities we see today. And once we understand the systemic and institutional forces that promote or sustain racial injustice, we can take measures to correct them to produce more equitable outcomes for all. 

If you want to learn more about how race has impacted the world today, here are some good places to start:  

  • Nell Painter’s Soul Murder and Slavery – her work has focused on the generational psychological impact of the trauma of slavery. Here is an interview where Painter talks a little bit about her work.  
  • Nikole Hannah-Jones’ 1619 Project, with the New York Times – you can listen to the podcast on Spotify, which has six great episodes on some of the less reported ways that slavery has impacted the functioning of US society.   
  • Dear White People – a Netflix show that deals with some of the complications of race on a US college campus.  
  • Ladies in Black – a movie about Sydney c. 1950s, shows many instances of the casual racism towards refugees and immigrants from Europe.  


For a deeper dive on Critical Race Theory, Claire G. Coleman presents Words Are Weapons, and Sisonke Msimang and Stan Grant present Precious White Lives, at Festival of Dangerous Ideas 2022. Tickets on sale now.


Ethics Explainer: Gender

Gender is a complex social concept that broadly refers to characteristics, like roles, behaviours and norms, associated with masculinity and femininity.  

Historically, gender in Western cultures has been a simple thing to define because it was seen as an extension of biological sex: ‘women’ were human females and ‘men’ were human males, where female and male were understood as biological categories. 

This was due to a view often referred to as biological determinism: the idea that biology (i.e., sex) predetermines or limits a host of social, psychological and behavioural traits that are inherently different between men and women. This is where we get stereotypes like “men are rational and unemotional” and “women are passive and caring”.  

While most people reject biologically deterministic views today, most still don’t distinguish between sex and gender. However, the conversation is slowly beginning to shift as a result of decades of feminist literature.  

Additionally, it’s worth noting that outside of Western traditions, gender has been a much more fluid and complex concept for thousands of years. Hundreds of traditional cultures around the world have conceptions of gender that extend beyond the binary of men and women. 

Feminist Gender Theory 

Feminism has had a long history of challenging assumptions about gender, especially since the late twentieth century. Alongside some psychologists at the time, feminists began differentiating between sex and gender to argue that many of the differences between men and women that people took to be intrinsic were really the result of social and cultural conditioning.  

Prior to this, sex and gender were thought to be essentially the same thing. This encouraged people to treat social and cultural expectations as if they were biological differences. Feminists argue that this is a self-fulfilling misconception that produces oppression in many different ways; for example, socially and culturally limiting attitudes that prevent women from engaging in “masculine” activities and vice versa.  

Really, they say, gender is social and sex is biological. Philosopher Simone de Beauvoir famously said: “One is not born, but rather becomes, a woman”.  

Gender being social means that it’s a concept that is constructed and shaped by our perceptions of masculinity and femininity, and that it can vary between societies and cultures. Sex being biological means that it’s scientifically observable (though the idea of binary sex is also being questioned given there are over 100 million intersex people all over the world). 

Philosophers like Simone de Beauvoir argued that gendered assumptions and expectations were so deeply engrained in our lives that they began to appear biologically predetermined, which gave credence to the idea of women being subservient because they were biologically so. 

“Social discrimination produces in women moral and intellectual effects so profound that they appear to be caused by nature.” 

Gender and Identity 

Gender being socially constructed means that it is mutable. With this increasingly mainstream understanding, people whose gender identities are more diverse than simply the one they were assigned at birth (as opposed to cisgender people, whose identity matches that assignment) have been able to identify themselves in ways that more closely reflect their experiences and expressions. 

For example, some people identify with a different gender than what they were assigned at birth based on their sex (transgender); some people don’t identify as either man or woman, and instead feel that they are somewhere in between, or that the binary conception of gender doesn’t fit their experience and identity at all (non-binary). In many non-western cultures, gender has never been a binary concept. 

Unfortunately, because gender is so bound up with identity, a host of ethical issues arise, mostly in the form of discrimination.

Transgender people, for example, are often the target of discrimination. This can range from areas as simple as what bathrooms they use to more complicated areas like participation in elite sports. Notably, these examples of discrimination are almost always targeted at transfeminine people (those who identify as women after being assigned male at birth). 

Additionally, there are ethical considerations that have to be taken into account when young people, particularly minors, make decisions about affirming their gender. Currently, it’s standard medical practice for people under 18 to be barred from making decisions about permanent medical procedures, though this still allows them to (with professional medical guidance) take puberty blockers that help to mitigate the additional dysphoria of undergoing a puberty that doesn’t match their gender identity. 

Gender stereotypes in general also have negative effects on all genders. Genderqueer people are often the targets of violence and discrimination. Women have historically been and are still oppressed in many ways because of systemic gender biases, like being discouraged from working in certain fields, being paid less for similar work or being harassed in various areas of their lives. Men also face harmful effects of rigid gender norms, which often encourage risk-taking behaviour, the internalisation of mental health struggles, and violent or anti-social behaviour. 

The Future of Gender 

This has been an overview of the most common views on gender. However, there are also many variations on the traditional feminist view that other feminists argue are more accurate depictions of reality.  

bell hooks was known to criticise some accounts of gender that revolved around sexuality because they did not properly account for the way that class, race and socio-economic status change the way that a woman is viewed and expected to behave. For example, many views of gender are from the perspective of white, western women and so fail to represent women in more marginalised circumstances. 

Along similar lines, Judith Butler criticises the very idea of grouping people into genders, arguing that it is and will always be inherently normative and hence exclusionary. For Butler, gender is not simply about identity; it’s primarily about equality and justice. 

Even some earlier gender theorists like Gayle Rubin argue for the eventual abolition of gender from society, in which people would be free to express themselves in whatever individual way they desire, free from any norms or expectations based on their biology and subsequent socialisation. 

“The dream I find most compelling is one of an androgynous and genderless (though not sexless) society, in which one’s sexual anatomy is irrelevant to who one is, what one does, and with whom one makes love.” 

Gender is currently a very active research and debate area, not only in philosophy, but also in sociology, politics and LGBTQI+ education. While theories about identity often result in conflict due to its inherently personal nature, it’s promising to see such a clear area where work by philosophers has significantly influenced public discourse with profound effects on many people’s lives. 

 

For a deeper dive on gender, Alok Vaid-Menon presents Beyond the Gender Binary as part of Festival of Dangerous Ideas 2022. Tickets on sale now.