What is ethics?

Ethics asks how we should live, what choices we should make and what makes our lives worth living.

It helps us define the conditions of a good choice and then figure out which of all the options available to us is the best one. Ethics is the process of questioning, discovering and defending our values, principles and purpose. It’s about finding out who we are and staying true to that in the face of temptations, challenges and uncertainty. It’s not always fun and it’s hardly ever easy, but if we commit to it, we set ourselves up to make decisions we can stand by, building a life that’s truly our own and a future we want to be a part of.


Big Thinker: Karl Marx

Karl Marx (1818-1883) was a philosopher, economist, and revolutionary thinker whose criticisms of capitalism and analyses of class struggle continue to influence contemporary thought about economic inequality and the worth of individual labour.

He was not only a prominent figure in the world of philosophy but also a key player in economic and political theory. Marx’s life and work were deeply intertwined with the tumultuous historical backdrop of the 19th century, marked by the Industrial Revolution and the rise of capitalism. 

Born in Trier, Prussia (now in Germany), Marx studied law and philosophy at the University of Bonn and later at the University of Berlin. During his time in Berlin, he encountered the ideas of G.W.F. Hegel, whose methods significantly influenced Marx’s own philosophical approach.

In collaboration with Friedrich Engels, Marx developed and refined his ideas, culminating in some of the most influential works in the history of political philosophy, such as The Communist Manifesto (1848) and Das Kapital (1867, 1885, 1894).

Historical materialism and class struggle

One of Marx’s central ideas was historical materialism, a theory that analyses the evolution of societies through the lens of economic systems. According to Marx, the structure of a society is primarily determined by its mode of production: the ways commodities and services are produced and distributed, and the social relations that affect these functions. In capitalist societies, the means of production are privately owned, leading to a class-based social structure separating the owners and the workers. 

Marx’s analysis of class struggle underscores the ethical imperative of addressing economic inequality. He argued that under capitalism, the bourgeoisie (owners of the means of production) exploit the proletariat (the working class) for their own profit. This exploitation, he claimed, is the engine that drives the capitalist system: workers are paid less than the value of their labour while the bourgeoisie reap the profits. This exploitation also results in alienation, where workers are estranged from the products of their labour and, Marx argued, even from their own humanity.

Marx’s arguments call for a reevaluation of the inherent fairness of such a system. He questions the morality of a society where wealth and power are concentrated in the hands of a few while the masses toil in poverty. This is an ethical challenge that continues to resonate in contemporary discussions about income inequality and social justice.  

Marx’s critique challenges us to consider whether a society that values profit and efficiency over the well-being and fulfillment of its members is ethically justifiable.

To address this concern, Marx envisioned a classless society, where the means of production would be collectively owned. This transition, he believed, would eliminate the inherent exploitation of capitalism and lead to a more just and equitable society. While the practical realisation of this vision has proven challenging, it remains a foundational ethical ideal for some, emphasising the need to confront economic disparities for the sake of human dignity and fairness. 

Critique of capitalism and commodification

Marx’s critique of capitalism extended beyond its class divisions. He also examined the profound impact of capitalism on human relationships and the commodification of virtually everything, including labour, under this system. For Marx, capitalism reduced individuals to mere commodities, bought and sold in the labour market.  

Marx’s critique of commodification highlights the importance of valuing individuals beyond their economic contributions. He argued that in a capitalist society, individuals are often reduced to their economic worth, which can erode their sense of self-worth and dignity. Addressing this ethical concern calls for recognising the intrinsic value of every person and fostering social structures that prioritise human well-being over profit.

The communist vision

Marx’s ultimate vision was communism, a classless society where resources would be shared collectively. In such a society, the state as we know it would wither away, and individuals would contribute to the common good according to their abilities and receive according to their needs. 

This communist vision raises questions about the ethics of property and ownership. It challenges us to rethink the distribution of resources in society and consider alternative models that prioritise equity and communal well-being. While achieving a truly communist society might be complex or even out of reach, the aspiration of creating a world where everyone’s needs are met and individuals contribute to the best of their abilities is still a general ethical ideal many people intuitively strive for. 

Despite this, Marx’s ideas have faced much criticism. Critics argue that a classless society with centralised power risks authoritarianism, that Marx’s economic planning lacked detail, that communism runs against the self-interested and competitive side of human nature, and that historical and contemporary communist systems have faced large practical challenges.

In spite of, and sometimes because of, these challenges, Marx’s ideas continue to spark ethical discussions about economic inequality, commodification, and the nature of human relationships in contemporary society. His legacy serves as a reminder of the enduring importance of grappling with questions of justice, equality, and human dignity in our ever-evolving social and economic landscapes. 


Ethics Explainer: Cultural Pluralism

Imagine a large, cosmopolitan city where people from countless backgrounds and with diverse beliefs all thrive together. People embrace different cultural traditions, speak a variety of languages, enjoy countless cuisines, and educate their children in diverse histories and practices.

This is the kind of pluralism that most people are familiar with, but a diverse and culturally integrated area like this is specifically an example of cultural pluralism.

Pluralism in a general sense says there can be multiple perspectives or truths that exist simultaneously, even if some of those perspectives are contradictory. It’s contrasted with monism, which says only one kind of thing exists; dualism, which says there are only two kinds of things (for example, mind and body); and nihilism, which says that no things exist.

So, while pluralism more broadly refers to a diversity of views, perspectives or truths, cultural pluralism refers specifically to a diversity of cultures that co-exist – ideally harmoniously and constructively – while maintaining their unique cultural identities.

Sometimes an entire country can be considered culturally pluralistic, and in other places there may be culturally pluralistic hubs (like states or suburbs where there is a thriving multicultural community within a larger, more homogeneous area).

On the other end of the spectrum is cultural monism, the idea that a certain area or population should have only one culture. Culturally monistic places (for example, Japan or North Korea) rely on an implicit or explicit pressure for others to assimilate. Whereas assimilation involves the homogenisation of culture, pluralism embraces diversity, inviting people of different ethnic groups, backgrounds, religions, practices and beliefs to come together and share in their differences.

A pluralistic society is more welcoming and supportive of minority cultures because people don’t feel pressured to hide or change their identities. Instead, diverse experiences are recognised as opportunities for learning and celebration. This invites travel and immigration, translates into better mental health for migrants, promotes harmony and acceptance of others, and enhances creativity by exposing people to perspectives and experiences outside their usual remit.

We also know what the alternative is in many cases. Australia has a dark history of assimilation practices, a symptom of racist, colonial perspectives that saw the decimation of First Nations people and their cultures. Cultural pluralism is one response to this sort of cultural domination that has been damaging throughout history and remains so in many places today.

However, there are plenty of ethical complications that arise in the pursuit of cultural plurality.

For example, sociologist Robert D. Putnam published research in 2007 that identified negative short- to medium-term effects of ethnically diverse neighbourhoods. He found that, on average, trust, altruism and community cooperation were lower in these neighbourhoods, even between those of the same or similar ethnicities.

While Putnam denied that his findings were anti-multicultural, and argued that there are several positive long-term effects of diverse societies, the research does indicate some of the risks associated with cultural pluralism. It can take a large amount of effort and social infrastructure to build and maintain diverse communities, and if this fails or is done poorly it can cause fragmentation of cultural communities.

This also accords with an argument made by journalist David Goodhart that people are generally divided into “Anywheres” (people with a mobile identity) and “Somewheres” (people, usually outside of urban areas, who have marginalised, long-term, location-based identities). This incongruity, he says, accounts for things like Brexit and the election of Donald Trump, because they speak to the Somewheres, who are threatened by changes to their status quo. Pluralism, Goodhart notes, risks overlooking the discomfort these communities face if they are not properly supported and informed.

Other issues with pluralism include the prioritisation of competing cultural values and traditions. What if one person’s culture is fundamentally intolerant of another person’s culture? This is something we see especially with cultures organised around or heavily influenced by religion. For example, Christianity and Islam are often at odds with many different cultures around issues of sexual preference and gendered rights and responsibilities.

If we are to imagine a truly culturally pluralistic society, how do we ethically integrate people who are intolerant of others?

Pluralism as a cultural ideal also has direct implications for things like politics and law, raising the age-old question about the relationship between morality and the law. If we want a pluralistic society generally, how do the variations in beliefs, values and principles translate into law? Is it better to have a centralised legal system or do we want a legal plurality that reflects the diversity of the area?

This does already exist in some capacity – many countries have Islamic courts that enforce Sharia law for their communities in addition to the overarching governmental law. This kind of legal plurality also exists in some colonised countries where parts of Indigenous law have been recognised, as in Australia with the Mabo decision.

Another feature of genuine cultural pluralism that has huge ethical implications is diversity of media. This is the idea that there should be diverse ownership of media (that is, a media system that is not monopolised) and diverse representation in media (that is, media that presents varying perspectives and analyses).

Firstly, this ensures that media, especially news media, stays accountable through comparison and competition, rather than a select powerful few being able to widely disseminate their opinions unchecked. Secondly, it fosters a greater sense of understanding and acceptance by exposing people to perspectives, experiences and opinions that they might otherwise be ignorant or reflexively wary of. Thirdly, as a result, it reduces the risk that media, as a powerful disseminator of culture, could end up creating or reinforcing a monoculture.

While cultural pluralism is often seen as an obviously good thing in western liberal societies, it isn’t without substantial challenges. In the pursuit of tolerance, acceptance and harmony, we must be wary of fragmenting cultures and ensure that diverse communities have adequate social supports to thrive.


Ethics Explainer: Normativity

Have you ever spoken to someone and realised that they’re standing a little too close for comfort?

Personal space isn’t something we tend to actively think about; it’s usually an invisible and subconscious expectation or preference. However, when someone violates our expectations, they suddenly become very clear. If someone stands too close to you while talking, you might become uncomfortable or irritated. If a stranger sits right next to you in a public place when there are plenty of other seats, you might feel annoyed or confused.

That’s because personal space is an example of a norm. Norms are communal expectations that are taken up by various populations, usually serving shared values or principles, that direct us towards certain behaviours. For example, the norm of personal space is an expectation that looks different depending on where you are.

In some countries, the norm is to keep your distance when talking to strangers but to stand very close when talking to close friends, family or partners. In other countries, everyone can be relatively close, and in others still, not even close relationships should invade your personal space. This is an example of a norm that we follow subconsciously.

We don’t tend to notice what our expectation even is until someone breaks it, at which point we might think they’re disrespecting personal or social boundaries.

Norms are an embodiment of a phenomenon called normativity, which refers to the tendency of humans and societies to regulate or evaluate human conduct. Normativity pervades our daily lives, influencing our decisions, behaviours, and societal structures. It encompasses a range of principles, standards, and values that guide human actions and shape our understanding of what’s considered right or wrong, good or bad.

Norms can be explicit or implicit, originating from various sources like cultural traditions, social institutions, religious beliefs, or philosophical frameworks. Often norms are implicit because they are unspoken expectations that people absorb as they experience the world around them.

Take, for example, the norms of handshakes, kisses, hugs, bows, and other forms of greeting. Depending on your country, time period, culture, age, and many other factors, some of these will be more common and expected than others. Regardless, though, each of them has a purpose or function, like showing respect, affection or familiarity.

While these might seem like trivial examples, norms have historically played a large role in more significant things, like oppression. Norms are effectively social pressures, so conformity is important to their effect – especially in places or times where the flouting of norms results in some kind of public or social rebuke.

So, norms can sometimes be to the detriment of people who don’t feel their preferences or values reflected in them, especially when conformity itself is a norm. One of the major changes in western liberal society has been the loosening of norms – the ability for people to live more authentically as themselves.

Normative Ethics

Normativity is also an important aspect of ethical philosophy. Normative ethics is the philosophical inquiry into the nature of moral judgments and the principles that should govern human actions. It seeks to answer fundamental questions like “What should I do?”, “How should I live?” and “Which norms should I follow?”. Normative ethical theories provide frameworks for evaluating the morality of specific actions or ethical dilemmas.

Some normative ethical theories include:

  • Consequentialism, which says we should determine moral value based on the consequences of actions.
  • Deontology, which says we should determine moral value by looking at whether actions cohere with consistent duties or obligations.
  • Virtue ethics, which focuses on alignment with various virtues (like honesty, courage, compassion, respect, etc.) with an emphasis on developing dispositions that cultivate these virtues.
  • Contractualism, informed by the idea of the social contract, which says we should act in ways and for reasons that would be agreed to by all reasonable people in the same circumstances.
  • Feminist ethics, or the ethics of care, which says that we should understand and challenge the way that gender has operated to inform historical ethical beliefs and how it still affects our moral practices today.

Normativity extends beyond individual actions and plays a significant role in shaping not only societal norms, as we saw earlier, but also laws and policies. Norms influence social expectations, moral codes, and legal frameworks, guiding collective behaviour and fostering social cohesion. Sometimes, as in the case of traffic laws, social norms and laws work in a circular way, reinforcing each other.

However, our normative views aren’t static or unchangeable.

Over time, societal norms and values evolve, reflecting shifts in normative perspectives (cultural, social and philosophical). Often, we see shifting social norms culminate in the changing of outdated laws that accurately reflected the normative views of their time but no longer do.

While it’s ethically significant that norms shift over time and adapt to their context, it’s important to note that these changes often happen slowly. Eventually, changes in norms influence changes in laws, and this can often happen even more slowly, as we have seen with homosexuality laws around the world.


Ethics Explainer: Nihilism

“If nothing matters, then all the pain and guilt you feel for making nothing of your life goes away.” – Jobu Tupaki, Everything Everywhere All At Once 

Do our lives matter? 

Nihilism is a school of philosophical thought proposing that our existence fundamentally lacks inherent meaning. It rejects various aspects of human existence that are generally accepted and considered fundamental, like objective truth, moral truth and the value and purpose of life. Its origin is the Latin word ‘nihil’, which means ‘nothing’.  

The most common branches of nihilism are existential and moral nihilism, though there are many others, including epistemological, political, metaphysical and medical nihilism. 

Existential nihilism  

In popular use, nihilism usually refers to existential nihilism, a precursor to existentialist thought. This is the idea that life has no inherent meaning, value or purpose, and because of this it’s often linked with feelings of despair or apathy. Nihilists in media are usually portrayed as moody, brooding or radical types who have decided that we are insignificant specks floating around an infinite universe, and that therefore nothing matters.

Nihilist ideas date as far back as the Buddha, though they began to rise in western literature in the early 19th century. This shift was largely a response to the diminishing moral authority of the church (and religion at large) and the rise of secularism and rationalism. This rejection led to the view that the universe had no grand design or purpose, that we are all simply cogs in the machine of existence.

Though he wasn’t a nihilist himself, Friedrich Nietzsche is the poster-child for much of contemporary nihilism, especially in pop culture and online circles. Nietzsche wrote extensively on it in the late 19th century, speaking of the crisis we find ourselves in when we realise that the world lacks the intrinsic meaning or value that we want or believed it to have. This is ultimately something that he wanted us to overcome.  

He saw humans responding to this crisis in two ways: passive or active nihilism.  

For Nietzsche, passive nihilists are those who resign themselves to the meaninglessness of life, slowly separating themselves from their own will or desires to minimise the suffering they face from the random chaos of the world. 

In media, this kind of pessimistic nihilism is sometimes embodied by characters who then act on it in a destructive way. For example, the antagonist, Jobu Tupaki, in Everything Everywhere All At Once comes to this realisation through her multi-dimensional awareness, which convinces her that, because of the infinite nature of reality, none of her choices matter, and so she attempts to destroy herself to escape the insignificance and meaninglessness she feels.

Jobu Tupaki, Everything Everywhere All At Once (2022)

Active nihilists instead see nihilism as a freeing condition, revealing a world where they are emboldened to create something new on top of the destruction of the old values and ways of thinking.  

Nietzsche’s idea of the active nihilist is the Übermensch (“superman”), a person who overcomes the struggle of nihilism by working to create their own meaning in the face of meaninglessness. They see the absurdity of life as something to be embraced, giving them the ability to live in a way that enforces their own values and “levels the playing field” of past values.  

Moral nihilism

Existential nihilism often gives way to moral nihilism, the idea that morality doesn’t exist and that no moral choice is preferable to any other. After all, if our lives don’t have intrinsic meaning, if objective values don’t exist, then by what standard can we call actions right or wrong? We normally see this kind of nihilism embodied by anarchic characters in media.

An infamous example is the Joker from the Batman franchise. Especially in renditions like The Dark Knight (2008) and Joker (2019), the Joker is portrayed as someone whose expectations of the world have failed him, whose torturous existence has led him to believe that nothing matters, that the world doesn’t care, and that in the face of that, we shouldn’t care about anything or anyone either. In his words, “everything burns” in the end, so he sees no problem in hastening that destruction and ultimately the destruction of himself.

The Joker, 2019

“Now comes the part where I relieve you, the little people, of the burden of your useless lives.”

The Joker epitomises the popular understanding of nihilism and one of the primary ethical risks of this philosophical worldview. For some people, viewing their lives as lacking inherent meaning or value causes a psychological spiral into apathy.

This spiral can cause people to become self-destructive, reclusive, suicidal and otherwise hasten towards “nothingness”. In others, it can cause outwardly destructive actions because of their perception that since nothing matters in some kind of objective sense, they can do whatever they want (think American Psycho).  

Nihilism has particularly flourished in many online subcultures, fuelling the apathy of edgelords towards the plights of marginalised populations and often resulting in a tendency towards verbal and physical violence. One of the major challenges of nihilism, historically and today, is that it’s not obviously false. This is where we rely on philosophy to be able to justify why any morality should exist at all. 

Where to go from here

A common thread runs through the work of many nihilist and existentialist writers about what we should do in the face of inherent meaninglessness: create meaning ourselves.

Existentialists like Simone de Beauvoir and Jean-Paul Sartre talk about the importance of recognising the freedom that this kind of perspective gives us. And, equally, the importance of making sure that we make meaning for ourselves and for others through our lives.

For some people, that might be a return to religion. But there are plenty of other ways to create meaning in life: focusing on what’s subjectively meaningful to you or those you care about and fully embracing those things. Existence doesn’t need to have intrinsic meaning for us to care. 


The ethical dilemma of the 4-day work week

Ahead of an automation and artificial intelligence revolution, and a possible global recession, we are sizing up ways to ‘work smarter, not harder’. Could the 4-day work week be the key to helping us adapt and thrive in the future?

As the workforce plunged into a pandemic that upended our traditional work hours, workplaces and workloads, we received the collective opportunity to question the 9-5, Monday to Friday model that has driven the global economy for the past several decades.

Workers were astounded by what they’d gained back from working remotely and with more flexible hours. Not only did the care of elderly, sick or young people become easier from the home office, but hours previously spent commuting also shifted to family and personal time.

This change in where we work sparked further thought about how much time we spend working. In 2022, the largest and most successful trial of a four-day working week delivered impressive results. Some 92% of the 61 UK companies that participated in the six-month trial of the shorter week declared they’d be sticking with the 100:80:100 model (100% of pay for 80% of the hours, in exchange for maintaining 100% of the output), in what 4 Day Week Campaign director Joe Ryle called a “major breakthrough moment” for the movement.

Momentum Mental Health chief executive officer Debbie Bailey, who participated in the study, said her team had maintained productivity and increased output. But what had stirred her more deeply was a measurable “increase in work-life balance, happiness at work, sleep per night, and a reduction in stress” among staff. 

However, Bailey said, the shorter working week must remain viable for her bottom line, something she ensures through a tailor-made set of ‘Rules of Engagement’ for her team. “For example, if we don’t maintain 100 per cent outputs, an individual or the full team can be required to return to a 5-day week pattern,” she explained.

Beyond staff satisfaction, a successful implementation of the 4-day week model could also boost the bottom line for businesses.

Reimagining a more ethical working environment, advocates say, can yield comprehensive social benefits, including more balanced gender roles, longer lifespans, increased employee well-being, improved staff recruitment and retention, and a much-needed reduction in workers’ carbon footprint as Australia works towards net-zero by 2050.

University of Queensland Business School’s associate professor Remi Ayoko says working parents with a young family will benefit the most from a modified work week, with far greater leisure time away from the keyboard offering more opportunity for travel and adventure further afield, as well as increased familial bonding and life experiences along the way.  

However, similar to remote work, the 4-day working week has not been without its criticisms. Workplace connectivity is one aspect that can fall by the wayside when implementing the model – a valuable culture-building part of work, according to the University of Auckland’s Helen Delaney and Loughborough University’s Catherine Casey. 

Some workers reported that the urgency and pressure were causing “heightened stress levels”, leaving them in need of the additional day off to recover from work intensity. This raises the question of whether it is ethical for a workplace to demand a more robotic and less human-focussed performance.

In November last year, Australian staff at several of Unilever’s household names, including Dove, Rexona, Surf, Omo, TRESemmé, Continental and Streets, trialled a 100:80:100 model in the workplace. Factory workers did not take part due to union agreements.

To maintain productivity, Unilever staff were advised to cut “lesser value” activities during working hours, like superfluous meetings and the use of staff collaboration tool Microsoft Teams, in order to “free up time to work on items that matter most to the people we serve, externally and internally”. 

Anyone whose eyebrows were raised by that instruction needed only look across the ditch at Unilever New Zealand, where an 18-month trial yielded impressive results. Some 80 staff took a third (34%) fewer sick days, stress levels fell by a third (33%), and issues with work-life balance tumbled by two-thirds (67%). An independent team from the University of Technology Sydney monitored the results.

Keogh Consulting CEO Margit Mansfield told ABC Perth that she would advise business leaders considering the 4-day week to first assess the existing flexibility and autonomy arrangements in place – put simply, looking into where and when your staff actually want to work – to determine the most ethically advantageous way to shake things up. 

Mansfield says focussing on redesigning jobs to suit new working environments can be a far more positive experience than retrofitting old ones with new ways. It can mean changing “the whole ecosystem around whatever the reduced hours are, because it’s not just simply, well, ‘just be as productive in four days’, and ‘you’re on five if the job is so big that it just simply cannot be done’.” 

New modes of working, whether in shorter weeks or remote, are also seeing the workplace grappling with a trust revolution. On the one hand, the rise of project management software like Asana is helping managers monitor deliverables and workload in an open, transparent and ethical way, while on the other, controversial tracking software installed on work computers is causing many people, already concerned about their data privacy, to consider other workplaces. 

It is important to recognise that the relationship between employer and employee is not one-sided and the reciprocation of trust is essential for creating a work environment that fosters productivity, innovation and wellbeing.

While employees now anticipate flexibility to maintain a healthy work-life balance, employers also have expectations – one of which is that employees still contribute to the culture of the organisation. 

When employees are engaged and motivated, they are more likely to contribute to the culture of the organisation, which can inform the way the business interacts with society more broadly. Trust reciprocation is not just about meeting individual needs but also about working together on a common purpose. By prioritising the well-being of their employees and empowering them to contribute to the organisation’s culture, employers create a virtuous cycle. Whether this takes the form of a 4-day working week or a hybrid structure is for the employer and employee to explore.

Microsoft CEO Satya Nadella says forming a new working relationship based on trust between all parties can be far more powerful for a business than building parameters around workers. After all, “people come to work for other people, not because of some policy”.


Thought experiment: "Chinese room" argument

If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?

Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.

Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.

But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.

“The Chinese room”

Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.

Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you will be the book of instructions.

Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.

You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.

This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.

Functionalism and Strong AI

Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.

Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does.

This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.

Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.

Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.

ChatGPT room

While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can all see it brought to life whenever we log into ChatGPT. Large language models like ChatGPT are the Chinese room argument made real: they are incredibly sophisticated versions of the filing cabinets, reflecting the corpus of text upon which they’re trained, and of the book of instructions, representing the probabilities used to decide which character or word to display next.
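To make the analogy concrete, here is a minimal, purely illustrative sketch of next-word prediction. The tiny hand-written probability table and the function name are assumptions invented for this example; real large language models learn their probabilities over billions of words using neural networks. The point, though, is the same: the next word is chosen by following stored numbers, not by understanding what the words mean.

```python
import random

# A toy, hand-written table of "next word" probabilities (illustrative values only).
# It plays the role of the Chinese room's book of instructions.
next_word_probs = {
    "the": {"cat": 0.5, "room": 0.3, "answer": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "room": {"was": 1.0},
    "answer": {"is": 1.0},
}

def next_word(previous: str) -> str:
    """Pick the next word by sampling from stored probabilities,
    with no grasp of what any of the words mean."""
    options = next_word_probs.get(previous, {"...": 1.0})
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # e.g. "cat", chosen by probability, not by understanding
```

Scale that table up to billions of learned parameters and you get something much closer to an LLM, but the mechanism remains the same kind of rule-following that Searle describes.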

So even if we feel that ChatGPT – or a future more capable LLM – understands what it’s saying, if we believe that the person in the Chinese room doesn’t understand Chinese, and that LLMs operate in much the same way as the Chinese room, then we must conclude that it doesn’t really understand what it’s saying.

This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty – then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things.

An LLM might still be able to express ethical statements and follow prescribed ethical guidelines imposed by its creators – as has been the case with the creators of ChatGPT limiting its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room.


Ethics Explainer: Moral injury

Moral injury occurs when we are forced to violate our deepest ethical values and it can have a serious impact on our wellbeing.

In the 1980s, the American psychiatrist Jonathan Shay was helping veterans of the war in Vietnam deal with the traumas they had experienced. He noticed that many of his patients were experiencing high levels of despair accompanied by feelings of guilt and shame, along with a decline of trust in themselves and others. This led to them disengaging from their friends, family and society at large, accompanied by episodes of suicidality and interpersonal violence. 

Shay realised that this was not posttraumatic stress disorder (PTSD); this was something different. Shay saw that these veterans were not just traumatised by what had happened to them, they were ‘wounded’ by what they had done to others. He called this new condition “moral injury,” describing it as a “soul wound inflicted by doing something that violates one’s own ethics, ideals, or attachments”.

The “injury” is to our very self-conception as ethical beings, which is a core aspect of our identity. As Shay stated about his patients, moral injury “deteriorates their character; their ideals, ambitions, and attachments begin to change and shrink.”  

Moral injury is, at its heart, an ethical issue. It is caused when we are faced with decisions or directives that force us to challenge or violate our most deeply held ethical values, like if a soldier is forced to endanger civilians or a nurse feels they can’t offer each of their patients the care they deserve due to staff shortages.  

Sometimes this ethical compromise can be caused by the circumstances people are placed in, like working in an organisation that is chronically under-resourced. Sometimes it can be caused by management expecting them to do something that goes against their values, like overlooking inappropriate behaviour among colleagues in the workplace in order to protect high performers or revenue generators. 

Symptoms

There are several common symptoms of moral injury. The first is guilt. This manifests as intense discomfort and hyper-sensitivity towards how others regard us, and can lead to irritability, denial or projection of negative feelings, such as anger, onto others. 

Guilt can tip over into shame, which is a form of intense negative self-evaluation or self-disgust. This is why shame sometimes manifests as stomach pains or digestive issues. Shame can be debilitating and demotivating, causing a negative spiral into despondency. 

Excessive guilt and shame can lead to anxiety, which is a feeling of fear that doesn’t have an obvious cause. Anxiety can cause distraction, irritability, fatigue, insomnia as well as body and muscle aches. 

Moral injury also challenges our self-image as ethical beings, sometimes leading to us losing trust in our own ability to do what is right. This can rob us of a sense of agency, causing us to feel powerless, passive, despondent and resigned to the forces that act upon us. It can also erode our own moral compass and cause us to question the moral character of others, which can further shake our feeling that other people and society at large are guided by ethical principles that we value.

The negative emotions and self-assessment that accompany moral injury can also cause us to withdraw from social or emotional engagement with others. This can involve a reluctance to interact socially as well as empathy fatigue, where we have difficulty or lack the desire to share in others’ emotions. 

Distinctions

Moral injury is often mistaken for PTSD or burnout, but they are different issues. Burnout is a response to chronic stress due to unreasonable demands, such as relentless workloads, long hours and chronic under-resourcing. It can lead to emotional exhaustion and, in extreme cases, depersonalisation, where people feel detached from their lives and just continue on autopilot. But it’s possible to suffer from burnout even if you are not compromising your deepest ethical values; you might feel burnout but still agree that the work you’re doing is worthwhile.

PTSD is a response to witnessing or experiencing intense trauma or threat, especially mortal danger. It can be amplified if the individual survived the danger while those around them, especially close friends or colleagues, did not survive. This could be experienced following a round of poorly managed redundancies, where those who keep their jobs have survivor guilt. Thus, PTSD is typically a response to something that you have witnessed or experienced, whereas moral injury is related to something that you have done (or not been able to do) to others.  

Moral injury affects a wide range of industries and professions, from the military to healthcare to government and corporate organisations, and its impacts can be easily overlooked or mistaken for other issues. But with a greater awareness of moral injury and its causes, we’ll be better equipped to prevent and treat it. 

 

If you or someone you know is suffering from moral injury you can contact Ethi-call, a free and independent helpline provided by The Ethics Centre. Trained counsellors will talk you through the ethical dimension of your situation and provide resources to help understand it and to decide on the best course of action. To book a call visit www.ethi-call.com 

The Ethics Centre is a thought leader in assessing organisational cultural health and building leadership capability to make good ethical decisions. We have helped organisations across a range of industries deal with moral injury, burnout and PTSD. To arrange a confidential conversation contact the team at consulting@ethics.org.au. Or visit our consulting page to learn more.


Ethics Explainer: Longtermism

Longtermism argues that we should prioritise the interests of the vast number of people who might live in the distant future rather than the relatively few people who live today.

Do we have a responsibility to care for the welfare of people in future generations? Given the tremendous efforts people are making to prevent dangerous climate change today, it seems that many people do feel some responsibility to consider how their actions impact those who are yet to be born. 

But if you take this responsibility seriously, it could have profound implications. These implications are maximally embraced by an ethical stance called ‘longtermism,’ which argues we must consider how our actions affect the long-term future of humanity and that we should prioritise actions that will have the greatest positive impact on future generations, even if they come at a high cost today. 

Longtermism is a view that emerged from the effective altruism movement, which seeks to find the best ways to make a positive impact on the world. But where effective altruism focuses on making the current or near-future world as good as it can be, longtermism takes a much broader perspective. 

Billions and billions

The longtermist argument starts by asserting that the welfare of someone living a thousand years from now is no less important than the welfare of someone living today. This is similar to Peter Singer’s argument that the welfare of someone living on the other side of the world is no less ethically important than the welfare of your family, friends or local community. We might have a stronger emotional connection to those nearer to us, but we have no reason to preference their welfare over that of people more spatially or temporally removed from us. 

Longtermists then urge us to consider that there will likely be many more people in the future than there are alive today. Indeed, humanity might persist for many thousands or even millions of years, perhaps even colonising other planets. This means there could be hundreds of billions of people, not to mention other sentient species or artificial intelligences that also experience pain or happiness, throughout the lifetime of the universe.  

The numbers escalate quickly, so if there’s even a 0.1% chance that our species colonises the galaxy and persists for a billion years, then the expected number of future people could be in the hundreds of trillions.
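As a rough, purely illustrative expected-value calculation (the population figures below are assumptions chosen only to show the shape of the argument, not predictions):

\[
\underbrace{10^{10}}_{\text{people alive at any time}} \times \underbrace{\frac{10^{9}\ \text{years}}{10^{2}\ \text{years per lifetime}}}_{\text{successive generations}} = 10^{17}\ \text{people in that scenario}
\]
\[
\text{Expected future people} \approx 0.001 \times 10^{17} = 10^{14}, \ \text{on the order of a hundred trillion.}
\]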

The longtermism argument concludes that if we believe we have some responsibility to future people, and if there are many times more future people than there are people alive today, then we ought to prioritise the interests of future generations over the interests of those alive today.  

This is no trivial conclusion. It implies that we should make whatever sacrifices necessary today to benefit those who might live many thousands of years in the future. This means doing everything we can to eliminate existential threats that might snuff out humanity, which would not only be a tragedy for those who die as a result of that event but also a tragedy for the many more people who were denied an opportunity to be born. It also means we should invest everything we can in developing technology and infrastructure to benefit future generations, even if that means our own welfare is diminished today. 

Not without cost

Longtermism has captured the attention and support of some very wealthy and influential individuals, such as Skype co-founder Jaan Tallinn and Facebook co-founder Dustin Moskovitz. Organisations such as 80,000 Hours also use longtermism as a framework to help guide career decisions for people looking to do the most good over their lifetime.  

However, it also has its detractors, who warn about it distracting us from present and near-term suffering and threats like climate change, or accelerating the development of technologies that could end up being more harmful than beneficial, like superintelligent AI.  

Even supporters of longtermism have debated its plausibility as an ethical theory. Some argue that it might promote ‘fanaticism,’ where we end up prioritising actions that have a very low chance of benefiting a very high number of people in the distant future rather than focusing on achievable actions that could reliably benefit people living today. 

Others question the idea that we can reliably predict the impacts of our actions on the distant future. It might be that even our most ardent efforts today ‘wash out’ into historical insignificance only a few centuries from now and have no impact on people living a thousand or a million years hence. Thus, we ought to focus on the near-term rather than the long-term. 

Longtermism is an ethical theory with real impact. It redirects our attention from those alive today to those who might live in the distant future. Some of the implications are relatively uncontroversial, such as suggesting we should work hard to prevent existential threats. But its bolder conclusions might be cold comfort for those who see suffering and injustice in the world today and would rather focus on correcting that than helping build a world for people who may or may not live a million years from now. 


Ethics Explainer: Truth & Honesty

How do we know we’re telling the truth? If someone asks you for the time, do you ever consider the accuracy of your response? 

In everyday life, truth is often thought of as a simple concept. Something is factual, false, or unknown. Similarly, honesty is usually seen as the difference between ‘telling the truth’ and lying (with some grey areas like white lies or equivocations in between). ‘Telling the truth’ is somewhat of a misnomer, though. Since honesty is mostly about sincerity, people can be honest without being accurate about the truth. 

In philosophy, truth is anything but simple and weaves itself into a host of other areas. In epistemology, for example, philosophers interrogate the nature of truth by looking at it through the lens of knowledge.  

After all, if we want to be truthful, we need to know what is true. 

Figuring that out can be hard, not just practically, but metaphysically.  

Theories of Truth

There are several competing theories that attempt to explain what truth is, the most popular of which is the correspondence theory. Correspondence refers to the way our minds relate to reality. On this view, truth is a belief or statement that corresponds to how the world ‘really’ is, independent of our minds or perceptions of it. As popular as this theory is, it does prompt the question: how do we know what the world is like outside of our experience of it?

Many people, especially scientists and philosophers, have to grapple with the idea that we are limited in our ability to understand reality. For every new discovery, there seems to be another question left unanswered. This being the case, the correspondence theory leaves us with a problem: we can’t confidently call anything true, because we don’t have an accurate understanding of reality.

Another theory of truth is the coherence theory. This states that truth is a matter of coherence within and between systems of beliefs. Rather than the truth of our beliefs relying on a relation to the external world, it relies on their consistency with other beliefs within a system.  

The strength of this theory is that it doesn’t depend on us having an accurate understanding of reality in order for us to speak about something being true. The weakness is that we can imagine several different comprehensive and cohesive systems of belief that contradict one another, and thus different people holding different ‘true’ beliefs that are impossible to adjudicate between.

Yet another theory of truth is the pragmatist theory, although there are a couple of varieties, as with pragmatism in general. Broadly, we can think of pragmatist truth as a more lenient and practical version of the correspondence theory.

For pragmatists, what the world is ‘really’ like only matters as far as it impacts the usefulness of our beliefs in practice.  

So, pragmatist truth is in a sense malleable: like the scientific method it’s closely linked with, it treats truth as a useful tool for understanding the world, but recognises that with new information and experimentation the ‘truth’ will change.

Ethical aspects of truth and honesty 

Regardless of the theory of truth that you subscribe to, there are practical applications of truth that have a significant impact on how to behave ethically. One of these applications is honesty.  

Honesty, in a simple sense, is speaking what we wholeheartedly believe to be true.  

Honesty comes up a lot in classical ethical frameworks and, as with lots of ethical concepts, isn’t as straightforward as it seems. 

In Aristotelian virtue ethics, honesty permeates many other virtues, like friendship, but is also a virtue in itself that lies between habitual lying and boastfulness or callousness. So, a virtue ethicist might say a severe lack of honesty would result in someone who is untrustworthy or misleading, while too much honesty might result in someone who says unnecessary truthful things at the expense of people’s feelings. 

A classic example is a friend who asks you for your opinion on what they’re wearing. Let’s say you don’t think what they’re wearing is nice or flattering. You could be overly honest and hurt their feelings, you could lie and potentially embarrass them, or you could frame your honesty in a way that is moderate and constructive, like “I think this other colour/fit suits you better”.  

This middle ground is also often where consequentialism lands on these kinds of interpersonal truth dynamics because of its focus on consequences. Broadly, the truth is important for social cohesion, but consequentialism might tell us to act with a bit more or a bit less honesty depending on the individual situations and outcomes, like if the truth would cause significant harm. 

Deontology, on the other hand, following in the footsteps of Immanuel Kant, holds honesty as an absolute moral obligation. Kant was known to say that honesty was imperative even if a murderer was at your door asking where your friend was! 

Outside of the general moral frameworks, there are some interesting ethical questions we can ask about the nature of our obligations to truth. Do certain people or relations have a stronger right to the truth? For example, many people find it acceptable and even useful to lie to children, especially when they’re young. Does this imply age or maturity has an impact on our right to the truth? If the answer to this is that it’s okay in a paternalistic capacity, then why doesn’t that usually fly with adults?  

What about if we compare strangers to friends and family? Why do we intuitively feel that our close friends or family ‘deserve’ the truth from us, while someone off the street doesn’t?  

If we do have a moral obligation towards the truth, does this also imply an obligation to keep ourselves well-informed so that we can be truthful in a meaningful way? 

The nature of truth remains elusive, yet the way we treat it in our interpersonal lives is still as relevant as ever. Honesty is a useful and more approachable way of framing many conversations about truth, although it has its own complexities to be aware of, like the limits of its virtue.