Ethics explainer: Cultural Pluralism

Imagine a large, cosmopolitan city, where people from countless backgrounds and with myriad beliefs all thrive together. People embrace different cultural traditions, speak varying languages, enjoy countless cuisines, and educate their children on diverse histories and practices.

This is the kind of pluralism most people are familiar with: a diverse, culturally integrated area like this is a textbook example of cultural pluralism.

Pluralism in a general sense says there can be multiple perspectives or truths that exist simultaneously, even if some of those perspectives are contradictory. It’s contrasted with monism, which says only one kind of thing exists; dualism, which says there are only two kinds of things (for example, mind and body); and nihilism, which says that no things exist.

So, while pluralism more broadly refers to a diversity of views, perspectives or truths, cultural pluralism refers specifically to a diversity of cultures that co-exist – ideally harmoniously and constructively – while maintaining their unique cultural identities.

Sometimes an entire country can be considered culturally pluralistic, and in other places there may be culturally pluralistic hubs (like states or suburbs where there is a thriving multicultural community within a larger more broadly homogenous area).

On the other end of the spectrum is cultural monism, the idea that a certain area or population should have only one culture. Culturally monistic places (for example, Japan or North Korea) rely on an implicit or explicit pressure for others to assimilate. Whereas assimilation involves the homogenisation of culture, pluralism encourages diversity, inviting people of different ethnic groups, backgrounds, religions, practices and beliefs to come together and share in their differences.

A pluralistic society is more welcoming and supportive of minority cultures because people don’t feel pressured to hide or change their identities. Instead, diverse experiences are recognised as opportunities for learning and celebration. This invites travel and immigration, translates into better mental health for migrants, promotes harmony and acceptance of others, and enhances creativity by exposing people to perspectives and experiences outside of their usual remit.

We also know what the alternative is in many cases. Australia has a dark history of assimilation practices, a symptom of racist, colonial perspectives that saw the decimation of First Nations people and their cultures. Cultural pluralism is one response to this sort of cultural domination that has been damaging throughout history and remains so in many places today.

However, there are plenty of ethical complications that arise in the pursuit of cultural plurality.

For example, sociologist Robert D. Putnam published research in 2007 on the negative short- to medium-term effects of ethnically diverse neighbourhoods. He found that, on average, trust, altruism and community cooperation were lower in these neighbourhoods, even between people of the same or similar ethnicities.

While Putnam denied that his findings were anti-multicultural, and argued that diverse societies have several positive long-term effects, the research does indicate some of the risks associated with cultural pluralism. It can take a large amount of effort and social infrastructure to build and maintain diverse communities, and if this fails or is done poorly, it can fragment cultural communities.

This also accords with an argument made by journalist David Goodhart: that people are generally divided into “Anywheres” (people with a mobile identity) and “Somewheres” (people, usually outside of urban areas, with marginalised, long-term, location-based identities). This incongruity, he says, accounts for things like Brexit and the election of Donald Trump, because those movements speak to the Somewheres, who feel threatened by changes to their status quo. Pluralism, Goodhart notes, risks overlooking the discomfort these communities face if they are not properly supported and informed.

Other issues with pluralism include the prioritisation of competing cultural values and traditions. What if one person’s culture is fundamentally intolerant of another person’s culture? This is something we see especially with cultures organised around or heavily influenced by religion. For example, Christianity and Islam are often at odds with many different cultures around issues of sexual preference and gendered rights and responsibilities.

If we are to imagine a truly culturally pluralistic society, how do we ethically integrate people who are intolerant of others?

Pluralism as a cultural ideal also has direct implications for things like politics and law, raising the age-old question about the relationship between morality and the law. If we want a pluralistic society generally, how do the variations in beliefs, values and principles translate into law? Is it better to have a centralised legal system or do we want a legal plurality that reflects the diversity of the area?

This does already exist in some capacity – many countries have Islamic courts that enforce Sharia law for their communities in addition to the overarching governmental law. Parallel legal recognition also exists in some colonised countries, where parts of Indigenous law have been recognised, as in Australia with the Mabo decision.

Another feature of genuine cultural pluralism that has huge ethical implications and considerations is diversity of media. This is the idea that there should be diverse ownership of media (that is, a media system that is not monopolised) and diverse representation in media (that is, media that presents varying perspectives and analyses).

Firstly, this ensures that media, especially news media, stays accountable through comparison and competition, rather than a select powerful few being able to widely disseminate their opinions unchecked. Secondly, it fosters a greater sense of understanding and acceptance by exposing people to perspectives, experiences and opinions that they might otherwise be ignorant or reflexively wary of. Thirdly, as a result, it reduces the risk that media, as a powerful disseminator of culture, could end up creating or reinforcing a monoculture.

While cultural pluralism is often seen as an obviously good thing in western liberal societies, it isn’t without substantial challenges. In the pursuit of tolerance, acceptance and harmony, we must be wary of fragmenting cultures and ensure that diverse communities have adequate social supports to thrive.

Big Thinker: Judith Jarvis Thomson

Judith Jarvis Thomson (1929-2020) was one of the most influential ethicists and metaphysicians of the 20th century. She’s known for changing the conversation around abortion, as well as modernising what we now know as the trolley problem.

Thomson was born in New York City on October 4th, 1929. Her mother was a Catholic of Czech heritage and her father was Jewish; the two met at a socialist summer camp. While her parents were religious, they didn’t impose their beliefs on her.

At the age of 14, Thomson converted to Judaism, after her mother died and her father remarried a Jewish woman two years later. As an adult, she wasn’t particularly religious but she did describe herself publicly as “feel[ing] concern for Israel and for the future of the Jewish people.”   

In 1950, Thomson graduated from Barnard College with a Bachelor of Arts (BA), majoring in philosophy, and then received a second BA in philosophy from Cambridge University in England in 1952. She then went on to receive her Masters in philosophy from Cambridge in 1956 and her PhD in philosophy from Columbia University in New York in 1959.   

Violinists, trolleys and philosophical work

Even though she had received her PhD from Columbia, its philosophy department wouldn’t keep her on as a professor, as it didn’t hire women. In 1962, she began working as an assistant professor at Barnard College, though she later moved to Boston University and then to MIT with her husband, James Thomson, where she spent the majority of her career.

Thomson is most famous for her thought experiments, especially the violinist case and the trolley problem. In 1971, Thomson published her essay "A Defense of Abortion", which presented a new kind of argument for why abortion is permissible, during a time of heightened debate in the US spurred by the second-wave feminist movement. Arguments defending a woman’s right to an abortion circulated in feminist publications and eventually led to the Supreme Court’s ruling in Roe v. Wade (1973).

“Opponents of abortion commonly spend most of their time establishing that the foetus is a person, and hardly any time explaining the step from there to the impermissibility of abortion.” – Judith Jarvis Thomson

The famous violinist case asks us to imagine whether it is permissible to “unplug” ourselves from a famous violinist, even if it is only for nine months and being plugged in is the only thing keeping them alive. As Thomas Nagel said, she "expresses very clearly the essentially negative character of the right to life, which is that it’s a right not to be killed unjustly, and not a right to be provided with everything necessary for life." To this day, the violinist case is taught in classrooms and recognised as one of the most influential thought experiments arguing for the permissibility of abortion.

Thomson is also famous for another thought experiment: the trolley problem. In her 1976 paper "Killing, Letting Die and the Trolley Problem," she articulates a famous thought experiment, first imagined by Philippa Foot, that encourages us to think about the moral relevance of killing people, as opposed to letting people die by doing nothing to save them.

In the trolley problem thought experiment, a runaway trolley will kill five innocent people unless someone pulls a lever. If the lever is pulled, the trolley will divert onto a different track and only one person will die. As an extension of Foot’s argument, Thomson asks us to consider whether there is something different about pushing a large man off a bridge, thereby killing him, to prevent five people from dying from the runaway trolley. Why does it feel different to pull a lever rather than push a person? Both have the same potential outcomes, and both turn on the distinction between killing a person and letting a person die.

In the end, what Thomson finds is that, oftentimes, the action itself, as well as the outcome, is morally relevant in our decision-making process.


Thomson’s extensive philosophical career hasn’t gone unnoticed. In 2012, she was awarded the American Philosophical Association’s prestigious Quinn Prize for her “service to philosophy and philosophers.” In 2015, she was awarded an honorary doctorate by the University of Cambridge, and then in 2016 she was awarded another honorary doctorate from Harvard.   

Thomson continues to inspire women in philosophy. As one of her colleagues, Sally Haslanger, says: “she entered the field when only a tiny number of women even considered pursuing a career in philosophy and proved beyond doubt that a woman could meet the highest standards of philosophical excellence … She is the atomic ice-breaker for women in philosophy.” 

Ethics explainer: Normativity

Have you ever spoken to someone and realised that they’re standing a little too close for comfort?

Personal space isn’t something we tend to actively think about; it’s usually an invisible and subconscious expectation or preference. However, when someone violates our expectations, those expectations suddenly become very clear. If someone stands too close to you while talking, you might become uncomfortable or irritated. If a stranger sits right next to you in a public place when there are plenty of other seats, you might feel annoyed or confused.

That’s because personal space is an example of a norm. Norms are communal expectations, taken up by various populations and usually serving shared values or principles, that direct us towards certain behaviours. For example, the norm of personal space is an expectation that looks different depending on where you are.

In some countries, the norm is to keep distance when talking to strangers, but very close when talking to close friends, family or partners. In other countries, everyone can be relatively close, and in others still, not even close relationships should invade your personal space. This is an example of a norm that we follow subconsciously.

We don’t tend to notice what our expectation even is until someone breaks it, at which point we might think they’re disrespecting personal or social boundaries.

Norms are an embodiment of a phenomenon called normativity, which refers to the tendency of humans and societies to regulate or evaluate human conduct. Normativity pervades our daily lives, influencing our decisions, behaviours, and societal structures. It encompasses a range of principles, standards, and values that guide human actions and shape our understanding of what’s considered right or wrong, good or bad.

Norms can be explicit or implicit, originating from various sources like cultural traditions, social institutions, religious beliefs, or philosophical frameworks. Often norms are implicit because they are unspoken expectations that people absorb as they experience the world around them.

Take, for example, the norms of handshakes, kisses, hugs, bows, and other forms of greeting. Depending on your country, time period, culture, age, and many other factors, some of these will be more common and expected than others. Regardless, though, each of them has a similar role or function, like showing respect, affection or familiarity.

While these might seem like trivial examples, norms have historically played a large role in more significant things, like oppression. Norms are effectively social pressures, so conformity is important to their effect – especially in places or times where the flouting of norms results in some kind of public or social rebuke.

So, norms can sometimes be to the detriment of people who don’t feel their preferences or values reflected in them, especially when conformity itself is a norm. One of the major changes in western liberal society has been the loosening of norms – the growing ability for people to live more authentically as themselves.

Normative Ethics

Normativity is also an important aspect of ethical philosophy. Normative ethics is the philosophical inquiry into the nature of moral judgments and the principles that should govern human actions. It seeks to answer fundamental questions like "What should I do?", "How should I live?" and "Which norms should I follow?". Normative ethical theories provide frameworks for evaluating the morality of specific actions or ethical dilemmas.

Some normative ethical theories include:

  • Consequentialism, which says we should determine moral value based on the consequences of actions.
  • Deontology, which says we should determine moral value by looking at an action’s coherence with consistent duties or obligations.
  • Virtue ethics, which focuses on alignment with various virtues (like honesty, courage, compassion, respect, etc.) with an emphasis on developing dispositions that cultivate these virtues.
  • Contractualism, informed by the idea of the social contract, which says we should act in ways and for reasons that would be agreed to by all reasonable people in the same circumstances.
  • Feminist ethics, or the ethics of care, which says that we should understand and challenge the way that gender has operated to inform historical ethical beliefs and how it still affects our moral practices today.

Normativity extends beyond individual actions and plays a significant role in shaping societal norms, as we saw earlier, but also laws and policies. Norms influence social expectations, moral codes, and legal frameworks, guiding collective behaviour and fostering social cohesion. Sometimes, as in the case of traffic laws, social norms and laws work in a circular way, reinforcing each other.

However, our normative views aren’t static or unchangeable.

Over time, societal norms and values evolve, reflecting shifts in normative perspectives (cultural, social, and philosophical). Often, we see changing social norms culminating in the repeal of outdated laws, laws that accurately reflected the normative views of their time but no longer do.

While it’s ethically significant that norms shift over time and adapt to their context, it’s important to note that these changes often happen slowly. Eventually, changes in norms influence changes in laws, and this can often happen even more slowly, as we have seen with homosexuality laws around the world.

A new era of reckoning: Teela Reid on The Voice to Parliament

Later this year, Australians will be asked to vote in a referendum on a Voice to Parliament. Can this conversation about constitutional recognition for Aboriginal and Torres Strait Islander peoples reconcile the truth of Australia’s past? Or have we embarked on a new era of reckoning, with all the risk that comes with a referendum?

In April 2022, proud Wiradjuri and Wailwan woman and lawyer Teela Reid sat down with Dr Simon Longstaff AO to discuss what reckoning means to her, what we need to make an informed decision on this vote, and what it means for us – collectively and individually.

Dr Simon Longstaff: When I think of reckoning, three things come to mind. Firstly, there’s ‘reckoning’ as in finding your bearings, dead reckoning. There’s another sense in which you reckon up the bill. And then there’s the third sense in which reckoning can be taken as a moment of recognition of one’s responsibilities and making us confront the reality of who and where we are, and what we’ve done. Perhaps we can touch on all three of those…

Teela Reid: My essay, Reckoning, Not Reconciliation, was born out of a frustration with the concept of reconciliation. I had attempted to dismiss it in some ways – whether that was to be provocative or just trying to grapple with my own sense of the world. The past three, almost four, decades have been defined in so-called Australia under this notion of reconciliation. And so, for me, as a Wiradjuri Wailwan woman, the life that I have been fortunate to live had nothing to do with reconciliation, but was in fact in spite of it. For me, having grown up in my community with the stories of my ancestors, my paternal and maternal lineages being hounded onto missions, laws passed so we couldn’t speak our native tongue, this notion of reconciliation kind of popped up.

I remember growing up in that community and then walking into school where we were trained that this notion of reconciliation was to make us feel good. There was almost a sense of denial of the truth. I remember, for example, learning about the Anzacs and there were no soldiers that we were told about that were Aboriginal men or women. And then I’d walk home and my grandparents would sit me around the campfire and give me a whole different education. 

I remember just feeling really frustrated with the world. I want to unravel that and unpack this notion of reckoning. To me, it’s about everyone on this continent embracing the discomfort of the hard work we need to get done.  

Dr Simon Longstaff: How do you look at the obligations you have – in a structure where elders still are the most significant decision makers within a community – and the need to balance that cultural obligation with obligations arising in a structure like the law; where it’s about the quality of reason, the precision of language and the authority of precedent? 

Teela Reid: The western law is a discipline, it’s a practice. I vividly recall being at law school wanting to throw my textbooks up against the library when we were learning about Dicey’s rule of law where we’re all equal before the law. I remember thinking “How do I reconcile this with the stories that I know?” Clearly, the law has never treated my people fairly. There’s never been a fair go in this country for First Nations Peoples. So having those stories in my heart and in my spirit, I still took the opportunity that I was given to go to law school and now to practice law. It’s something I don’t take lightly at all, because I think we would be much worse off as a peoples if we were to take those opportunities for granted and not do the hard yards and live out those opportunities to empower our people.  

Dr Simon Longstaff: Can these different definitions of ‘reckoning’ exist side by side? Or is that a very significant tension in your life?

Teela Reid: It’s a constant tension, to be very frank. It is a very constant tension to not lose yourself as a First Nations woman in this place that’s now called Australia and you’re constantly walking a fine line. When I think about reckoning, it’s also about power. It’s about who has the authority to make decisions in different contexts while at the community level our governance systems operate in a very specific way. I think that’s what Australia forgets. We have very ancient governance systems here still very much intact. They might not be written in legislation or rule books, but they’re passed down orally through our ways of knowing and being. 

We have this higher order in Australia where there’s parliament, there’s states, there’s territories, there’s people making all these different decisions, but at the very heart of that, there is still the omission of the First Nations and there’s still an act of erasure in that. And the symbols are everywhere. It’s in the flag, it’s in the anthem, it’s in these ways that Australians speak their identity that I just don’t relate to. 

Dr Simon Longstaff: The concept of ‘payback’ is sometimes misunderstood as if it’s based in the need for revenge. Instead, it’s about those who have done wrong restoring balance to the community – paying back what has been taken to those who have suffered loss. Is that the kind of reckoning that you have in mind – bringing it to that point where people recognise loss as a give and take?

Teela Reid: It’s about the reconciling of the balance. There does need to be compensation, there does need to be these tough decisions and reparations for what First Nations Peoples have lost there.  

The other way in which I see it, there is a level of discomfort that comes down to this truth telling notion that we’re going to need to embrace. Where Australians are at now – each one of you are advocates, you’re campaigners. You have agency in this movement. Reckoning is going to be a really difficult process, unlike reconciliation where we’ve all felt good with our wraps, our cupcakes and our teas. No, this is going to be quite difficult. 

For so long, for 250+ years, as a nation we’ve avoided that discomfort that comes with trying to heal these wounds. Because you might not see the physical wounds, but they’re very deep and they’re intergenerational. When we think of reckoning, we’re all going to have to step up to the plate. And often what happens in a truth telling process, it’s that First Nations Peoples get the onus of having to speak our truth, when in fact white Australia needs to speak its truth. What did your ancestors do to my ancestors? Let’s start to have an open dialogue and take responsibility for that pain. Because I don’t think that we can heal until we reckon with the discomfort and the pain that I think as a nation is going to take many years to get through. 

Dr Simon Longstaff: Whatever the result, the proposed referendum on a Voice to Parliament is going to be an extraordinary moment. What are your hopes at this point? 

Teela Reid: I hope that people are willing to step up and be engaged and make informed choices for themselves in a conversation that should absolutely be based on the facts and not one in which we should be enlivening racism or anything like that. I do believe it’ll pass. I think there is so much goodwill in the Australian community. 

If you look at history, the most successful referendum, in 1967, shows that Australians actually feel very deeply on this issue. I’ve travelled to lots of different parts of the country, and engaged with fence-sitters or people who want to protest these kinds of conversations. By the end of it, you sit down, you listen, you work through these conversations and people’s hearts really get it.

Dr Simon Longstaff: Is this a beginning of a new set of possibilities in this country? 

Teela Reid: I do believe it is. We all know the Uluru Statement has called for a First Nations Voice. One of the things I am grappling with, both in the legal sense and the moral sense, is the enormous compromise our people have to make decade after decade after decade for this nation to just have a breakthrough. It happened with land rights. The Barunga Statement was gifted to Hawke. Hawke promised a treaty; he gave the nation reconciliation. It’s probably why we’re three, four decades behind right now. I don’t think that Prime Ministers like Hawke should be revered for what they’ve done. Even around when Whitlam came in for his short time in power, there were decades of activism and movements that were demanding big things, big changes. And it was only because of that activism that within those three, four years there was able to be this watershed moment of legislation and changes.


And so here I am thinking now, there’s another compromise being made. Perhaps this is more my moment of having to grapple with the way in which compromises are made in the political space.  

I hope that everyday Australians are able to take this opportunity this year to actually reflect and educate themselves on the bigger picture and part of the story to this. Because we’re only really seeing it through that one little kind of myopic lens now that there’s going to be one ballot box and you’re going to be voting yes or no. But there is so much more to this. 

Simon Longstaff: Are you optimistic that people understand this? Can we resolve the question of legitimacy? 

Teela Reid: I do believe it’ll pass, yes. I think there’s so much good faith and goodwill in the people. The strategy behind this movement is correct. Going to the people and not the politicians is what changed this nation. At every single turning point, the only reason it’s on the national agenda is because of everyday Australians. It won’t be easy. I think that people shouldn’t get complacent about where we are now. Between now and the ballot box, every single one of you is going to have to start your own campaign. Take this extremely seriously. For someone like me, it doesn’t stop.



This is an abridged version of In Conversation with Teela Reid.

Meet David Blunt, our new Fellow exploring the role ethics can play in politics

We’re thrilled to announce we’ve appointed Dr Gwilym David Blunt as an Ethics Centre Fellow.

A writer and commentator on global politics and philosophy, David has spent time as a Senior Lecturer in International Politics at City, University of London and as a Leverhulme Early Career Fellow at the University of Cambridge. Now based in Australia, David has published numerous books, appeared on ABC The Drum and Monocle Daily, and has written for The Conversation, ABC and International Affairs.

To welcome him, we sat down with David to discuss the important role ethics can play when it comes to politics, human rights and philanthropy.

What attracts you to the field of philosophy?

We are often told to ‘go with your gut’ when making decisions. This has always struck me as genuinely terrible advice. Our instincts can be conditioned by any number of prejudices or misconceptions. Philosophy is a way of interrogating these instincts and thinking systematically about hard questions.

How does your background in international politics and human rights shape your approach to philosophy and ethics?

International politics fascinates me because it is often treated as a place where ethics don’t apply. It’s seen as a place in which raw power determines the norms that govern politics, something that few would say is acceptable in domestic politics. Yet, this seems ridiculous.

Take the Russian invasion of Ukraine: many people around the world reacted in horror because wars of aggression are wrong. This is more than an intuition. Self-determination is grounded in an ethical judgement that people have a right to determine their collective destiny without interference. We might question the scope of this right, but most people will agree that at the very minimum it prohibits wars of aggression. There is clearly a place for ethics in world politics.

In fact, it is more than a place: there is an urgency for ethics in world politics. The Covid-19 pandemic clearly showed how humanity as a whole faces shared challenges. The viruses that cause pandemics and the greenhouse gases that cause climate change don’t respect the arbitrary lines we draw on maps. We cannot hide behind closed borders and hope it all just goes away. These threats raise ethical questions about duties and responsibilities, where burdens lie, and what we owe to the future.

What kind of work will you be engaging with at The Ethics Centre?

The Ethics Centre is hosting me while I work on my next book, which is on philanthropy. This is a topic that I’ve been interested in for a long time, but it was on the back burner until the pandemic brought it back into focus. My work usually looks at the grimmer aspects of political philosophy, such as war, terrorism, and extreme inequality. So, it is nice to be working on something that gives some reasons to be hopeful about humanity, although philanthropy raises a lot of interesting questions about justice, reputation laundering, and fairness.

I’ll also be writing some articles and assisting as a media spokesperson.

Which philosopher has most impacted the way you think?

It’s difficult to pick one, but I think it would have to be Philip Pettit, who isn’t a household name but, along with Quentin Skinner, revived republicanism in political philosophy, which is what grounds my work. Now, don’t confuse republicanism with Donald Trump or right-wing MAGA politics, or even with anti-monarchism. Republicanism is at its core a philosophy of freedom; its central claim is that no person can be free if they are under the arbitrary power of someone else.

You have also done a lot of research on poverty and the distribution of wealth. What role do you think philanthropy and charity has or should have in that space?

I like to compare philanthropy with the façade of a building. It is something that beautifies, but it isn’t necessary to create a stable and functional structure. To continue this analogy, the parts of a building that keep it up, the foundations and reinforced concrete, are the province of justice. My worry with philanthropy is that it is taking up those structural roles, that it is subverting the role of justice. This isn’t a trivial matter, because philanthropy is often characterised by the arbitrary power that republicans tend to worry about. Access to healthcare or education should not rest on the whim of a wealthy person, even if they are a good person; these are things that all people are owed as a matter of right.

Your recent book Global Poverty, Injustice and Resistance argues for our right to politically resist. We’re currently seeing some extreme cases of human rights infringements around the world – what’s an example of resistance you’ve noticed that seems to be having a significant effect?

Resistance is such an interesting subject, because it covers such a wide range of activities. Most people tend to think of revolutions or mass civil disobedience as the paradigm examples of resistance, but I find the less visible forms of resistance more compelling.

I think the best example is illegal or irregular immigration for socio-economic reasons. This topic is one that generates a lot of extreme feelings in wealthy states like Australia and the United States, which is why I find it interesting. People who flee extreme poverty are voting with their feet against our current global political system, where many people are denied a reasonably dignified human life simply because they were born in the wrong country.

Many people’s first instinct is to say that illegal immigrants are doing something wrong because they are breaking the law, but this goes back to what I said at the beginning of this interview: our instincts can be wrong.

We need to seriously start asking questions about why people cross borders illegally. It takes a lot for someone to leave their home for a distant land where they might know no one, have to learn new customs and a new language, and might even die on the journey. We need to think about our complicity in the economic systems that produce avoidable poverty and exacerbate climate change – the push factors for this sort of migration. My hope is that this may help to create systemic change.

What are you reading, watching or listening to at the moment?

For work, I’m finishing up Paul Vallely’s massive Philanthropy: From Aristotle to Zuckerberg, which is a good, accessible examination of philanthropy, and re-reading Will MacAskill’s What We Owe the Future for a short piece I’m working on. And I’m also revisiting some of the greatest hits of republicanism from Pettit and Skinner for a new YouTube series I’m doing.

For pleasure, I’m reading Donna Tartt’s The Goldfinch, which is really amazingly written. She’s one of those people who makes me actively jealous of their wordcraft. And I’m reading my wife I, Claudius before bed.

Watching, we are doing a nostalgia trip and rewatching the X-Files, which is fun even if the last few seasons are pretty bad. The sad thing about watching it in 2023 is that fringy, conspiracy theory stuff was just fun entertainment 30 years ago and now it has turned extremely sinister, which kind of drains a bit of the joy out of it.

Listening, I’m a big Last Podcast on the Left fan, because sometimes I just need to learn about cults, aliens, cryptids, and true crime. And since moving to Australia I’ve been getting into Australian music and have been bingeing King Gizzard and the Lizard Wizard.

Lastly, the big one – what does ethics mean to you?

Putting it simply, ethics is my ‘bullshit detector’; it helps me recognise my own bullshit, which keeps me from being complacent, and the bullshit of society at large, which seems to be really piling up.

Big Thinker: Ralph Waldo Emerson

Committed to individualism and credited as the father of transcendentalism, Ralph Waldo Emerson (1803-1882) was an American essayist, lecturer, philosopher and poet.  

Initially on a path to follow in his father’s footsteps and serve in the Christian ministry, Emerson attended Harvard’s Divinity School to become a pastor. But as time went on and he delved deeper into his religious studies, he developed an unignorable sense of detachment and divergence from the traditional religious values he was immersed in. And so he left the Second Unitarian Church and decided to forge his own path.

“To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.”

Emerson’s influential career began with public lectures in Boston that would inspire some of his most renowned essays and ideas. His lectures centred on human culture, English literature, biography and philosophy. He was known for popularising the major movement known as transcendentalism.  

The Father of Transcendentalism  

“Transcendental” was initially coined by philosopher Immanuel Kant in his theory of transcendental idealism. It’s a theory of perception which holds that space and time, along with the objects of our five senses, are features of human experience and don’t exist independently of it.

Even though Kant coined the term, Emerson is regarded as the father of transcendentalism.  

Emerson’s transcendentalism, which became one of America’s first literary and philosophical movements, holds that we ought to be doubtful of knowledge we get from our five senses, or even from logic and reason; the only trustworthy source of knowledge is our personal intuition and self-revelations.

In one of his first lectures, “The Uses of Natural History”, Emerson planted the initial seed for the movement when he explained science as something innately human. He emphasised nature to be an extension of one’s self: “the whole of Nature is a metaphor or image of the human mind.”  

His book-length 1836 essay “Nature” is what officially and explicitly defined transcendentalism.   

In essence, transcendentalists believe nature is paramount: all their ideals are rooted in the natural world. They believe all things are inherently good, humans and nature alike. In much the same way, transcendentalists see the divinity – the “God” – in everything and everyone. As Emerson wrote, “I am part or particle of God.” Transcendentalists also believe in the human potential for achieving greatness and genius. 

Emerson is responsible for introducing a number of people to metaphysical concepts for the first time. A group he helped found in the late 1830s, the Transcendental Club, held conversations considered dangerous at the time, critiquing societal institutions such as organised religion and slavery. Its members included prominent thinkers of the day, like Henry David Thoreau and Margaret Fuller, and the club provided a space for transcendentalist ideas to grow.


The title of one of his most famous essays, “Self-Reliance”, describes one of his principal philosophies: relying solely on ourselves. Emerson’s transcendentalism has been equated to romantic individualism because of his emphasis on the self. For understanding and greatness, Emerson believed we ought only to rely on ourselves and trust our intuition. In fact, he believed the only thing separating the common person from the “greats” is that the greats have the gall to admit precisely what they’re feeling when they feel it. As humans, much of our experience and emotion is shared, and Emerson saw beauty in such commonalities.

At the same time, he cited conformity as a major barrier to achieving greatness. He thought we should be comfortable and proud of being distinctly ourselves. He praised individuality and the pursuit of achieving “an original relation to the universe” by tuning inwards.   

The key to unlocking genius is listening to what Emerson called our “creative insight”. He felt such insight was decidedly divine – God’s way of speaking to each of us individually. This insight is necessary for anyone to accomplish anything meaningful, and so Emerson encouraged everyone to trust their own creative insight over societal expectations. Listening to our divinity, our creative insight, yields a life lived authentically.

“It is easy in the world to live after the world’s opinion; it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.”

It’s these transcendentalist ideas that would eventually inspire philosopher Henry David Thoreau to reject society and go into the woods in order “to live deliberately and front only the essential facts of life”. And that same line of thinking is what inspired Christopher McCandless, an infamous American adventurer, to abandon his family and escape to Fairbanks, Alaska in the 1990s. His story of living in the solitude of the wilderness was later popularised in the book and film Into the Wild.

Although some find wisdom and beauty in Emerson’s fierce admiration of solitude and complete rejection of groupthink, others see privilege in his ideals. Not everyone is able to exercise free will; not everyone can afford to stray from the norm and escape their social circumstances. And so to some, his ideas are lofty and unattainable unless you have the power of class and money on your side.

Beyond privilege, others see selfishness in his philosophies. By tuning inwards and considering only our own needs and desires, what is lost? What might we sacrifice when we neglect those around us? When we disregard even our loved ones? And yet, Emerson never said anything definitively: 

“But it is the fault of our own rhetoric that we cannot strongly state one fact without seeming to belie some other. I hold our actual knowledge is very cheap.”

Ethics explainer: Nihilism

“If nothing matters, then all the pain and guilt you feel for making nothing of your life goes away.” – Jobu Tupaki, Everything Everywhere All At Once 

Do our lives matter? 

Nihilism is a school of philosophical thought proposing that our existence fundamentally lacks inherent meaning. It rejects various aspects of human existence that are generally accepted and considered fundamental, like objective truth, moral truth and the value and purpose of life. Its origin is the Latin word ‘nihil’, which means ‘nothing’.  

The most common branches of nihilism are existential and moral nihilism, though there are many others, including epistemological, political, metaphysical and medical nihilism. 

Existential nihilism  

In popular use, nihilism usually refers to existential nihilism, a precursor to existentialist thought. This is the idea that life has no inherent meaning, value or purpose and it’s also often (because of this) linked with feelings of despair or apathy. Nihilists in media are usually portrayed as moody, brooding or radical types who have decided that we are insignificant specks floating around an infinite universe, and that therefore nothing matters.  

Nihilist ideas date as far back as the Buddha, though nihilism began its rise in Western literature in the early 19th century. This shift was largely a response to the diminishing moral authority of the church (and religion at large) and the rise of secularism and rationalism. This rejection led to the view that the universe had no grand design or purpose – that we are all simply cogs in the machine of existence.

Though he wasn’t a nihilist himself, Friedrich Nietzsche is the poster-child for much of contemporary nihilism, especially in pop culture and online circles. Nietzsche wrote extensively on it in the late 19th century, speaking of the crisis we find ourselves in when we realise that the world lacks the intrinsic meaning or value that we want or believed it to have. This is ultimately something that he wanted us to overcome.  

He saw humans responding to this crisis in two ways: passive or active nihilism.  

For Nietzsche, passive nihilists are those who resign themselves to the meaninglessness of life, slowly separating themselves from their own will or desires to minimise the suffering they face from the random chaos of the world. 

In media, this kind of pessimistic nihilism is sometimes embodied by characters who then act on it in a destructive way. For example, the antagonist Jobu Tupaki in Everything Everywhere All At Once comes to this realisation through her multi-dimensional awareness, which convinces her that, because of the infinite nature of reality, none of her choices matter; she attempts to destroy herself to escape the insignificance and meaninglessness she feels.

Jobu Tupaki, Everything Everywhere All At Once (2022)

Active nihilists instead see nihilism as a freeing condition, revealing a world where they are emboldened to create something new on top of the destruction of the old values and ways of thinking.  

Nietzsche’s idea of the active nihilist is the Übermensch (“superman”), a person who overcomes the struggle of nihilism by working to create their own meaning in the face of meaninglessness. They see the absurdity of life as something to be embraced, giving them the ability to live in a way that enforces their own values and “levels the playing field” of past values.  

Moral nihilism

Existential nihilism often gives way to moral nihilism, the idea that morality doesn’t exist – that no moral choice is preferable to any other. After all, if our lives don’t have intrinsic meaning, if objective values don’t exist, then by what standard can we call actions right or wrong? We normally see this kind of nihilism embodied by anarchic characters in media.

An infamous example is the Joker from the Batman franchise. Especially in renditions like The Dark Knight (2008) and Joker (2019), the Joker is portrayed as someone whose expectations of the world have failed him, whose torturous existence has led him to believe that nothing matters, that the world doesn’t care, and that in the face of that, we shouldn’t care about anything or anyone either. In his words, “everything burns” in the end, so he sees no problem in hastening that destruction, and ultimately the destruction of himself.

The Joker, 2019

“Now comes the part where I relieve you, the little people, of the burden of your useless lives.”

The Joker epitomises the popular understanding of nihilism and one of the primary ethical risks of this philosophical worldview. For some people, viewing their lives as lacking inherent meaning or value causes a psychological spiral into apathy.

This spiral can cause people to become self-destructive, reclusive, suicidal and otherwise hasten towards “nothingness”. In others, it can cause outwardly destructive actions because of their perception that since nothing matters in some kind of objective sense, they can do whatever they want (think American Psycho).  

Nihilism has particularly flourished in many online subcultures, fuelling the apathy of edgelords towards the plights of marginalised populations and often resulting in a tendency towards verbal and physical violence. One of the major challenges of nihilism, historically and today, is that it’s not obviously false. This is where we rely on philosophy to be able to justify why any morality should exist at all. 

Where to go from here

A common thread runs through many of the nihilist and existentialist writers about what we should do in the face of inherent meaninglessness: create it ourselves. 

Existentialists like Simone de Beauvoir and Jean-Paul Sartre talk about the importance of recognising the freedom that this kind of perspective gives us. And, equally, the importance of making sure that we make meaning for ourselves and for others through our life. 

For some people, that might be a return to religion. But there are plenty of other ways to create meaning in life: focusing on what’s subjectively meaningful to you or those you care about and fully embracing those things. Existence doesn’t need to have intrinsic meaning for us to care. 

A framework for ethical AI

Artificial intelligence has untold potential to transform society for the better. It also has equal potential to cause untold harm. This is why it must be developed ethically.

Artificial intelligence is unlike any other technology humanity has developed. It will have a greater impact on society and the economy than fossil fuels, it’ll roll out faster than the internet and, at some stage, it’s likely to slip from our control and take charge of its own fate.

Unlike other technologies, AI – particularly artificial general intelligence (AGI) – is not the kind of thing that we can afford to release into the world and wait to see what happens before regulating it. That would be like genetically engineering a new virus and releasing it into the wild before knowing whether it infects people.

AI must be carefully designed with purpose, developed to be ethical and regulated responsibly. Ethics must be at the heart of this project, both in terms of how AI is developed and also how it operates.

This sentiment is the main reason why many of the world’s top AI researchers, business leaders and academics signed an open letter in March 2023 calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, in order to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”.

Some don’t think a pause goes far enough. Eliezer Yudkowsky, the lead researcher at the Machine Intelligence Research Institute has called for a complete, worldwide and indefinite moratorium on training new AI systems. He argued that the risks posed by unrestrained AI are so great that countries ought to be willing to use military action to enforce the moratorium.

It is probably impossible to enforce a pause on AI development without backing it with the threat of military action. Few nations or businesses will willingly risk falling behind in the race to commercialise AI, yet few governments are likely to be willing to go to war to force them to pause.

While a pause is unlikely to happen, the ethical challenge facing humanity is that the pace of AI development is significantly faster than the pace at which we can deliberate and resolve ethical issues. The commercial and national security imperatives are also hastening the development and deployment of AI before safeguards have been put in place. The world now needs to move with urgency to put these safeguards in place.

Ethical by design

At the centre of ethics is the notion that we must take responsibility for how our actions impact the world, and we should direct our action in ways that are beneficent rather than harmful.

Likewise, if AI developers wish to be rewarded for the positive impact that AI will have on the world, such as by deriving a profit from the increased productivity afforded by the technology, then they must also accept responsibility for the negative impacts caused by AI. This is why it is in their interest (and ours) that they place ethics at the heart of AI development.

The Ethics Centre’s Ethical by Design framework can guide the development of any kind of technology to ensure it conforms to essential ethical standards. This framework should be used by those developing AI, by governments to guide AI regulation, and by the general public as a benchmark to assess whether AI conforms to the ethical standards they have every right to expect.

The framework includes eight principles:

Ought before can

This refers to the fact that just because we can do something, it doesn’t mean we should. Sometimes the most ethically responsible thing is to not do something.

If we have reasonable evidence that a particular AI technology poses an unacceptable risk, then we should cease development, or at least delay until we are confident that we can reduce or manage that risk.

We have precedent in this regard. There are bans in place on several technologies, such as human genetic modification and biological weapons, either imposed by governments or self-imposed by researchers, because the technologies pose an unacceptable risk or would violate ethical values. There is nothing in principle stopping us from deciding to do likewise with certain AI technologies, such as those that allow the production of deep fakes, or fully autonomous AI agents.


Non-instrumentalism

Most people agree we should respect the intrinsic value of things like humans, sentient creatures, ecosystems or healthy communities, among other things, and not reduce them to mere ‘things’ to be used for the benefit of others.

So AI developers need to be mindful of how their technologies might appropriate human labour without offering compensation, as has been highlighted with some AI image generators that were trained on the work of practising artists. It also means acknowledging that job losses caused by AI have more than an economic impact and can injure the sense of meaning and purpose that people derive from their work.

If the benefits of AI come at the cost of things with intrinsic value, then we have good reason to change the way it operates or delay its rollout to ensure that the things we value can be preserved.


Self-determination

AI should give people more freedom, not less. It must be designed to operate transparently so individuals can understand how it works, how it will affect them, and then make good decisions about whether and how to use it.

Given the risk that AI could put millions of people out of work, reducing incomes and disempowering them while generating unprecedented profits for technology companies, those companies must be willing to allow governments to redistribute that new wealth fairly.

And if there is a possibility that AGI might use its own agency and power to contest ours, then the principle of self-determination suggests that we ought to delay its development until we can ensure that humans will not have their power of self-determination diminished.


Responsibility

By its nature, AI is wide-ranging in application and potent in its effects. This underscores the need for AI developers to anticipate and design for all possible use cases, even those that are not core to their vision.

Taking responsibility means developing AI with an eye to reducing the possibility of these negative use cases becoming a reality, and mitigating their effects when they do occur.

Net benefit

There are few, if any, technologies that offer pure benefit without cost. Society has proven willing to adopt technologies that provide a net benefit as long as the costs are acknowledged and mitigated. One case study is the fossil fuel industry. The energy generated by fossil fuels has transformed society and improved the living conditions of billions of people worldwide. Yet once the public became aware of the cost that carbon emissions impose on the world via climate change, it demanded that emissions be reduced in order to bring the technology towards a point of net benefit over the long term.

Similarly, AI will likely offer tremendous benefits, and people might be willing to incur some high costs if the benefits are even greater. But this does not mean that AI developers can ignore the costs or avoid taking responsibility for them.

An ethical approach means doing whatever they can to reduce the costs before they happen and mitigating them when they do, such as by working with governments to ensure there are sufficient technological safeguards against misuse and social safety nets in place should the costs rise.


Fairness

Many of the latest AI technologies have been trained on data created by humans, and they have absorbed the many biases built into that data. This has resulted in AI acting in ways that negatively discriminate against people of colour or those with disabilities. There is also a significant global disparity in access to AI and the benefits it offers. These are cases where the AI has failed the fairness test.

AI developers need to remain mindful of how their technologies might act unfairly and how the costs and benefits of AI might be distributed unfairly. Diversity and inclusion must be built into AI from the ground level through training data and methods, and AI must be continuously monitored to see if new biases emerge.


Accessibility

Given the potential benefits of AI, it must be made available to everyone, including those who might have greater barriers to access, such as those with disabilities, older populations, or people living with disadvantage or in poverty. AI has the potential to dramatically improve the lives of people in each of these categories, if it is made accessible to them.


Purpose

Purpose means being directed towards some goal or solving some problem. And that problem needs to be more than just making a profit. Many AI technologies have wide applications, and many of their uses have not even been discovered yet. But this does not mean that AI should be developed without a clear goal and simply unleashed into the world to see what happens.

Purpose must be central to the development of ethical AI so that the technology is developed deliberately with human benefit in mind. Designing with purpose requires honesty and transparency at all stages, which allows people to assess whether the purpose is worthwhile and achieved ethically.

The road to ethical AI

We should continue to press for AI to be developed ethically. And if technology companies are reluctant to pay careful attention to ethics, then we should call on our governments to impose sensible regulations on them.

The goal is not to hinder AI but to ensure that it operates as intended and that the benefits flow on to the greatest possible number. AI could usher in a fourth industrial revolution. It would pay for us to make this one even more beneficial and less disruptive than the past three.

As a Knowledge Partner in the Responsible AI Network, The Ethics Centre helps provide vision and discussion about the opportunity presented by AI.

The ethical dilemma of the 4-day work week

Ahead of an automation and artificial intelligence revolution, and a possible global recession, we are sizing up ways to ‘work smarter, not harder’. Could the 4-day work week be the key to helping us adapt and thrive in the future?

As the workforce plunged into a pandemic that upended our traditional work hours, workplaces and workloads, we received the collective opportunity to question the 9-5, Monday to Friday model that has driven the global economy for the past several decades.

Workers were astounded by what they’d gained back from working remotely and with more flexible hours. Not only did the care of elderly, sick or young people become easier from the home office, but also hours that were previously spent commuting shifted to more family and personal time. 

This change in where we work sparked further thought about how much time we spend working. In 2022, the largest and most successful trial of a four-day working week delivered impressive results. Some 92% of the 61 UK companies that participated in a two-month trial of the shorter week declared they’d be sticking with the 100:80:100 model (100 per cent of pay for 80 per cent of the hours, in exchange for maintaining 100 per cent of output), in what 4 Day Week director Joe Ryle called a “major breakthrough moment” for the movement.

Momentum Mental Health chief executive officer Debbie Bailey, who participated in the study, said her team had maintained productivity and increased output. But what had stirred her more deeply was a measurable “increase in work-life balance, happiness at work, sleep per night, and a reduction in stress” among staff. 

However, Bailey said, the shorter working week must remain viable for her bottom line, something she ensures through a tailor-made ‘Rules of Engagement’ in her team. “For example, if we don’t maintain 100 per cent outputs, an individual or the full team can be required to return to a 5-day week pattern,” she explained. 

Beyond staff satisfaction, a successful implementation of the 4-day week model could also boost the bottom line for businesses.

Reimagining a more ethical working environment, advocates say, can yield comprehensive social benefits, including balancing gender roles, elongated lifespans, increased employee well-being, improved staff recruitment and retention and a much-needed reduction in workers’ carbon footprint as Australia works towards net-zero by 2050. 

University of Queensland Business School’s associate professor Remi Ayoko says working parents with a young family will benefit the most from a modified work week, with far greater leisure time away from the keyboard offering more opportunity for travel and adventure further afield, as well as increased familial bonding and life experiences along the way.  

However, similar to remote work, the 4-day working week has not been without its criticisms. Workplace connectivity is one aspect that can fall by the wayside when implementing the model – a valuable culture-building part of work, according to the University of Auckland’s Helen Delaney and Loughborough University’s Catherine Casey. 

Some workers reported that the urgency and pressure were causing “heightened stress levels”, leaving them in need of the additional day off to recover from the intensity of work. This raises the question of whether it is ethical for a workplace to demand a more robotic and less human-focussed performance.

In November last year, Australian staff at several of Unilever’s household names, including Dove, Rexona, Surf, Omo, TRESemmé, Continental and Streets, trialled a 100:80:100 model in the workplace. Factory workers did not take part due to union agreements.

To maintain productivity, Unilever staff were advised to cut “lesser value” activities during working hours, like superfluous meetings and the use of staff collaboration tool Microsoft Teams, in order to “free up time to work on items that matter most to the people we serve, externally and internally”. 

If that instruction raised eyebrows, sceptics needed only look across the ditch at Unilever New Zealand, where an 18-month trial yielded impressive results. Some 80 staff took a third (34%) fewer sick days, stress levels fell by a third (33%), and issues with work-life balance tumbled by two-thirds (67%). An independent team from the University of Technology Sydney monitored the results.

Keogh Consulting CEO Margit Mansfield told ABC Perth that she would advise business leaders considering the 4-day week to first assess the existing flexibility and autonomy arrangements in place – put simply, looking into where and when your staff actually want to work – to determine the most ethically advantageous way to shake things up. 

Mansfield says focussing on redesigning jobs to suit new working environments can be a far more positive experience than retrofitting old ones with new ways. It can mean changing “the whole ecosystem around whatever the reduced hours are, because it’s not just simply, well, ‘just be as productive in four days’, and ‘you’re on five if the job is so big that it just simply cannot be done’.” 

New modes of working, whether in shorter weeks or remote, are also seeing the workplace grappling with a trust revolution. On the one hand, the rise of project management software like Asana is helping managers monitor deliverables and workload in an open, transparent and ethical way, while on the other, controversial tracking software installed on work computers is causing many people, already concerned about their data privacy, to consider other workplaces. 

It is important to recognise that the relationship between employer and employee is not one-sided and the reciprocation of trust is essential for creating a work environment that fosters productivity, innovation and wellbeing.

While employees now anticipate flexibility to maintain a healthy work-life balance, employers also have expectations – one of which is that employees still contribute to the culture of the organisation. 

When employees are engaged and motivated, they are more likely to contribute to the culture of the organisation, which can inform the way the business interacts with society more broadly. Trust reciprocation is not just about meeting individual needs but also about working together on a common purpose. By prioritising the wellbeing of their employees and empowering them to contribute to the culture of the organisation, employers create a virtuous cycle. Whether this takes the form of a 4-day working week or a hybrid structure is for the employer and employee to explore.

Microsoft CEO Satya Nadella says forming a new working relationship based on trust between all parties can be far more powerful for a business than building parameters around workers. After all, “people come to work for other people, not because of some policy”.

Thought experiment: "Chinese room" argument

If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?

Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.

Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.

But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.

“The Chinese room”

Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.

Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you is the book of instructions.

Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.

You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.

This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.
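The procedure the room's occupant follows can be sketched as a simple lookup: input symbols go in, the rules are applied mechanically, and output symbols come out, with no comprehension anywhere in the loop. This is only an illustrative toy (the rule entries and names below are invented, and Searle's instruction book would be vastly more elaborate), but it captures the structure of the argument.

```python
# Toy sketch of the Chinese room: the "instruction book" is a plain
# lookup table mapping incoming symbol sequences to outgoing ones.
# The entries here are invented for illustration only.

RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会",       # "Do you speak Chinese?" -> "Yes"
}

def room_operator(slip: str) -> str:
    """Follow the instructions mechanically; no meaning is involved."""
    # The operator matches the incoming cards against the book and
    # returns whatever cards the book dictates (or nothing).
    return RULE_BOOK.get(slip, "")

# The reply looks sensible to a Chinese speaker outside the room,
# even though the "operator" understands none of it.
print(room_operator("你好吗"))
```

The point of the sketch is that nothing in `room_operator` needs to know what any symbol means; correct-looking output falls out of rule-following alone.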

Functionalism and Strong AI

Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.

Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does.

This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.

Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.

Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.

ChatGPT room

While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can see it made real whenever we log into ChatGPT. Large language models like ChatGPT are, in effect, the Chinese room realised: the corpus of text on which they’re trained plays the role of an incredibly sophisticated set of filing cabinets, and the probabilities used to decide which character or word to display next play the role of the book of instructions.
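That next-word process can be sketched in a few lines. The probability table below is invented purely for illustration (a real LLM derives its probabilities from a neural network trained on a vast corpus, not a hand-written table), but the selection step is structurally similar: given the text so far, weigh the candidates and pick one.

```python
import random

# Invented, hand-written probabilities standing in for what a trained
# model would compute. This is an analogy for next-word selection,
# not an implementation of any real LLM.
NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.7, "chair": 0.2, "roof": 0.1},
}

def next_word(context: str) -> str:
    """Sample the next word from the model's probability distribution."""
    probs = NEXT_WORD_PROBS[context]
    words, weights = zip(*probs.items())
    # Weighted random choice: likelier words are picked more often.
    return random.choices(words, weights=weights)[0]

print(next_word("the cat sat on the"))
```

As in the Chinese room, the procedure produces plausible-looking text without anything in it that obviously amounts to understanding what the words mean.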

So even if it feels as though ChatGPT – or a future, more capable LLM – understands what it’s saying, if we believe that the person in the Chinese room doesn’t understand Chinese, and that LLMs operate in much the same way as the Chinese room, then we must conclude that they don’t really understand what they’re saying.

This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty, then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things.

An LLM might still be able to express ethical statements and follow ethical guidelines prescribed by its creators – as has been the case with the creators of ChatGPT limiting its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room.