The ethics of AI’s untaxed future
Opinion + Analysis, Science + Technology, Business + Leadership
BY Dia Bianca Lao 24 NOV 2025
“If a human worker does $50,000 worth of work in a factory, that income is taxed. If a robot comes in to do the same thing, you’d think we’d tax the robot at a similar level,” Bill Gates famously said. His call raises an urgent ethical question now facing Australia: When AI replaces human labour, who pays for the social cost?
As AI becomes a cheaper alternative to human labour, the question is no longer if it will dramatically reshape the workforce, but how quickly, and whether the nation’s labour market can adapt in time.
New technology always seems like the stuff of science fiction until it slips seamlessly from novelty to necessity. Today AI is past its infancy and is shaping real-world industries. The simultaneous emergence of diverse use cases and the maturing of automation technology underscores how rapidly it is evolving, turning this threat into reality sooner than we might think.
Historically, automation tended to focus on routine physical tasks, but today’s frontier extends into cognitive domains. Unlike past innovations that still relied on human oversight, the autonomous nature and broader capabilities of emerging technologies threaten to make human labour obsolete.
While history shows that technological revolutions have ultimately improved output, productivity, and wages in the long term, the present wave may prove more disruptive than those before it. In 2017, Bill Gates foresaw this looming paradigm shift and famously argued that companies should pay a ‘robot tax’ to moderate the pace at which AI displaces human jobs and to help fund other types of employment.
Without any formal measures, the costs of AI-driven displacement will likely mostly fall on workers and society, while companies reap the benefits with little accountability.
According to the World Economic Forum, AI is predicted to create 69 million new jobs while 83 million existing jobs may be phased out by 2027, a net decrease of 14 million jobs, or approximately 2% of current employment. The Forum also projects that 23% of jobs globally will change in the next five years, driven by advances in technology. While the full impact is not yet visible in official employment statistics, the shift toward reducing reliance on human labour through automation and AI is already underway, with entry-level roles and jobs in logistics, manufacturing, administration, and customer service the most affected.
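A quick check of the arithmetic helps clarify the base behind that 2% figure. The assumption here is that the percentage is taken against the roughly 673 million jobs covered by the WEF’s Future of Jobs dataset, not total global employment, which is far larger:

$$ 69 - 83 = -14 \text{ million jobs}, \qquad \frac{14}{673} \approx 2.1\% $$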
For example, Aurora’s self-driving trucks are already making regular round trips on public roads in the U.S., delivering time- and temperature-sensitive freight, while Meituan is making drone deliveries increasingly common in China’s major cities. We now live in a world where you can get your boba milk tea delivered by drone in under 20 minutes in places like Shenzhen. Meanwhile in Australia, Rio Tinto has deployed fully autonomous trains and haul trucks across its Pilbara iron ore mines, increasing operational time and contributing to a 15% reduction in operating costs.
Companies have already begun recalibrating their workforces, and there is no stopping this train. In the past 12 months, CBA and Bankwest have cut hundreds of jobs across various departments despite rising profits. Forty-five of those roles were replaced by an AI chatbot handling customer queries, while the permanent closure of all Bankwest branches has seen the bank become digital-only, with no intention of bringing back the lost positions. While some argue that redeployment opportunities exist or that new jobs might emerge, the details remain vague.
Is it possible to fully automate an economy and eliminate the need for jobs? Elon Musk certainly thinks so. It’s no wonder that a growing number of tech elite are investing heavily to replace human labour with AI. From copywriting to coding, AI has proven its versatility in speeding up productivity in all aspects of our lives. Its potential for accelerating innovation, improving living standards and economic growth is unparalleled, but at what cost?
What counts as good for the economy has historically benefited a select few, with technology frequently being a catalyst for this dynamic. For example, the benefits of the Industrial Revolution, including the creation of new industries and increased productivity, were initially concentrated in the hands of those who owned the machinery and capital, while the widespread benefits trickled down later. Without ethical frameworks in place, AI is positioned to compound this inequality.
Some proposals argue that lowering taxes on human labour while raising them on AI machines and tools could encourage companies to treat AI as a complement to human workers rather than a replacement. Such a levy could also give governments a means to distribute AI’s socioeconomic impacts more fairly, potentially funding retraining or income support for displaced workers.
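To make the proposal concrete, here is a stylised sketch of how such a labour-equivalence levy might be calculated, using the $50,000 example from Gates’s quote. The 25% effective tax rate is a purely hypothetical illustration, not a figure from any actual proposal:

$$ \text{levy} = t_{\text{labour}} \times W_{\text{displaced}} = 0.25 \times \$50{,}000 = \$12{,}500 \text{ per year} $$

Here $W_{\text{displaced}}$ is the annual wage of the role being automated and $t_{\text{labour}}$ is the effective rate that wage would have attracted had it been paid to a human. Estimating $W_{\text{displaced}}$ in practice is precisely the attribution problem critics raise below.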
If a robot tax is such a good idea, why did the European Parliament reject it? Many argue that taxing productivity tools could hinder competitiveness. Without global coordination, if one country taxes AI and others don’t, an uneven playing field emerges and innovation may be stifled. How would policymakers even define which companies qualify for the levy, or measure how much to tax, when profits derived from AI are hard to attribute? Unlike taxing human workers’ earnings, taxing AI isn’t straightforward.
The challenge persists: developing policies that incentivise innovation while ensuring its benefits and burdens are shared responsibly across society. The government’s focus on retraining and upskilling workers is a good start, but such programs cannot address all the challenges of automation fast enough. Relying solely on them risks overlooking structural inequities, such as the disproportionate impact on lower-income or older workers in certain industries, and long-term displacement, where entire job categories may vanish faster than workers can be retrained.
Our fiscal policies should adapt to the evolving economic landscape to help smooth this shift and fund social safety nets. A declining share of human labour in production will significantly erode government revenue unless new ways of taxing capital are introduced.
While a blanket “robot tax” is impractical at this stage, incremental changes to existing taxation policies targeting the sectors most vulnerable to disruption are a possibility. Ideally, policies should distinguish between technologies that substitute for human labour and those that complement it, disincentivising only the former. While this distinction can be challenging to draw, it offers a way to slow job displacement, giving workers and welfare systems more time to adapt, while generating revenue to support the transition without hindering productivity.
As Microsoft’s CEO Satya Nadella warns, “With this empowerment comes greater human responsibility — all of us who build, deploy, and use AI have a collective obligation to do so responsibly and safely, so AI evolves in alignment with our social, cultural, and legal norms. We have to take the unintended consequences of any new technology along with all the benefits, and think about them simultaneously.”
The challenge in integrating AI more equitably into the economy is ensuring that its broad societal benefits are amplified while its unintended negative consequences are reduced. AI has the potential to fundamentally accelerate innovation for the public good, but only if progress is tied to equitable frameworks and ethical adoption.
Australia already regulates specific harms of AI, protecting privacy and personal information through the Privacy Act 1988 and addressing bias through the Australian Privacy Principles (APPs). These examples show that targeted regulation is possible. The next step should include ethical guardrails for AI-driven job displacement, such as more equitable taxation, redistribution policies, and accountability frameworks, before it’s too late. This transformation will require collaboration among governments, companies, and global organisations to build a resilient and inclusive AI-powered future.
The ethics of AI’s untaxed future by Dia Bianca Lao is one of the Highly Commended essays in our Young Writers’ 2025 Competition. Find out more about the competition here.

BY Dia Bianca Lao
Dia Bianca Lao is a marketer by trade but a writer at heart, with a passion for exploring how ethics, communication, and culture shape society. Through writing, she seeks to make sense of complexity and spark thoughtful dialogue.

Who gets heard? Media literacy and the politics of platforming
Opinion + Analysis, Society + Culture, Politics + Human Rights
BY Aubrey Blanche 20 NOV 2025
In an age where emotionally charged information travels faster than ever, the ability to critically evaluate what we read, watch, and share is the cornerstone of responsible citizenship. It’s essential that we’re able to strengthen our media literacy skills to detect bias, maintain trust and safeguard the integrity of public discourse.
Since the rise of generative AI, we have become inundated with content. It’s easier to generate and disseminate information than ever before. When content is cheap, curation and discernment become ever more critical.
And this matters – how information travels influences policy, the bounds of acceptable discourse, and ultimately how our society functions. This means that each of us, whether a social media user, producer of a major national news program, or just someone chatting with a friend, has an obligation to ensure that the truth we share contributes to human flourishing.
What is verifiable?
The reality is that a huge amount of the information we have access to is, well, fake. Media literacy equips people to recognise bias, detect misinformation, and understand the motives behind content creation. Without these skills, we and those around us become vulnerable to manipulation, whether through sensational headlines, deepfakes, or algorithm-driven echo chambers.
Verifying information before sharing is a critical part of media literacy. Every click and share amplifies a message, whether true or false. For example, deepfakes of Scott Morrison and other politicians have been used to perpetuate investment scams. When inaccurate information spreads unchecked, it can fuel polarisation, erode trust in institutions, and even endanger lives. Taking a moment to analyse the source, check for corroboration in other trusted places, and question the credibility of a claim is a small act with enormous impact. It transforms passive consumers into active participants in a healthier information ecosystem.
Whose truth?
What we see as objective is intrinsically shaped by the voices we are exposed to, and how often. This is as true for our social media feeds as it is for the nightly news. The messages that reach us are all in some way biased, meaning that they carry some kind of embedded agenda. But what we most often mean when we talk about bias is a systematic and repeated filtering or skewing of information to conform to a particular or narrow agenda or worldview.
Because of this, we should be wary of potential bias when issues are covered by individuals or organisations with financial or political interests at stake. For example, jurisdictions around the world are currently wrestling with how the training of large language models (LLMs) relates to fair use of copyrighted work. In Australia, the government recently ruled out a special exception that would have allowed AI models to be trained on Australian works without explicit permission or payment. While public consultation was conducted, prominent voices didn’t all agree that protecting the labour and output of creators should be a priority.
Before the federal government made its determination, journalists consulted powerful members of the technology industry for their views on how the interests of AI labs and creators should be weighed. These voices included people whose financial holdings include investments in the very type of AI companies (and methods) being discussed. Platforming voices with these interests risks setting the terms of the debate as more pro-technology, rather than encouraging a balance of perspectives.
While this specific issue is critical because it affects all of us, it also illustrates how we can practise media responsibly:
- When we share information, we should consider the interests and ideological alignment of those doing the sharing.
- Where possible, we should seek to provide a balanced set of perspectives – ideally one in which any conflicts of interest are clearly disclosed.
Who gets platformed?
There is no free marketplace of ideas. The question of which voices and perspectives are platformed and held up as truth, whether in the media or on our feeds, greatly impacts our own narratives of events. For example, since October 2023, more than 67,000 Gazans have been killed by Israeli forces. While the genocide has received significant media coverage, the perspectives of the people impacted haven’t been equitably represented in mainstream media sources. Recently, a study found that only one Palestinian guest had been booked to share their views on major US Sunday news shows (which set the national agenda) in the last two years. In the same period, 48 Israeli guests had been given airtime.
If we assume that Israeli guests do not have a monopoly on the truth, this pattern looks alarming. While the study didn’t speak to the views of the guests, a reasonable person would assume that such a skew in the identities and affiliations represented presents a rather one-sided view of events on the ground.
In this case, the platforming also speaks to the relative power of the perspectives. Despite the Palestinian community having the greatest lived experience of harm, their voices are effectively silenced. As we consider whose information we share, we should always ask ourselves the following questions:
- Which individuals or groups have greater access to institutions – media or otherwise – to share their experience?
- In the case of conflict, are opposing forces equally powerful (eg in terms of financial resources, alliances, etc)?
- Who is marginalised, and what is the impact of not platforming that voice?
In today’s media environment, we are flooded with information. This means the responsibility each of us must take within our sphere of influence has increased proportionally. To act as responsible members of our community, we must question which voices we’re highlighting.

BY Aubrey Blanche
Aubrey Blanche is a responsible governance executive with 15 years of impact. An expert in issues of workplace fairness and the ethics of artificial intelligence, her experience spans HR, ESG, communications, and go-to-market strategy. She seeks to question and reimagine the systems that surround us to ensure that all can build a better world. A regular speaker and writer on issues of responsible business, finance, and technology, Blanche has appeared on stages and in media outlets all over the world. As Director of Ethical Advisory & Strategic Partnerships, she leads our engagements with organisational partners looking to bring ethics to the centre of their operations.
Big Thinker: Thomas Hobbes

Thomas Hobbes (1588-1679) was an English philosopher, best known for his explanation of the role of government as a guarantor of security, an account that has had an enduring influence on political philosophy.
Living through the displacement of the English Civil War (1642-1651), Hobbes grappled with the question of how societies could keep peace and ensure stability amid conflict and self-interest. The period was marked by social upheaval: the collapse of royal authority, clashes between Parliament and the monarchy, and widespread insecurity. This led him to think about the manifestations of power, the human condition and the role of government.
His approach to understanding human behaviour was methodical and scientific, being deeply influenced by the scientific revolution of the time. Hobbes believed that human societies, like physical systems, could be understood through cause and effect and that understandings of order and stability were derived from the predictability of human behaviour and power structures.
The state of nature
It was against this historical and intellectual backdrop that Hobbes produced his most famous work, Leviathan: The Matter, Forme and Power of a Commonwealth Ecclesiasticall and Civil (1651). The masterwork garnered him fame for creating what would later become known as social contract theory: a framework that explains and justifies the exchange free, equal and rational citizens make in surrendering certain freedoms in return for collective order and protection. The same contract also serves to legitimise governments and their use of power and authority over citizens.
In Leviathan and his other work, Hobbes disagrees with Aristotle’s view that humans are naturally suited to life as citizens within a state. Hobbes instead argues that humans are not equipped to be rational citizens: we are easily swayed, often short-sighted and highly competitive. He believes these characteristics make humans more predisposed to violence and war than to political order, with no natural self-restraint. In this state of nature, without government and order, Hobbes says the life of man is “solitary, poor, nasty, brutish and short”, largely due to insecurity and conflict. Everyone is free and equal, with no rules or restrictions on their actions; coupled with a self-interested nature and limited resources, life in this state is a constant struggle.
The social contract
To escape this constant struggle, Hobbes argues that people, through reason, collectively agree to create a social contract. Political order is formed only when human beings voluntarily give up some of their rights and freedoms in exchange for order and security from a common authority: the leviathan (a ruler).
Hobbes uses the Leviathan as a metaphor for a powerful ruler or government that embodies the collective will of the people, possessing absolute authority to maintain peace and prevent society from descending back into chaos. Hobbes’s conception of the social contract and the role of the sovereign, or leviathan, rests on these key characteristics:
- The sovereign or leviathan’s power must be absolute – only one authority, as divided power invites factionalism.
- The contract is driven by the purpose of security.
- Individuals cannot revoke the contract once it’s been made, as it would risk bringing back chaos.
- The contract is made among the people; the sovereign is not a party to the agreement. However, if the sovereign fails to maintain peace and security, the contract loses legitimacy and people return to the state of nature.
Hobbes’s framework for social contract theory was adopted by John Locke and Jean-Jacques Rousseau, though they differed from Hobbes in offering a more optimistic view of human behaviour and the role of government. Critical responses tended to focus on the lack of accountability measures on a leviathan holding expansive, absolute power. Locke, for example, took a liberal view involving limited government, arguing that the sovereign was mutually obligated within the social contract, rather than subjects being expected to obey unconditionally to avoid the collapse of civil society.
Hobbes’s ideas continue to shape how government and authority are understood. While very few modern states reflect his vision of an absolute sovereign, the core principles of social contract theory remain central to political thought: citizens consent to be governed in exchange for protection and the rule of law. In modern liberal democracies, however, the power of governments is placed under greater scrutiny through constitutions, elections and other checks and balances. Hobbes’s theory provides the foundation for understanding why societies form governments, even as modern democracies reinterpret his ideas to prioritise liberty, representation, and accountability alongside security and order.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.

Should we be afraid of consensus? Pluribus and the horrors of mainstream happiness
Opinion + Analysis, Society + Culture
BY Joseph Earp 12 NOV 2025
Partway through Dostoyevsky’s The Brothers Karamazov, the author neatly summarises one of the more troubling questions that undercut civilisation: is it ethical for widespread happiness to come at the expense of the discontent of select individuals? Or, to put it simply: do the needs of the many outweigh the needs of the few?
“Let’s assume that you were called upon to build the edifice of human destiny so that men would finally be happy”, the troubled Ivan Karamazov says. “If you knew that, in order to attain this, you would have to torture just one single creature, would you agree to do it?”
That one question has been probed and explored time and time again in the years since Karamazov was published, most notably in Ursula K. Le Guin’s sci-fi story The Ones Who Walk Away from Omelas, in which the happiness of a flourishing city depends entirely on the torture of a young girl. But it has perhaps never been explored as thoroughly – not to mention as amusingly – as in Pluribus, the new sci-fi show from Breaking Bad creator Vince Gilligan.
In Gilligan’s re-telling, near-global happiness is an external force: a virus of sorts, which turns the world’s population into a hive mind of mutually contented drones. The one miserable, unlucky individual whose perpetual unhappiness sets her apart from the mainstream is Carol Sturka (Rhea Seehorn), who appears to be immune to the virus. Or rather, temporarily immune, given that much of the show’s early episodes follow the rising, blackly comic threat of the hive mind searching for ways to make Carol one of them.
“Once you see how wonderful it is….” a member of the hive mind tells Carol early on, speaking to her directly from her television set. The hive promises happiness, peace and the end of all human conflict. What they also represent: the tyranny of mainstream thought, an enforced and established consensus where majority rules, no matter how discontented outliers might be. And where happiness itself is dangerously considered the main goal of all human existence.
The problem with happiness
One of Gilligan’s masterstrokes is to set Carol outside the mainstream even before the virus spreads. When we meet her, she’s a beloved romantasy author who can’t stand either the mindless content she churns out or the horde of rabid fans who adore her. In the first episode, her business/life partner Helen (Miriam Shor) mocks Carol’s dislike of adulation, teasing that she’s perpetually miserable, illogically rejecting the good time that’s being offered to her.
But when the mainstream offers mere happiness, we should reject it, as both Gilligan and Dostoyevsky seem to suggest. A life of contentment and fitting in with the crowd isn’t an ethical good in and of itself. As the rise of increasingly deranged social media slop proves – the kind of quick dopamine fix provided by TikTok – there’s so much more to life than merely being entertained.
The person who sits in their room all day watching Instagram reels could conceivably be very happy, but wouldn’t we all suggest that they might want more? Should they in fact pursue something greater and more important than simply having a good time?
This is the threat posed by Pluribus’ hive. In an ideal world, happiness should be a kind of side effect of good, not something to be pursued blindly for its own sake. As in, happiness should be what happens when we achieve virtuous behaviours – when we care for others, or pursue a flourishing life. And it certainly should not be enforced by the mainstream at the expense of individual freedoms and wants.
Even as Pluribus’ plot progresses and it is revealed that Carol’s unhappiness is a very literal threat to the hive – when she lashes out at them, members of the hive abruptly die – the show suggests that the needs of those who sit in the mainstream should never be held above the deep unhappiness of those who must also operate within the world. Not only because enforcing the desires of the collective onto the individual is a harm in itself, but because sometimes – maybe even often – the desires of the collective aren’t particularly desirable.
In praise of conflict
Pluribus slowly encourages us to be suspicious of the idea that the hive are actually as happy as they claim to be. Do we really think that happiness is only a flood of dopamine throughout our body? If so, then as the philosopher Robert Nozick once asked, wouldn’t we therefore choose to step into a machine that did nothing but probe our dopamine receptors for all eternity, living an artificial life where we submit to being little more than switches that can be flipped in order to produce joy?
It doesn’t seem like many of us consider happiness only that. Happiness is what happens when we go to the other side of hardship; when we set ourselves goals and then achieve them. Ultimately, it’s a response to conflict and unhappiness, not the absence of conflict and unhappiness altogether.
Enforced mainstream happiness isn’t just ethically harmful for those who have it enforced upon them; it might be harmful to those who actively want it.
In the age of AI, these issues have never been more timely. In fact, Gilligan himself seems aware of this: each episode of Pluribus ends with a message, hidden in the credits, that reads, “this show was made by humans”.
We live in an era where tech companies constantly promise us that AI will bring ease, contentment, and the ability to fit in with our co-workers, friends and family – with the collective. Even if AI can do our jobs for us, or write tricky text messages to our loved ones, decreasing our discomfort, why would we even want that?
Now more than ever, we should follow Carol’s lead and become perpetual sourpusses. As it turns out, being a grump might be one of the most ethical things of all.

BY Joseph Earp
Joseph Earp is a poet, journalist and philosophy student. He is currently undertaking his PhD at the University of Sydney, studying the work of David Hume.

Standing up against discrimination
Opinion + Analysis, Politics + Human Rights, Relationships
BY Mariam Sawan 11 NOV 2025
A fantastic opportunity arrived when Courage to Care NSW and The Ethics Centre launched a pivotal program for young people across our state, proudly funded by Multicultural NSW’s Compact Grant program. The focus? One of the most pressing struggles Australians continue to face or witness: discrimination.
Over three days, Year 9 and 10 students from across the Illawarra gathered at the Wollongong Youth Centre to take part in the Common Ground program. Together, we learnt not only to recognise discrimination in its many forms but also to challenge and overcome it, helping to build a community where everyone feels safe and accepted, regardless of culture, background, or ability.
Before participating in the program, I understood discrimination only as something “bad” – a simple wrong we were all taught to avoid, without ever questioning why it happens or what sustains it.
Growing up with a multicultural background, I have faced many stereotypical insults, but I never really saw them as a big deal. I knew discrimination existed, but I only saw it at face value – people saying hurtful things to make themselves feel better. Through a series of activities, case studies and creative programs, Common Ground showed me just how complex it is. While I knew discrimination could involve gender, religion, disability, and age, I had assumed it only appeared in big, obvious ways like bullying. I had not considered the smaller, everyday forms such as jokes, exclusion, or subtle assumptions. Learning how hidden these forms of discrimination can be shifted my mindset completely. Now I am more empathetic and aware, thinking about the meaning behind a joke rather than dismissing it, and I have the confidence and practical skills needed to fight what can only be described as a virus of hatred.
What stood out most was how effortlessly the organisers drew us into a topic we’d all heard about countless times before. But this time, it felt different. We were guided by “value cards” that highlighted the program’s principles, especially the three C’s: curiosity, carefulness, and courage. These sparked deep discussions about what those values meant to each of us, encouraging us to explore the issue of discrimination on a personal level.
I had previously thought curiosity was only about seeking knowledge, but the program taught me it is about listening and understanding people’s experiences without judgment. Carefulness is not just about thinking before speaking; it is also about considering how my actions affect others. Courage is having the confidence to try something new and to challenge discrimination respectfully. Hearing how other students approached these values showed me there is more than one way to make a difference and that small actions can matter just as much as big ones. My perspective changed completely.
The activities Common Ground provided were equally eye-opening. An online game showed how misinformation spreads like wildfire on social media, shaping perceptions and fuelling prejudice. Another exercise asked us to “buy privileges” for a child, using only a limited budget. The unfairness of unequal resources became painfully clear as some children could afford opportunities while others missed out. In our final group project, we created videos exploring real experiences of discrimination and brainstorming ways to combat it.
The third and final day brought together students from across the Wollongong LGA for a mix of learning, competition, and celebration. Two teams tied for the People’s Choice Award, with powerful presentations on racial and religious discrimination. The overall winners, however, tackled racial discrimination in a striking way. Their victory came with $1,000 in prize money, which they generously donated to Illawarra Multicultural Services, an organisation that supports migrants and refugees with housing, jobs, and community support.
By the end of the program, the message was clear. We, as young people, had not only learned how to recognise discrimination – even in its subtlest forms – but also how to respond with confidence, safety, and integrity.
The Common Ground program didn’t just teach us about injustice; it empowered us to become catalysts for change, fostering communities built on diversity, inclusiveness, and support.
Common Ground is a workshop program for Year 9-10 students that empowers them to stand up against discrimination. We’re taking expressions of interest for schools in South West Sydney to join this free program in Term 2, 2026. Register your school’s interest today by contacting us at learn@ethics.org.au.

BY Mariam Sawan
Mariam Sawan is a high school student who enjoys learning about how people can make a positive impact in their community and understanding people’s points of view. After participating in the Common Ground program on discrimination, she was inspired to write the article “Standing Up Against Discrimination”. Through her writing, she hopes to encourage others to treat everyone with fairness and respect and to feel empowered to speak up against injustice.

Meet Aubrey Blanche: Shaping the future of responsible leadership
Opinion + Analysis, Business + Leadership, Science + Technology
BY The Ethics Centre 4 NOV 2025
We’re thrilled to introduce Aubrey Blanche, our new Director of Ethical Advisory & Strategic Partnerships, who will lead our engagements with organisational partners looking to operate with the highest standards of ethical governance and leadership.
Aubrey is a responsible governance executive with 15 years of impact. An expert in issues of workplace fairness and the ethics of artificial intelligence, her experience spans HR, ESG, communications, and go-to-market strategy. She seeks to question and reimagine the systems that surround us to ensure that all can build a better world. A regular speaker and writer on issues of ethical business, finance, and technology, she has appeared on stages and in media outlets all over the world.
To better understand the work she’ll be doing with The Ethics Centre, we sat down with Aubrey to discuss her views on AI, corporate responsibility, and sustainability.
We’ve seen the proliferation of AI impact the way in which we work. What does responsible AI use look like to you – for both individuals and organisations?
I think that the first step to responsibility in AI is questioning whether we use it at all! While I believe it is and will be a transformative technology, there are major downsides I don’t think we talk about enough. We know that it’s not quite as effective as many people running frontier AI labs aim to make us believe, and it uses an incredible amount of natural resources for what can sometimes be mediocre returns.
Next, I think that to really achieve responsibility we need partnerships between the public and private sector. I think that we need to ensure that we’re applying existing regulation to this technology, whether that’s copyright law in the case of training, consumer protection in the case of chatbots interacting with children, or criminal prosecution regarding deepfake pornography. We also need business leaders to take ethics seriously, and to build safeguards into every stage from design to deployment. We need enterprises to refuse to buy from vendors that can’t show their investments in ensuring their products are safe.
And last, we need civil society to actively participate in incentivising those actors to behave in ways that are of benefit to all of society (not just shareholders or wealthy donors). That means voting for politicians that support policies that support collective wellbeing, boycotting companies complicit in harms, and having conversations within their communities about how these technologies can be used safely.
In a time where public trust is low in businesses, how can they operate fairly and responsibly?
I think the best way that businesses can build responsibility is to be more specific. I think people are tired of hearing “We’re committed to…”. There’s just been too much greenwashing, too much ethics washing, and too many “commitments” to diversity that haven’t been backed up by real investment or progress. The way through that is to define the specific objectives you have in relation to responsibility topics, publish your specific goals, and regularly report on your progress – even if it’s modest.
And most importantly, do this even when trust is low. In a time of disillusionment, you’ll need to have the moral courage to do the right thing even when there is less short-term “credit” for it.
How can we incentivise corporations to take responsible action on environmental issues?
I think that regulation can be a powerful motivator. I’m really excited that the Australian Accounting Standards Board is bringing new requirements into force that, at least for large companies, will force them to proactively manage climate risks and their impacts. While I don’t think it’s the whole answer, a regulatory “push” can be what’s needed for executives to see that actively thinking about climate in the context of their operations can be broadly beneficial to operations.
What are you most excited about sinking your teeth into at The Ethics Centre?
There’s so much to be excited about! But something that I’ve found wildly inspiring is working with our Young Ambassadors – early career professionals in banking and financial services who are working with us to develop their ethical leadership skills. While I have enjoyed working with our members – and have spent the last 15 years working with leaders in various areas of corporate responsibility – there’s nothing quite like the optimism you get when learning from people who care so much and who show us what future is possible.
Lastly – the big one, what does ethics mean to you?
A former boss of mine once told me that leadership is not about making the right choice when you have one: it’s about making the best choice you can when you have terrible ones, and living with that choice. I think in many cases that’s what ethics is. It gives us a framework not for doing the right thing when the answer is clear, but for aligning ourselves as closely as we can with our values and the greater good when our options are messy, complicated, or confusing.
Personally, I’ve spent a great deal of time thinking about my values, and if I were forced to distill them down to two, I would wholeheartedly choose justice and compassion. I have found that when I consider choices through those frames, I both feel more like myself and feel that I’ve made choices that are a net good in the world. And I’ve been lucky enough to spend my career in roles where I got to live those values – that’s a privilege I don’t take for granted, and one of the reasons I’m so thrilled to be in this new role with The Ethics Centre.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.

Ask an ethicist: How much should politics influence my dating decisions?
Opinion + Analysis, Relationships
BY Daniel Finlay 3 NOV 2025
I’ve noticed on dating apps now that many people are displaying their political views on their profile. Is this something I should take into account when talking to someone? How picky should I be about politics when it comes to dating?
Dating is hard even under the best circumstances. Finding the balance between baring your soul and maintaining a semblance of mysterious allure already feels like a circus feat – before we add in the complexity of having several of these conversations at once, on our phones, with people we’ve never met before.
If you do manage to wade through that surface mire of app dating, you’ll still be left with some of the harder decisions. A common focus is how we present ourselves. And that’s important – we understandably want to feel like we’re showing an accurate snippet of our identity.
But something to consider is the way that we often consciously or unconsciously judge, categorise, make assumptions about or dismiss people based on small aspects of their presentations of themselves.
Political ideologies, as indicated on dating profiles, usually reflect at least some of our deeply held beliefs. It’s tempting, then, at least for those of us who feel strongly, to use these little markers as a litmus test for our attention. In the age of online dating, we don’t want to waste our time vaguely flirting with someone who actually hates everything we stand for.
But…
Some open-mindedness, perspective-taking and empathy go a long way to breaking down the social barriers that encourage us into echo chambers.
I noticed this firsthand recently when talking to my friends about dating. I told them I had changed my profile to more explicitly reflect some of my political values. We mostly agreed that while it might drive some people away, it was likely to be people I’d be uninterested in regardless.
Then I told them that I often find myself having an adverse reaction to profiles that indicate someone is “Apolitical” or “Not Political” because I see this as apathetic and conflicting with my own strongly held beliefs. Instead of agreeing, they responded to this with variations of “But I’m not political!”.
This didn’t garner the response that I expected because I didn’t realise the way that my close friends thought about and categorised themselves. It turns out they too identified with those labels, not because they don’t have any political opinions, but because they don’t regard them as political. I hadn’t ever asked for their self-reflection before, and so I assumed that I knew how they thought about themselves.
Reframing
For lots of people, “politics” feels far away and unreachable: in rooms with suits, on TV or across the world, seats and benches in inaccessible buildings, smiles and virtue signals with little positive tangible effect.
But this doesn’t mean that someone who actively engages in the broader aspects of politics doesn’t have anything in common with the self-described “apolitical”. Neither does it mean that people on opposing sides of the spectrum have nothing in common.
Recognising this takes reflection on our own ability to remain open-minded, intellectually charitable, and curious.
We have to ask ourselves: is it short-sighted to dismiss people based on a single word? Are we indulging in an echo chamber by looking for people who present exactly like us?
The answer to these questions is both simpler and more complicated than it might appear: it’s short-sighted and completely understandable.
As in my situation, you’re incredibly likely to be surrounded by some people who are indifferent to what they consider politics. This doesn’t necessarily make them bad people. What is foundationally important to these relationships are their underlying values and principles. Someone might not be able to identify with a particular movement or phrase or title, but do they care about the same things you do? Do they value the same things you do, for the same reasons?
Granted, dating apps don’t lend themselves to the conversation needed to get to these underlying questions. Organic friendships or even romantic relationships allow more time and space for getting to know what someone values and the principles they stick to in a wider context. Conversely, modern dating often encourages a sense of speed and abundance at odds with this sense of intellectual charity – which brings us to the more complicated answer.
Be discerning, not cynical
As much as we love our friends, we’ll inevitably hold a slightly higher bar for intimate relationships. Those extra levels of intimacy – the extra reliance, co-habitation, psychological, physical and financial vulnerability, etc – all apply a pressure that means a solid foundation of mutual values and principles is even more important for the relationship to be long-lasting (or even to get on the first date).
Political labels are one of many ways we can signal and filter all of these possibilities, and while they’re useful, taking a page out of Aristotle’s book can help prevent us from falling into an echo chamber.
Aristotle spoke of virtues, and specifically about finding the golden mean between two extremes. In the case of dating, we’re trying to avoid the extremes of cynicism and naivety. Being cynical means lacking open-mindedness and intellectual charity – assuming the worst of someone based on little evidence. This can happen a lot when we rely on small markers on profiles to tell us the whole story of a person: we see an identifier that’s usually at odds with our own, and we dismiss them.
Naivety, conversely, is an overabundance of optimism and a lack of critical thinking, often resulting in a complete misunderstanding of someone’s character and motivations, and leading to disappointment or apparent betrayal.
Being discerning, a comfortable middle ground, is the virtuous person’s dating goal. We want to be able to make quick and accurate judgements based on limited information. In doing so, the aim is to filter out likely harm while remaining open to meeting people a bit different from ourselves.

BY Daniel Finlay
Daniel is a philosopher, writer and editor. He works at The Ethics Centre as Youth Engagement Coordinator, supporting and developing the futures of young Australians through exposure to ethics.

The ‘good ones’ aren’t always kind
Opinion + Analysis, Relationships, Society + Culture
BY Isha Desai 27 OCT 2025
I’m sitting on a low brick wall at a party next to my date. Twenty-something boys and girls mill around, drink in hand, most of them in couples. One man clocks the boy sitting next to me, approaching us with a wide grin: “he’s a good one”.
The man is talking to me. Minutes later a girl rushes over, taking my hand in hers with a squeeze: “he’s one of the nice ones”. It would take another four months of dating before I realised that being a ‘good guy’ is very different from being a kind one.
The terms ‘good guy’ or ‘nice guy’ have been in my consciousness for two decades: a blanket seal of approval given to people (typically men) who display surface-level qualities of respect, decency and likeability.
In The Will to Change: Men, Masculinity, and Love, bell hooks characterises this as a mask. The ‘good guy’ mould can disguise participation in oppressive patriarchal systems. One of the largest ethical implications of the term is that it paints men as a binary: they are either a ‘good guy’ or a ‘bad guy’. It creates cognitive dissonance when a ‘good guy’ is complicit in the exact structures he claims to reject. The result is a lack of accountability, and a sense of confusion and feeling attacked when these men are presented with information that misaligns with their internalised and reaffirmed sense of self.
I always wondered, why is simply being ‘good’ heralded as praise for men? As if the expectation is that they are bad, and when they surprise us with respect they jump to a pedestal as “one of the good ones”.
In October 2024, Graham Norton was joined by Saoirse Ronan, Paul Mescal, Eddie Redmayne and Denzel Washington on his panel talk show. When discussing the concept of using a phone as a self-defence tool, Paul Mescal quipped, “Who’s actually going to think about that? If someone attacked me, I’m not going to go – phone.” Mescal humorously reached into his back pocket as the audience burst into laughter. The men added various comments until Saoirse Ronan cut through their voices, “That’s what girls have to think about all the time. Am I right ladies?” The audience quickly changed tone, cheering her for speaking up whilst the men nodded quietly.
This twenty-second exchange went viral. Publications from Vogue to The Guardian and the BBC praised Ronan’s truthful outspokenness. However, many drew attention to the men on the show, in particular Paul Mescal. Often characterised as a ‘soft boi’, Mescal is known for his sensitivity, emotional depth and embrace of feminine traits. He later agreed that Saoirse Ronan was “spot on” for calling out the realities of women’s safety. But the moment served as an important reminder that the societally termed ‘good bloke’ is not exempt from bad moments.
Australian philosopher Kate Manne shows us the worst consequences of the ‘golden boy’ trope. In Down Girl: The Logic of Misogyny, she introduces the term ‘himpathy’ to describe excessive sympathy towards male perpetrators of sexual violence. She describes the reluctance to believe women who testify against established ‘golden boys’, citing the 2015 People v Turner case as her primary study. In 2015, Chanel Miller (then known as Emily Doe) accused Stanford freshman Brock Turner of five counts of felony sexual assault. In this case, testimony from a female friend that Turner was “caring, sweet and respectful to her” corroborated the judge’s assessment of his character.
Manne reveals himpathy’s dangerous ethical implication: “Good guys aren’t rapists. Brock Turner is a good guy. Therefore, Brock Turner is not a rapist”. The case culminated in six months of jail time and three years of probation; however, Turner was released from jail after three months on good behaviour.
In the manosphere, ‘Nice Guy Syndrome’ has also been used to describe people who are nice with the aim of obtaining or maintaining a sexual relationship with another person. In this case, being ‘good’ is currency for an ulterior agenda: the person exhibiting ‘nice guy’ qualities builds a sense of entitlement, believing they are owed a romantic or sexual relationship. When the other person rejects them, the ‘nice guy’ can become disdainful or irrationally angry because they were not given what they are ‘owed’. While the ‘good guy’ mould and ‘nice guy syndrome’ are inextricably linked, many people equate being good with being kind, when they are sometimes two very different things.
When engaging with an average well-intentioned man, the ethical implications are often nuanced. Dr Glenn R. Schiraldi outlines childhood adversity including neglect, abandonment or abuse as root causes of the insecurity that leads to being passive and overly dependent on others/women for approval. This can create the ‘good guy’ who would rather maintain a likeable façade than engage in conflict.
I’ve often sat with friends after hearing stories where a ‘good guy’ didn’t have the emotional maturity to initiate a hard conversation for fear of appearing unlikeable. And we always came back to the same questions. Having good intentions should not be disincentivised, but where does being good fall and being kind succeed? What does it mean to be kind?
The first time the difference between being good and being kind was articulated to me was on The Imperfects podcast, where psychologist Dr Emily Musgrove framed it as choosing truth versus harmony. When we want to do the ‘good’ thing, we choose the option that will keep the relationship in harmony. But in the long term, we don’t achieve harmony by continually sacrificing hard truths for the sake of a harmonious relationship. Sometimes, delivering a hard truth is kinder than maintaining short-term harmony.
I was in my early twenties when I learnt that being kind meant you might have to let someone down. I was in my mid-twenties when I realised that a man being ‘good’ to me didn’t mean he was being ‘kind’ to me. This principle applies to everyone, but it prevails among men who care more about having a ‘good guy’ reputation than about leading with integrity.
The fizziness of my cider travels straight to my brain as my legs dangle over the concrete pavement. I giggle, laugh and tipsily dance until the early hours of the morning, meeting his friends for the first time. What no one had told me was how he would keep important secrets from me for fear of hurting my feelings, which would only hurt me more. How he would withdraw when he wasn’t happy with me and how I would respond in frustration, confused and demanding answers. How he would carry antiquated views that would never come to full light because after all, he was a good guy.
We need to eliminate the ‘good guy’ trope as a seal of approval. We need to end the binary that people are either good or bad and start operating on the foundation that everyone is a person with the potential to be good and bad in moments. Instead of being ‘nice’, we should strive to be authentic, truthful and kind, even in the moments where it doesn’t make us look good.
The ‘good ones’ aren’t always kind by Isha Desai is the winning essay in our Young Writers’ 2025 Competition (18-30 age category). Find out more about the competition here.

BY Isha Desai
Isha Desai is a writer, researcher and analyst who graduated from the University of Sydney in Politics and International Relations. She was the 2024 Indo Pacific Fellow for Young Australians in International Affairs (YAIA) and currently works in social impact policy at Penguin Random House ANZ.
Love and the machine

When we think about love, we picture something inherently human. We envision a process that’s messy, vulnerable, and deeply rooted in our connection with others, fuelled by an insatiable desire to be understood and cared for. Yet today, love is being reshaped by technology in ways we never imagined before.
With the rise of apps such as Blush, Replika, and Character.AI, people are forming personal relationships with artificial intelligence. For some, this may sound absurd, even dystopian. But for others, it has become a source of comfort and intimacy.
What strikes me is how such behaviour is often treated as a fun novelty or dismissed as a symptom of loneliness, but this outlook can miss the deeper picture.
Many misread forming attachments with AI as just another harmless, emerging trend, sweeping its profound ethical dimensions under the rug. In reality, this phenomenon forces us to rethink what love is and what humans require from relationships to flourish.
It is not difficult to see the appeal. AI companions offer endless patience, unconditional affirmation and availability at any hour, standards that human relationships struggle to live up to. Meanwhile, the World Health Organisation has declared loneliness a “global public health concern”, with 1 in 6 people affected worldwide. Mark Zuckerberg, the founder of Meta, has framed AI therapy and companionship as remedies for our society’s growing disconnection, and in recent surveys 25% of young adults believe that AI partners could potentially replace real-life romantic relationships.
One of the main ethical concerns is the commodification of connection and intimacy. Unlike human love, built from intrinsically valuable interactions, AI relationships are increasingly shaped by what sociologist George Ritzer calls McDonaldization: the pursuit of efficiency, calculability, predictability, and control. These apps are not designed to nurture a user’s social skills, as many believe, but to keep consumers emotionally hooked.
A dangerous slippery slope emerges as intimacy becomes transactional. Chatbot apps often operate on subscription models where users can “unlock” more customisable or sexual features by paying a fee. By monetising upgrades for further affection, companies profit from users’ loneliness and vulnerability. What appears as love is in fact a business model, one that ultimately benefits large corporations rather than their everyday consumers.
In this sense, we notice one of humanity’s most cherished experiences being corporatised into a carefully packaged product.
Beyond commodification lies the insidious risk of emotional dependency and withdrawal from real-life interactions. Findings from OpenAI and the MIT Media Lab revealed that heavy users of ChatGPT, especially those engaging in emotionally intense conversations, tend to experience increased loneliness long-term and fewer offline social relationships. Dr Andrew Rogoyski of the Surrey Institute for People-Centred AI suggested we are “poking around with our basic emotional wiring with no idea of the long-term consequences.”
A Cornell University study also found that usage of voice-based chatbots initially mitigated loneliness. However, these benefits were reduced significantly with high usage rates, which correlated with higher isolation, increased emotional dependency, and reduced in-person engagement. While AI might temporarily cushion feelings of seclusion, a lasting overreliance seems to exacerbate it.
The misunderstanding deepens further when AI relationships are portrayed as private and inconsequential. What’s wrong with someone choosing to find comfort in an AI partner if it harms no one? The trouble is that this framing treats love as a personal preference rather than an ongoing relational practice that shapes our character and community.
If we refer to the principles of virtue ethics, Aristotle’s idea of eudaimonia (a flourishing, well-lived life) relies on developing virtues like empathy, patience, and forgiveness. Human connections, with their inevitable misunderstandings, disappointments and need for forgiveness, promote that growth. A chatbot like Blush builds its responses on a large language model designed to mirror a user’s inputs and affirm them endlessly. It may always say “the right thing,” but over time, this inhibits our character development.
It is still important to acknowledge the potential benefits of AI chatbots. For individuals who, for physical or psychological reasons, are not in a position to form real-world relationships, chatbots can provide an accessible stepping-stone to an emotional outlet. There’s no need to fear or avoid these platforms entirely, but we must reflect consciously upon their deeper ethical implications. Chatbots can supplement our relationships and offer support, but they should never be mistaken for a replacement for genuine human love.
Decades from now, it might be common to ask whether your neighbour’s partner is human or AI. By then, the foundations of human connection will have shifted in irreversible ways. If love is indeed at the heart of what makes us human, we should at least recognise that although programmed chatbots can say “I love you,” only human love teaches us what it truly means.
Love and the machine by Ariel Bai is the winning essay in our Young Writers’ 2025 Competition (13-17 age category).

BY Ariel Bai
Ariel is a Year 10 student at Ravenswood. Passionate about understanding people and the world around her, she enjoys exploring contemporary social issues through her writing. Her interest in global trends and human experiences prompted her to craft this piece.
Teachers, moral injury and a cry for fierce compassion

Teachers, moral injury and a cry for fierce compassion
Opinion + AnalysisHealth + WellbeingBusiness + Leadership
BY Lee-Anne Courtney 20 OCT 2025
I first came across the term moral injury during a work break, scrolling through the research page of Bond University, my casual employer at the time.
It sounded vaguely religious and a bit dramatic, and it sparked my curiosity. The article explained that moral injury is a term given to a form of psychological trauma experienced when someone is exposed to events that violate or transgress their deeply held beliefs of right and wrong, leading to biopsychosocial suffering.
The term moral injury was coined in relation to military service by military psychiatrist Jonathan Shay (2014), who realised that therapy designed to treat post-traumatic stress disorder (PTSD) wasn’t effective for all service members. Shay used the term to describe a deeper wound carried by his patients, one that came not from fear for their physical safety, but from violating their own morals or having them violated by leaders. The suffering caused by exposure to violence was compounded by damage to their identity and the feeling that they had lost their worth or goodness in the eyes of society and of themselves. Insight into this hidden cost of trauma was an epiphany for me.
I’m a secondary teacher by profession, but I’ve not been employed in the classroom for over 10 years. In fact, I haven’t been employed doing anything other than the occasional casual project since that time. Why? Was it burnout, stress, compassion fatigue, lack of resilience, or the ever-handy catch-all, poor mental health? No matter which explanation I considered, the result was the same: a growing sense of personal failure and creeping doubt about whether I’m cut out for any kind of paid work. Being introduced to the term moral injury was like turning a kaleidoscope; all the same colours tumbled into a dazzlingly different pattern. It gave me a fresh lens through which to view my painful experience of leaving the teaching profession. I needed to know more about moral injury, not only for myself, but also for my colleagues, the students and the profession that I love.
If just a glimpse of moral injury gave me hope that my distress in my role as a teacher wasn’t simply personal failure, imagine the impact a deep, shared understanding could have on our education system and wider community.
Within the month, I had written and submitted a research proposal to empirically investigate the impact of moral injury on teacher wellbeing in Australia.
I discovered that moral injury, while well studied in various workforce populations, had received only limited, though significant, empirical attention in the teaching profession. My research clearly showed that moral injury has a serious negative impact on teachers’ wellbeing and professional function. Crucially, it offered insight into why many teachers were experiencing intense psychological distress and a growing urge to leave the profession, even as poor working conditions and eroding public respect became the subject of policy and practice reform.
What makes an event or situation potentially morally injurious is that it transgresses a deeply held value or belief about what is right or wrong. Research exploring moral injury in teaching practice suggests that teachers hold shared values and beliefs about what good teaching means. Education researchers such as Thomas Albright, Lisa Gonzalves, Ellis Reid and Meira Levinson affirm that teachers aim to guide all students in gaining knowledge and skills while shaping their thinking and behaviour with an awareness of right and wrong, promoting social justice and challenging injustice. Researcher Yibing Quek highlights the development of respectful, critical thinking as a core educational goal that supports students in navigating life’s challenges. Scholars such as Erin Sugrue, Rachel Briggs, and Callid Keefe-Perry emphasise that the goals of education are only achievable through relationships rooted in deep care, where teachers are responsive to students’ complex learning and wellbeing needs and attuned to ethical dilemmas requiring both compassion and justice. Though ethical dilemmas are inherent in teaching, researcher Dana Cohen-Lissman argues that many are generated by externally imposed policies, potentially leading to moral injury among teachers.
Studies assert that teachers experience moral injury when systemic barriers and practice arrangements keep them from aligning their actions with their professional identity, educational goals, and core teaching values. Teachers exposed to the shortcomings of education and other systems often feel they are complicit in the harm inflicted on students. Researchers have consistently identified neoliberalism, social and educational inequities, racism, and student trauma as key factors contributing to experiences that lead to moral injury for teachers. These systemic problems place crippling pressure on individual teachers, both in the classroom and in leadership, to achieve the stated goals of education, despite policies that provide insufficient and unevenly distributed resources to do so.
Furthermore, the high-stakes accountability demanded of individual employees fails to recognise the collective, community-oriented, interdependent nature of the work of teaching. The problem is that the gap between what is expected of teachers and what they can do is often blamed on them, and over time, they start to believe it of themselves. Naturally, these morally injurious experiences cause emotional distress, hinder job performance, and gradually erode teachers’ wellbeing and job satisfaction.
Experiencing moral distress, witnessing harm to children, feeling betrayed by policymakers and the public, powerless to make meaningful change, and working without the rewards of service, teachers face difficult choices. They can respond to moral injury by quietly resisting, by speaking out to demand understanding and systemic change, or by taking what they believe to be the only ethical option: leaving the profession. But what if, instead of facing these difficult choices, teachers resorted to reductive moral reasoning, simplifying the complex ethical dilemmas piling up around them or ignoring them altogether, so as not to be disturbed by them? Numbing ethical sensitivity just to keep a job creates even more problems.
Moral injury goes beyond personal failure or inadequacy; it acknowledges the broader systemic conditions that place teachers in situations where their ethical commitments increasingly clash with the neoliberal forces currently shaping education. Moral injury offers a language of lament, an explanation for the anguish experienced in the practice of teaching. An understanding of moral injury in the education sector invites society, collectively, to offer teachers fierce compassion and moral repair by restructuring the social systems that create these conditions. Adding moral injury to the discourse around teacher shortages may even help teachers offer fierce compassion to themselves. It did for me.

BY Lee-Anne Courtney
Lee-Anne Courtney is a secondary teacher with over 30 years of experience in the education sector. She is conducting PhD research with a team at Bond University aimed at uncovering the impact of moral injury on teacher wellbeing in Australia.