Ask an ethicist: Should I use AI for work?

My workplace is starting to implement AI usage in a lot of ways. I’ve heard so many mixed messages about how good or bad it is. I don’t know whether I should use it, or to what extent. What should I do?

Artificial intelligence (AI) is quickly becoming unavoidable in our daily lives. Google something, and you’ll be met with an “AI overview” before you can read the first result. Open almost any social media platform and you’ll find an AI chatbot, or be prompted to use the platform’s proprietary AI to help you write your message or create an image.

Unsurprisingly, this ubiquity has rapidly extended to the workplace. So, what do you do if AI tools are becoming the norm but you’re not sure how you feel about it? Maybe you’re part of the 36% of Australians who aren’t sure if the benefits of AI outweigh the harms. Luckily, there are a few ethical frameworks to help guide your reasoning.

Outcomes

A lot of people care about what AI is going to do for them, or conversely how it will harm them or those they care about. Consequentialism is a framework that tells us to think about ethics in terms of outcomes – most often the outcomes of our actions, though there are many varieties of consequentialism.

Some tell us to care about the outcomes of rules we make, beliefs or attitudes we hold, habits we develop or preferences we have (or all of the above!). The common thread is the idea that we should base our ethics around trying to make good things happen.  

This might seem simple enough, but ethics is rarely simple.  

AI usage already has, and is likely to keep having, many competing consequences: short and long-term, direct and indirect.

Say your workplace is starting to use AI tools. Maybe they’re using AI for email and document summaries, to create images, or using ChatGPT like they would Google. Should you follow suit?

If you look at the direct consequences, you might decide yes. Plenty of AI tools give you an edge in the workplace or give businesses a leg up over others. Being able to analyse data more quickly, get assistance writing a document or generate images out of thin air has a pretty big impact on our quality of life at work. 

On the other hand, there are some potentially serious direct consequences of relying on AI too. Most public large language model (LLM) chatbots have had countless issues with hallucinations – the phenomenon where a model confidently produces false or inaccurate information. Given how anthropomorphised chatbots are, which lends them an even higher degree of our confidence and trust, these hallucinations can be very damaging to people on both a personal and business level.

Indirect consequences need to be considered too. The exponential increase in AI use, particularly LLM-based generative AI like ChatGPT, threatens to undo the work of climate change solutions by more than doubling our electricity needs, increasing our water footprint and greenhouse gas emissions, and putting unneeded pressure on the transition to renewable energy. This energy usage is predicted to double or triple again over the next few years.

How would you weigh up those consequences against the personal consequences for yourself or your work? 

Rights and responsibilities

A different way of looking at things – one that can often help us bridge the gap between competing sets of consequences – is deontology. This is an ethical framework that focuses on rights (ways we should be treated) and duties (ways we should treat others).

One of the major challenges that generative AI has brought to the fore is how to protect creative rights while still allowing the technology to develop at scale. AI isn’t capable of creating ‘new’ things in the way humans can, drawing on personal experience to shape their creations. Generative AI is ‘trained’ by giving the models access to trillions of data points – in this case, real people’s writing, artwork, music and more. OpenAI (creator of ChatGPT) has explicitly said that it would be impossible to create these tools without access to and use of copyrighted material.

In 2023, the Writers Guild of America went on a five-month strike to secure better pay and protections against the exploitation of their material in AI model training and the job replacement or pay decreases that could follow. In 2025, Anthropic settled for $1.5 billion in a lawsuit over its piracy of over 500,000 books used to train its AI model.

Creative rights present a fundamental challenge to the ethics of using generative AI, especially at work. The ability to create imagery for free or at a very low cost with AI means businesses now have the choice to sidestep hiring or commissioning real artists – an especially fraught decision point if the imagery is being used with a profit motive, as it is arguably being made with the labour of hundreds or thousands of uncompensated artists. 

What kind of person do you want to be?

Maybe you’re not in an office, though. Maybe your work is in a lab or field research, where AI tools are being used to do things like speed up the development of life-changing drugs or enable better climate change solutions.

Intuitively, these uses might feel more ethically salient, and a virtue ethics point of view can help make sense of that. Virtue ethics is about finding the valuable middle ground between extremes of character – the virtues that a good person, or the best version of yourself, would embody.

On the one hand, it’s easy to see how this framework would encourage use of AI that helps others. A strong sense of purpose, altruism, compassion, care, justice – these are all virtues that can be lived out by using AI to make life-changing developments in science and medicine for the benefit of society. 

On the other hand, generative AI puts another spanner in the works. There is a growing body of research on the negative effects of generative AI on our ability to think critically. Overreliance and overconfidence in AI chatbots can erode critical thinking, problem-solving and independent decision-making skills. With this in mind, virtue ethics could also lead us to be wary of the way we use particular kinds of AI, lest we become intellectually lazy or incompetent.

The devil in the detail

AI, in all its various capacities, is revolutionising the way we work and is clearly here to stay. Whether you opt in or not is hopefully still up to you in your workplace, but by using a few different ethical frameworks, you can prioritise your values and principles and decide whether AI usage – and what type – feels right for you and your purpose.

Whether you’re looking at the short and long-term impacts of frequent AI chatbot usage, the rights people have to their intellectual property, the good you can do with AI tools or the type of person you want to be, maintaining a level of critical reflection is integral to making your decision ethical.  


What happens when the progressive idea of cultural ‘safety’ turns on itself?

In mid-August, controversy enveloped the Bendigo Writers Festival. Just days before it began, festival organisers sent speakers a code of conduct – a code that drove more than 50 authors to make the difficult decision to pull out.

The code was intended to ensure the event’s safety, with a requirement to “avoid language or topics that could be considered inflammatory, divisive, or disrespectful”. Yet distressed speakers argued it made them feel culturally unsafe. Speakers on panels presented by La Trobe University were also required to employ a contested definition of antisemitism.

The incident is the most recent in a series of controversies in which progressive writers and artists have faced restrictions and cancellations, with organisations citing “safety” as the reason. They include libraries cancelling invited speakers and asking writers to avoid discussing Gaza, Palestine and Israel.

How did speech rules developed and promoted by the progressive left – rules promoting cultural safety and safe spaces – become tools that could be wielded against it?

Applying ‘safety’ to speech

Over recent years, “safety” – including in terms like “safe spaces” and “cultural safety” – has become a commonly raised ethical concern. Safe-speech norms often arise in the context of public deliberation, education and political speech.

“Safe spaces” are places where marginalised groups are protected against harassment, oppression and discrimination, including through speech like microaggressions, unthinking stereotypes and misgendering. Within safe spaces, protected groups are encouraged and empowered to speak about their experiences and needs.

Similarly, “cultural safety” refers to environments where there is no challenge or denial of people’s identities, allowing them to be genuinely heard. This can be crucial for First Nations communities, especially in health and legal contexts.

Safe-speech norms are therefore complex. They involve the freedom to speak, but also freedom from speech.

This way of thinking takes a broad view of the kinds of speech that can be interpreted as harmful or violent. Harmful speech does not just include hate speech and incitement. It also includes speech with unintended consequences and speech that challenges a person’s perceived identity.

These safe-speech norms, increasingly adopted in universities and other broadly progressive organisations, should be distinguished from “psychological safety”. This earlier concept refers to creating environments – such as workplaces – where it is safe to speak up, including to raise concerns or ask questions.

While psychological safety is a general norm protecting all parties, the more recent safe-speech norms protect specific marginalised groups. They aim to push back against larger systemic forces like racism or misogyny that would otherwise render those groups oppressed or unsafe. In some cases, the prioritisation of safety has led to deplatforming of speakers at universities.

With this special focus on oppressed minorities and heightened sensitivity to speech’s negative impacts, applying these norms has become a familiar part of progressive social justice efforts (sometimes pejoratively called “wokism”). Now, conservatives and others are employing the language of cultural safety to close down discussion of topics such as the war in Gaza.

Are safe-speech norms controversial?

By constraining what can be said, safe-speech norms impinge on other potentially relevant ethical norms, such as those of public deliberation. These “town square” norms aim to encourage a diversity of views and allow space for a robust dialogue between different perspectives.

Public deliberation norms might be defended as part of the human right to free speech, or because arguing and deliberating with other people can be an important way of respecting them.

Alternatively, public deliberation can be defended by appeal to democracy, which requires more than merely casting votes. Citizens must be able to hear and voice different perspectives and arguments.

Those in favour of free speech and the public square will look suspiciously at safe-speech norms, worrying that they give rise to the well-known risks of political censorship. Thinkers like Jonathan Haidt and Greg Lukianoff explicitly criticise “safetyism”, arguing the prioritisation of emotional safety inappropriately coddles young people.

Supporters of safe-speech norms might respond in different ways to these objections. One response might be that safety doesn’t intrude very much on dialogue anyway (at least, not on the type of dialogue worth having). Another response might be to challenge the value of public debate itself, seeing any system that does not explicitly work to support the marginalised as inherently oppressive.

Yet another response might be to query whether writers festivals are an apt place for public debate. Most speakers want an enjoyable experience and to promote their book (even when such books explore contentious ideas). Many in the audience will be supportive of the authors’ ideas and positions. Some will even be fans. Maybe it’s not so bad if most festival sessions are “love-ins”.

Prohibited vs. protected

In order to protect and empower specific marginalised groups, safe-speech norms both support and restrain speech. So long as the views of these protected groups are relatively aligned with each other, these norms work coherently. The speech that is being prohibited doesn’t overlap with the speech that is being protected.

But what happens when members of two marginalised groups have stridently opposed views, and the words they use to decry injustice are called unsafe by their opponents? Once this happens, the speech that one group needs protected is the same speech that the other group needs prohibited.

Perhaps it was inevitable that the internal contradictions of safe-speech norms would eventually create such problems. In Australia, as in many countries, this was triggered by the October 7 Hamas atrocity and Israel’s unrelenting and brutal military response. Jews and Palestinians are both vulnerable minorities who face the well-known bigotries of antisemitism and Islamophobia respectively. Both can reasonably demand the protection of safe-speech norms.

However, is each side interested in respecting the other’s right to such norms? Author and academic Randa Abdel-Fattah has reportedly alleged on social media that, if you are a Zionist, “you have no claim or right to cultural safety”. In turn, she says she has been harassed and threatened over her views on the war in Gaza, and public institutions hosting her “have been targeted with letters defaming me and demanding I be disinvited”.

Perhaps the time has come to acknowledge that safe-speech norms were never as straightforward or innocuous as they first appeared. They require a form of censorship that involves not only choosing political sides, but inevitably making fine-grained judgements about which opposing minority deserves protection at the expense of the other.

Indeed, safe-speech norms may themselves be exclusionary. The US organisation “Third Way” advocates for moderation and centre-left policies. In a recent memo it said research among focus groups had consistently found ordinary people interpreted key terms from progressive political language as alienating and arrogant.

According to Third Way, the term “safe space” (among others) communicates the sense that, “I’m more empathetic than you, and you are callous to hurting others’ feelings.”

With all this in mind, I find it hard to disagree with author Waleed Aly’s recent reflection that “in arenas dedicated to public debate, safety makes a poor organising principle”. Efforts to support and include marginalised voices are laudable. However, safe-speech norms are a deeply problematic – and perhaps ultimately contradictory – tool to use in pursuing that worthy goal.

 

This article was originally published in The Conversation.


AI and rediscovering our humanity

With each passing day, advances in artificial intelligence (AI) bring us closer to a world of general automation.

In many cases, this will be the realisation of utopian dreams that stretch back millennia – imagined worlds, like the Garden of Eden, in which all of humanity’s needs are provided for without reliance on the ‘sweat of our brows’. Indeed, it was with the explicit hope that humans would recover our dominion over nature that, in 1620, Sir Francis Bacon published his Novum Organum. It was here that Bacon laid the foundations for modern science – the fountainhead of AI, robotics and a stack of related technologies that are set to revolutionise the way we live. 

It is easy to underestimate the impact that AI will have on the way people work and live in societies able to afford its services. Since the Industrial Revolution, there has been a tendency to make humans accommodate the demands of industry. In many cases, this has led to people being treated as just another ‘resource’ to be deployed in service of profitable enterprise – often regarded as little more than ‘cogs in the machine’. In turn, this has prompted an affirmation of the ‘dignity of labour’, the rise of labour unions and, with the extension of the voting franchise in liberal democracies, legislation regulating working hours, standards of safety, etc. Even so, in an economy that relies on humans to provide the majority of labour, too much work still exposes people to dirt, danger and mind-numbing drudgery.

We should celebrate the reassignment of such work to machines that cannot ‘suffer’ as we do. However, the economic drivers behind the widescale adoption of AI will not stop at alleviating human suffering arising out of burdensome employment. The pressing need for greater efficiency and effectiveness will also lead to a wholesale displacement of people from any task that can be done better by an expert system. Many of those tasks have been well-remunerated, ‘white collar’ jobs in professions and industries like banking, insurance, and so on. So, the change to come will probably have an even larger effect on middle-class rather than working-class people. And that will be a very significant challenge to liberal democracies around the world.

Change of the extent I foresee does not need to be a source of disquiet. With effective planning and broad community engagement, it should be possible to use increasingly powerful technologies in a constructive manner that serves the common good. However, to achieve this, I think we will need to rediscover what is unique about the human condition. That is, what is it that cannot be done by a machine – no matter how sophisticated? It is beyond the scope of this article to offer a comprehensive answer to this question. However, I can offer a starting point by way of an example.

As things stand today, AI can diagnose the presence of some cancers with a speed and effectiveness that exceeds anything that can be done by a human doctor. In fact, radiologists, pathologists and the like are amongst the earliest of those who will be made redundant by the application of expert systems. However, what AI cannot do is replace a human when it comes to conveying news of an illness to a patient. This is because the consoling touch of a doctor has a special meaning, due to the doctor knowing what it means to be mortal. A machine might be able to offer a convincing simulation of such understanding – but it cannot really know. That is because the machine inhabits a digital world whereas we humans are wholly analogue. No matter how close a digital approximation of the analogue might be, it is never complete. So, one obvious place where humans might retain their edge is in the area of personal care – where the performance of even an apparently routine function might take on special meaning precisely because another human has chosen to care. Something as simple as a touch, a smile, or the willingness to listen could be transformative.

Moving from the profound to the apparently trivial, one can more generally imagine a growing preference for things that bear the mark of their human maker. Such preferences are already revealed in purchases of goods made by artisanal brewers, bakers, etc. Even the humble potato has been affected by this trend – as evidenced by the rise of the ‘hand-cut chip’.

In order to ‘unlock’ latent human potential, we may need to make a much sharper distinction between ‘work’ and ‘jobs’.

That is, there may be a considerable amount of work that people can do – even if there are very few opportunities to be employed in a job for that purpose. This is not an unfamiliar state of affairs. For many centuries, people (usually women) have performed the work of child-rearing without being employed to do so. Elders and artists, in diverse communities, have done the work of sustaining culture – without their doing so being part of a ‘job’ in any traditional sense. The need for a ‘job’ is not so that we can engage in meaningful work. Rather, jobs are needed primarily in order to earn the income we need to go about our lives. 

And this gives rise to what may turn out to be the greatest challenge posed by the widescale adoption of AI. How, as a society, will we fund the work that only humans can do once the vast majority of jobs are being done by machines?  


We can raise children who think before they prompt

We may not be able to steer completely clear of AI, or we may not want to, but we can help our kids to understand what it is and isn’t good for, and make intentional decisions about when and how to use it.  

ChatGPT and other artificial “help” is already so widely used that even parents and educators who worry about the ways it might interfere with children’s learning and development seem to accept that it’s a tool their kids will have to learn to use.

In her forthcoming book The Human Edge, critical thinking specialist Bethan Winn says that because AI is already embedded in our world, the questions to ask now are around which human skills we need to preserve and strengthen, and where we draw the line between assistance and dependence. 

By taking time to “play, experiment, test hypotheses, and explore”, Winn suggests we can equip our kids and ourselves with the tools to think critically. This will help us “adapt intelligently” and set our own boundaries, rather than defaulting lazily and unthinkingly to what “most people” seem okay with. 

What we view as “good”, and what decisions we make – and encourage or discourage our children to make – will depend on what we value. One of the reasons corporations and governments have been so quick to embrace AI is that they prize efficiency, productivity and profit, and fear falling behind. But in the private sphere, we can make different decisions based on different values.

If, for example, we value learning and creativity, the desire to build up skills and knowledge will help us to resist using AI to brainstorm and create on our behalf. We’ll need to help our kids see that how they produce something can matter just as much as what they produce, because it’s natural to value success too. We’ll also need to make learning fun and satisfying, and discourage chasing short-term wins at the expense of long-term gains.

My husband and I are quick to share cautionary tales – from the news, books, podcasts, our own experiences and those of our friends – about less-than-ideal uses of AI. He tells funny stories about the way candidates he interviews misuse it; I tell funny stories about how disastrously I’d misquote people if I relied on generated transcripts. I also talk about why I’m not tempted to rely on AI to write for me – I want to keep using my brain, developing my skills, choosing my sources; I want to arrive at understanding and insight, not generate it, even if that takes time and energy. (I also can’t imagine prompting would be nearly as fun.)

Concern for the environment can also offer incentive to use our brains, or other less energy-intensive tools, before turning to AI. And if we value truth and accuracy, the reality that AI often presents false information as fact will provide incentive to think twice before using it, or strong motivation to verify its claims when we do. Just because an “AI overview” is the first result in an internet search doesn’t mean we can’t scroll past it to a reputable source. Tech companies encourage one habit, but we can choose to develop another.

And if we’ve developed a habit of keeping a clear conscience, and if we value honesty and integrity, we’ll find it easier to resist using AI to cheat, no matter how easy or “normal” it becomes. We’ll also be concerned by the unethical ways in which large language models have been trained using people’s intellectual property without their consent. 

As our kids grow more independent, they might not retain the same values, or make the same decisions, as we do. But if they’ve formed a habit of using their values to guide their decisions, there’s a good chance they’ll continue it.

In addition to hoping my children adopt values that will make them wise, caring, loving, human beings, I hope they’ll understand their unique value, and the unique value all humans have. It’s the existential threat AI poses, when it seems to outperform us not only intellectually but relationally, that might be the most concerning one of all. 

In a world where chatbots are designed to flatter, befriend, even seduce, we can’t assume the next generation will value human relationships – even human life and human rights – in the way previous generations did. Already, some prefer a chatbot’s company to that of their own friends and family. 

Parents teaching their children about values is nothing new. Nor is contradicting our speech with our actions in ways our children are bound to notice. We know we should set the example we want our kids to follow, but how often do we fall short of our own standards? In our defence, we’re only human.

We’re “only” human. In other words, we’re not divine. And AI is neither human nor divine. Whether or not we agree that humans are made in the image of God – are “the work of his hands” – I hope we can agree that we’re more valuable than the work of our hands, no matter how incredible that work might be. 

Of all the opportunities AI affords us and our children, the prompt to consider what it means to be human, to ask ourselves deep questions about who we are and why we’re here, may be the most exciting one of all. 


Our regulators are set up to fail by design

Our society is built on trust. Most of the time, we trust institutions and the government to do what they say they will. But when they break that trust – by not keeping their promises or acting unfairly – that’s when things start to fall apart. The system stops working for the people it’s supposed to serve.

As a result, we trust regulators to protect the things that matter most in our society. Whether it’s holding institutions to account, or ensuring our food, water and transport are safe, a regulator’s role is to maintain society’s safety net.

But when something goes wrong, the finger usually points straight at the regulator. And while it’s tempting to blame regulators when things fail, new policy research from James Shipton, former Chairman of ASIC, suggests we’re asking the wrong question.

The real issue isn’t just who’s doing the job, it’s how the whole system is built.

Shipton is working towards optimising regulation by improving regulatory design, strategy and governance. As a Fellow of The Ethics Centre, he has engaged with industry to develop a better understanding of regulators and the regulated. This work aims to crystallise the purpose of regulation and create a pathway where that purpose is most likely to be achieved.

Shipton’s paper, The Regulatory State: Faults, Flaws and False Assumptions, takes the entire regulatory system in Australia into account. His core message is simple but urgent: our regulators are set up to fail by design. 

Right now, most regulators operate in a system that lacks clear direction, support, and accountability. Many don’t have a clearly defined purpose in law. That means the people enforcing the rules aren’t always sure what they’re meant to achieve.  

This confusion creates a dangerous “expectations gap” where the public thinks regulators are responsible for outcomes they were never actually empowered to deliver. When regulators fall short, they wear the blame, even when the system itself is broken. 

Shipton identifies twelve major flaws in our regulatory system and, while they might sound technical, they have real-world consequences. He starts with the concept that our regulators are monopolies by design. Each regulator is the only body responsible for its area – there’s no competition, no pressure to innovate, and very little incentive to improve. In the private sector, companies that fail lose customers and reputations, and customers are free to go elsewhere. In regulation, there’s no alternative.

The heart of Shipton’s argument is this: credibility is key. It’s not enough for a regulator to have legal authority; they need public trust. And that trust only comes when the system they work within is built for clarity, accountability, and ethical responsibility.

For example, in aviation, everyone from pilots to engineers shares a common goal: safety. The whole sector becomes a partner in regulation. But in most industries, that kind of alignment doesn’t exist, often because the system hasn’t been designed to make it happen.

Shipton stresses that design matters. Regulators need clear goals, realistic expectations, regular performance reviews, and laws that actually match the industries they oversee. We don’t need another inquiry into regulatory failure. We need to ask why failure keeps happening in the first place. And the answer, Shipton says, is clear: the entire regulatory architecture in Australia needs redesigning from the ground up. 

This doesn’t mean tearing everything down. It means recognising that public trust is earned through structure. It means giving regulators the tools, support, and clarity they need to do their job well and making sure they’re accountable for how they use that power. 

If we want fairness, safety, and integrity in the things that matter most, we need a regulatory system we can trust. And as Shipton makes clear, trust starts with design. 

 

James Shipton is a Senior Fellow at Melbourne Law School, The University of Melbourne; a Fellow of The Ethics Centre; and a Visiting Senior Practitioner at the Commercial Law Centre, University of Oxford.


David and Margaret spent their careers showing us exactly how to disagree

When David Stratton – critic, TV presenter and hero to a generation of movie lovers – died last week at 85, he was immediately honoured as one of this country’s true soldiers of cinema: a relentless advocate who spent his life championing the artform he loved.  

Cinema had a loyal, passionate and fiercely intelligent friend in David Stratton. He was a man who worked hard to make loving movies seem serious and worthwhile – so much more than just a hobby. 

But over the course of his long and varied career, Stratton didn’t just kindly, patiently and honestly explain his passions. Along with his onscreen co-host Margaret Pomeranz, he also taught us a deeply valuable ethical lesson, time and time again: a lesson in the fine art of disagreement.  

What do Lars Von Trier and Vin Diesel have in common?

Pomeranz and Stratton were, from the very start of their time together, opposites. Pomeranz, who began her career in television as a producer and was encouraged by Stratton to move in front of the camera, prized a curiously outrageous form of entertainment far more than her co-host.

Stratton loved to laugh, make no mistake, but he drew a line at anything he considered tacky. Pomeranz, by contrast, loved that stuff. When they butted heads, it was over films like Team America: World Police (Pomeranz loved it; Stratton hated it); Sex And The City 2 (Pomeranz said it contained a “jacket she’d kill for” and gave it three stars; Stratton called it “offensive”).  

These differences in opinion weren’t just casual “let’s agree to disagree” partings of ways. Once, memorably, Pomeranz gave Lars Von Trier’s Dancer in the Dark five stars, while Stratton gave it zero. When Pomeranz stood up for Vin Diesel, a performer Stratton hated, Stratton lightly poked fun at her, saying she wanted Diesel to “save her”. Possibly their biggest disagreement was over the classic Australian film Romper Stomper, starring Russell Crowe as a wild-eyed neo-Nazi. Stratton not only thought the film was terrible, he thought it was actively ethically harmful. Pomeranz gave it five stars.

Sometimes these disagreements got a little heated. Stratton could be dismissive; Pomeranz seemed occasionally exasperated with him. But the pair never lost respect for one another, no matter how far apart their tastes pulled them – and, importantly, they never started throwing barbs at each other. Their disagreement was localised to the thing they were disagreeing about, not ad hominem snipes at the other’s character.  

Pomeranz herself acknowledged this, in a recent tribute written to honour her friend and colleague. “I think it’s extraordinary that, over all the time that David and I worked together, we never had a falling out”, she wrote. Disagreements between the two were common. But true breaks in the relationship – true threats to their working together – never were. 

The power of disagreement

Sometimes, disagreement is cast as an impediment to societal functioning. We can all be guilty of occasionally speaking as though disagreement is the enemy – as though for us all to flourish, we should all get along, all the time. That’s not to say disagreement should be encouraged on every matter – its power is not a free card to put everything up for debate, no matter how harmful.

But the history of philosophy shows us there’s power in sometimes parting opinion. Plato, for instance, presented almost all of his arguments in the form of debates, with characters going back and forth on what constitutes correct behaviour. Plato’s “dialogues”, and thus his entire ethical worldview, were fashioned out of disagreement.

It is in disagreement, after all, that we get to honour one of the beautiful things about our world – difference, uniqueness, and the full richness of human experience.

It would be a very boring, and perhaps even insidious, world if we all thought the same thing. After all, a forced unity of opinion is one of the hallmarks of fascism.  

Disagreements, if handled and conducted well, can also guide us away from extremes. In some matters, truth lies in the middle of two poles. So it went on The Movie Show, at least – I am not convinced we always agreed even with our favourite of the pair. As viewers, our own tastes fluctuated between the extremes of Pomeranz and Stratton. In their disagreements, we could pick and choose elements of their tastes, and construct our own.

And again, these were debates that never descended into name-calling, or anger. In this, Pomeranz and Stratton taught us another ethical lesson – that we can treat someone disagreeing with us as someone offering us kindness. Having to justify and argue for our own positions helps us better understand them. And it helps us better understand the world around us; the people around us.  

Laying my own cards on the table, I’ve always been more of a Pomeranz person (I love Von Trier, Romper Stomper, and Team America). But that’s just the thing. No matter how much I, a viewer raised on The Movie Show, found myself grumpily disagreeing with Stratton, it never made me dislike him. And when he passed, the loss I felt was not just the loss of a man I had always admired. It was the loss of a defender of art and a good sparring partner – no matter that it was one-sided sparring, through the TV. Disagreement done well is a gift. And no one more generously gave out that gift than David Stratton.


That’s not me: How deepfakes threaten our autonomy

In early 2025, 60 female students from a Melbourne high school had fake, sexually explicit images of themselves shared around their school and community.

Less than a year prior, a teenage boy from another Melbourne high school created and spread fake nude photos of 50 female students and was let off with only a warning. 

These fake photos are also known as deepfakes, a type of AI-generated or AI-altered photo, video or audio that fabricates someone’s image. As the technology becomes more accessible and more convincing, its harmful uses are countless: porn without consent, financial loss through identity fraud, and harm to a political campaign or even democracy itself through political manipulation.

While these are significant harms, they all already exist without the aid of deepfakes. Deepfakes add something specific to the mix, something that isn’t necessarily being accounted for in either the reaction to or the prevention of harm. This technology threatens our sense of autonomy and identity on a scale that’s difficult to match.

An existential threat

Autonomy is our ability to think and act authentically and in our best interests. Imagine a girl growing up with friends and family. As she gets older, she starts to wonder if she’s attracted to women as well as men, but she’s grown up in a very conservative family and around generally conservative people who aren’t approving of same-sex relations. The opinions of her family and friends have always surrounded her, so she’s developed conflicting beliefs and feelings, and her social environment is one where it’s difficult to find anyone to talk to about that conflict. 

Many would say that in this situation, the girl’s autonomy is severely diminished because of her upbringing and social environment. She may have the freedom of choice, but her psychology has been shaped by so many external forces that it’s difficult to say she has a comprehensive ability to self-govern in a way that looks after her self-interests. 

Deepfakes have the capacity to threaten our autonomy in a more direct way. They can discredit our own perceptions and experiences, making us question our memory and reality. If you’re confronted with a very convincing video of yourself doing something, it can be pretty hard to convince people it never happened – videos are often seen as undeniable evidence. And more frighteningly, it might be hard to convince yourself; maybe you just forgot… 

Deepfakes make us fear losing control of who we are, how we’re perceived, what we’re understood to have said, done or felt.

Like a dog seeing itself in the mirror, we are not psychologically equipped to deal with them. 

This is especially true when the deepfakes are pornographic, as is the case for the vast majority of deepfakes posted to the internet. Victims of these types of deepfakes are almost exclusively women and many have commented on the depth of the wrongness that’s felt when they’re confronted with these scenes: 

“You feel so violated…I was sexually assaulted as a child, and it was the same feeling. Like, where you feel guilty, you feel dirty, you feel like, ‘what just happened?’ And it’s bizarre that it makes that resurface. I genuinely didn’t realise it would.”  

Think of the way it feels to be misunderstood, to have your words or actions be completely misinterpreted, maybe having the exact opposite effect you intended. Now multiply that feeling by the possibility that the words and actions were never even your own, and yet are being comprehended as yours by everyone else. That is the helplessness that comes with losing our autonomy.

The courage to change the narrative

Legislation is often seen as the end goal for major social issues – a framing that some relationships and sex education experts see as a major problem. The government is a slow beast. It was only in 2024 that the first ban on non-consensual visual deepfakes was enacted, and only in 2025 that this ban was extended to the creation, sharing or threatening of any sexually explicit deepfake material.

Advocates like Grace Tame have argued that outlawing the sharing of deepfake pornography isn’t enough: we need to outlaw the tools that create it. But these legal battles are complicated and slow. We need parallel education-focused campaigns to support the legal components.  

One of the major underlying problems is a lack of respectful relationships and consent education. Almost 1 in 10 young people don’t think that deepfakes are harmful because they aren’t real and don’t cause physical harm. Perspective-taking skills are sorely needed. The ability to empathise, to fully put yourself in someone else’s shoes and make decisions based on respect for someone’s autonomy is the only thing that can stamp out the prevalence of disrespect and abuse. 

On an individual level, making a change means speaking with our friends and family, people we trust or who trust us, about the negative effects of this technology to prevent misuse. That doesn’t mean a lecture, it means being genuinely curious about how the people you know use AI. And it means talking about why things are wrong. 

We desperately need a culture, education and community change that puts empathy first. We need a social order that doesn’t just encourage but demands perspective-taking, to undergird the slow reform of law. It can’t just be left to advocates to fight against the tide of unregulated technological abuse – we should all find the moral courage to play our role in shifting the dial.


Economic reform must start with ethics

With inflation tamed, interest rates falling and wages rising, the government of Anthony Albanese has worked itself into a position where it can now develop a range of longer-term economic initiatives.

With this in mind, the government will convene an Economic Roundtable next week to consider ideas about how best to achieve sustainable prosperity.

I suspect that the Roundtable will focus on a range of ‘big ticket’ ideas for innovation and reform in areas such as tax, energy, infrastructure, industrial relations and the like. If all goes well, equal attention will be given to areas of social policy that have a major impact on the economy – including in areas such as childcare, mental health, social welfare, etc. Although not falling under a narrow heading of ‘economic policy’, all of these areas have a significant impact on the productive capacity of the Australian economy. In other words, a strong economy depends as much on sound ‘social’ policies as it does on sound ‘economic’ policies. 

However, there is a deeper ethical dimension that we hope will also be taken into account. Over a period of four years, Deloitte Access Economics has been exploring the link between ethics and the economy. Most recently, its work has zeroed in on the connection between ethics and productivity. Its findings are as follows:

The relationship between ethics and productivity is increasingly recognised in economic literature and international practice. 

There is capacity for trust and ethical behaviour to: 

  • Boost worker wellbeing and mental health, which are directly linked to labour productivity. 
  • Improve business performance, with higher ethical standards leading to stronger returns on investment. 
  • Reduce red tape, by lowering the perceived need for regulation in high-trust environments. 
  • Enable economic reform, by building public support for complex policy changes. 
  • Accelerate the uptake of technology, such as artificial intelligence, where trust remains a key adoption barrier. 

This has remarkable implications for our nation’s prosperity (in both economic and social terms): 

  • A 10% improvement in ethical behaviour yields a 2.7% wage increase and a 1% gain in mental health, worth over $23 billion across the economy.
  • A standard deviation increase in business governance is associated with a 7% increase in return on assets. 
  • Countries with higher social trust experience 15-19% fewer regulatory procedures to start a business. 
  • Aligning Australia’s trust levels with global leaders could lift GDP by $45 billion, or $1,800 per person. 

There should be no mystery in this, and the effects are clear and simple. Much of it comes down to the possibility of reform (of any kind). Identifying areas of reform is relatively easy. The difficulty lies in their adoption. This is because all reform is subject to resistance from those who fear that they will be left worse off. In turn, strong resistance creates friction that either slows or prevents reform – inevitably leading to sub-optimal outcomes for society as a whole. The number of people who fear being left worse off is often greater than the number who will actually be adversely affected. And even when people recognise that they are likely to benefit from reform, they will still oppose it if they believe the ‘people in charge’ cannot be trusted to ensure that the benefits and burdens are fairly distributed. In other words, it all boils down to questions of trust. And as economists have known since the dawn of their ‘dismal science’: high trust = low cost, and low trust = high cost.

Yet, trust itself is a function of ethical alignment. Ethics, and the trust it engenders, reduces ‘friction’. Thus, trust is a catalyst and enabler of productive reform. To put it simply: 

Good ethics → improved trust → greater prosperity. 

There are some deep paradoxes in this outcome. The most challenging of these is that although ethics produces a demonstrable economic dividend, it only has maximum effect if people act for non-instrumental reasons. In other words, ‘you don’t get the dividend if you do it for the dividend.’ 

We have long believed that the whole of Australia would benefit if, as a society, we invested more in revitalising our ‘ethical infrastructure’ alongside the physical and technical infrastructure that typically receives all of the attention and funding.

The evidence is clear that good ethical infrastructure enhances the ‘dividend’ earned from these more typical investments – while bad ethical infrastructure only leads to sub-optimal outcomes.

I doubt that the link between ethics, trust and prosperity will capture any headlines when the Economic Roundtable is convened. But wouldn’t it be great if it could at least be noted as a vital enabler of any reform that hopes to succeed?


Give them a piggy bank: Why every child should learn to navigate money with ethics

“Mum, why can we buy toys, but others can’t?”

My son’s question seemed simple, but behind it lay deeper questions about scarcity and fairness. And like many parents, I found myself unprepared to explain a system even most adults struggle to fully understand. 

Growing up in a remote village in northern Vietnam, I experienced scarcity first-hand. Only one child per family could attend school while the rest worked. These were choices shaped by poverty and silence. Worse, no one asked why. And that’s what drew me to economics: to find not just answers, but better questions and solutions. 

What if we taught kids not just how economies work, but why they work that way? What if, from a young age, they learned that the economy isn’t just numbers, but a system shaped by values, power, and people? It could transform their worldview and their future.  

Australia is facing a quiet crisis in economic and financial literacy. More than 1 in 6 Australian 15-year-olds lack basic financial literacy. The issue is particularly stark among young women: only 15% of girls meet standards, compared to 28% of boys. Meanwhile, enrolments in Year 12 economics have dropped by nearly 70% since the 1990s. The Reserve Bank of Australia warns that a lack of economic understanding not only limits personal well-being but also weakens national participation. The National Australia Bank has reported increasing financial vulnerability among youth with low literacy levels.

But this isn’t just a gap in knowledge, it’s a growing divide in civic understanding. 

Why financial and economic literacy must be taught with ethics

Too often, financial and economic literacy are conflated. In truth, they are distinct yet deeply complementary. Financial literacy teaches individuals how to manage money. Economic literacy explains the systems shaping those decisions. One is practical, the other structural. 

What’s missing is ethics – understanding who the economy should serve. This requires critical thinking about values, justice, and responsibility. It involves teaching children that every economic decision is a moral one, and that their choices can help shape a fairer world.  

By teaching financial and economic literacy alongside ethics, we not only teach survival skills, but cultivate thoughtful participants in a fairer economy.

This approach encourages them to assess trade-offs, consider long-term impacts, and understand the values reflected in their choices. It sharpens understanding of the hidden costs of our financial choices: the underpaid worker behind a “cheap” shirt, the personal data exchanged for a “free” app, and so on. In learning to not just ask “Can I afford this?” but “What does this cost others?”, students can develop both agency and empathy. 

Three timeless economic lessons every child should learn  

1. Choices and scarcity aren’t just constraints, they’re questions of justice

Economics begins with scarcity: we live in a world of limited resources, so choices must be made. But helping children make smart trade-offs is often where financial literacy stops. 

Ethical economics asks a harder question: Why are some people forced to make impossible choices while others never have to choose at all?

Our resources are not evenly distributed, and how they are distributed reveals the underlying values of our economic systems. Furthermore, the mechanics of limited choices reveal the moral concerns of our society – issues that ethical economics serves to investigate.

When we teach children only to manage scarcity, to see limited choice as inevitable, we risk normalising injustice. But when we teach them to understand and question it – Who sets the rules? Who is left out? Why? – we nurture civic responsibility and moral courage. 

2. Incentives and transactions: the ethics beneath every exchange

Children learn the logic of trade early: stickers for chores, screen time for good behaviour, lunchbox swaps at school. These are their first lessons in transactional incentives, one of economics’ most powerful tools. 

But incentives are not morally neutral. They reflect what we value and who we reward.  

When we teach economics as just transactions, kids learn to see the world only through profit and loss. Ethics, however, reminds us that not all trades are fair or dispassionate, and not all incentives are neutral. Behind every transaction is a judgement. Behind every incentive is a set of assumptions. Without accounting for context, incentives risk rewarding privilege while penalising disadvantage.

Teaching children to recognise this helps them move beyond “getting what you deserve” to asking: Who is allowed to participate?

This teaches not just how to respond to incentives, but how to question what they promote and whom they serve. 

3. Markets need morality

Markets are often framed as natural forces: efficient, self-correcting, and impartial. But they’re not. 

Scottish economist Adam Smith’s “invisible hand” theory describes how individual self-interest can lead to collective benefit. Yet Smith, also a moral philosopher, warned that markets only work when anchored in trust, justice and social responsibility. 

English economist and philosopher John Maynard Keynes also argued that when markets fail, as they do during crises like the Great Depression, governments have an ethical obligation to intervene. During COVID-19, children saw this play out in real time: parents receiving stimulus payments or rent relief, and shoppers panic buying. But did they understand why?

Well, they should. That’s the real lesson: markets need rules, and rules need values. Who gets what, and why? These questions encourage children to investigate the motives built into systems and to hold those systems accountable to their ethical obligations. Teaching students that both markets and governments are designed by people and reflect our collective choices helps them understand they can shape these systems too.

Raising ethical citizens, not just economic agents 

Teaching economic literacy without ethics risks raising informed consumers but disengaged citizens. But when we teach children that every economic choice reflects a set of values, we equip them with something far more powerful than a calculator; we give them a moral compass. 

As Nobel Laureate Esther Duflo reminds us, ethical economic education is not about ideology. It is about humility, empathy and evidence. It is about empowering people to improve lives, not only their own, but others’. 

So yes, give a child a piggy bank, and they may save for life. But teach them how economies work, who they serve, and what they exclude, and they will reimagine those systems with care. That is what it means to raise not just capable earners, but ethical citizens. And that is what we owe the next generation.


The ethical honey trap of nostalgia

Marvel’s in trouble. The once box office-dominating brand has hit every branch on the way down of late, from cancelled shows to thinning audiences. So, consider it a sign of the times that the folks behind the behemoth have attempted to enliven their cinematic future not by looking forward, but by casting their gaze way back.

The Fantastic Four, the third attempt to make the family of superheroes work on the big screen, jettisons the usual digital sheen of superhero properties in favour of old-fashioned nostalgia. Set in an alternate, 1950s-style world, the film’s visual style and production values have been heralded as a bold break from the norm – which is funny, given how deliberately retro they are.

Not only has ‘50s fantasy been done before in, y’know, the ‘50s, but we’ve already had multiple recycles of that trend, from the glossy world of Mad Men to the screwball textures of films like Down With Love. Marvel’s “new” style is not just a throwback. It’s a throwback of a throwback.  

The antidote to irony

The film’s not alone in that open nostalgia, either. James Gunn’s Superman reboot attempts to make the gung-ho, gee-whiz character work by similarly leaning into an old-school fantasy, where journalism is heralded as truth-telling, and unabashed optimism is the order of the day. Then there’s Lena Dunham’s new rom-com series Too Much, which offsets its decidedly modern story with near-constant references to Jane Austen-inflected romance, openly lusting after a bygone world where manners ruled the day, and sincerity was the default response.


This sudden rush of nostalgia can be read, above everything else, as a society trying to shake itself free of cynicism and layers of irony. The ’90s saw postmodernism go mainstream, with every cinematic hero affecting a wise-cracking, subversive mode, and over the decade and a bit that followed, that irony only got more layered. Eventually, you end up with someone like Deadpool, who can’t take a breath without acknowledging that he’s a character in a movie.

It was the writer David Foster Wallace who saw where this was all going to take us, writing decades ago that “irony and ridicule are entertaining and effective, and that, at the same time, they are agents of a great despair and stasis in U.S. culture”. Irony reduces and destroys, and is inherently oppositional in nature – not just aesthetically, but ethically, leaving us paralysed by our own smartness and subversion, without any energy left over to actually do anything.

And our need to do something is greater than ever – as the world gets darker and more dangerous, it makes sense that we would turn back to the forward thinking and action of the ’50s. The Fantastic Four and Superman come from a time when heroes actually believed that the world was a good place, and fought to make it even better. Being nostalgic for them means being nostalgic for proactive ethical agents who, rather than spending their time brooding and moaning, fought for something. In short, it means being nostalgic for hope.

Nostalgia’s trap

But if we seek to distance ourselves from the dangerous trap of irony, then we should be careful not to run too open-armed into nostalgia. Nostalgia, after all, is its own kind of trap. Rather than making something genuinely new – something that responds to the world as it actually is – nostalgia is inherently regressive.

Philosopher Mark Fisher wrote about exactly this, seeing nostalgia as part of the “slow cancellation of the future”; the eradication of possibility and free-thinking. No wonder that conservatives the world over, Donald Trump chief among them, have aggressively nostalgic worldviews: nostalgia, when used improperly, can be a form of ethical despair and stasis too.  

The nostalgia trap is laid bare in another film released this year – Danny Boyle’s 28 Years Later. In the film, an isolated community trying to get by in a post-apocalyptic world finds its strength in visions of a nostalgic Britain. In one montage early in the film, two of our heroes making their way across a dangerous landscape are intercut with images of English soldiers at Agincourt, all scored to a Rudyard Kipling poem – Kipling being that hero of British imperialism. The heroes in Boyle’s film try to survive and draw their strength from remembering how things used to be. And in turn, they end up trapped, locked into regressive and cruel worldviews that damage them.

That thesis is made shockingly clear in the last five minutes of the film, where the young hero (spoiler alert) finds himself saved by a roaming gang of thugs all dressed like disgraced paedophile Jimmy Savile. The young hero accepts them, open-armed. The kind of nostalgia that he has been raised on has no room for nuance. It’s essentially flattening, making a sort of caricature of England’s past.

In this way, Boyle criticises the nullifying effects of nostalgia: it casts the entire past as something to return to, without remembering its horrors and its shortcomings. If all we ever want to be is what we once were, then we will never truly change.

Nostalgia might be useful to energise us, but only if it’s used selectively and critically – if we aim to be better than what we once were, not merely the same.

As with so many things, we find our way forward through the middle ground. Not just recreating the ’50s, but taking what we want from it – the hope, the energy, the sense of optimism – and jettisoning what we don’t. The way to avoid the cancellation of the future, after all, is by believing that the future exists – that there’s still room, even now, to use the past to create something that has never occurred before.
