Ask an ethicist: Should I use AI for work?
Opinion + Analysis, Science + Technology, Business + Leadership
BY Daniel Finlay 8 SEP 2025
My workplace is starting to implement AI usage in a lot of ways. I’ve heard so many mixed messages about how good or bad it is. I don’t know whether I should use it, or to what extent. What should I do?
Artificial intelligence (AI) is quickly becoming unavoidable in our daily lives. Google something, and you’ll be met with an “AI overview” before you’re able to read the first result. Open up almost any social media platform and you’ll be met with an AI chat bot or prompted to use their proprietary AI to help you write your message or create an image.
Unsurprisingly, this ubiquity has rapidly extended to the workplace. So, what do you do if AI tools are becoming the norm but you’re not sure how you feel about it? Maybe you’re part of the 36% of Australians who aren’t sure if the benefits of AI outweigh the harms. Luckily, there are a few ethical frameworks to help guide your reasoning.
Outcomes
A lot of people care about what AI is going to do for them, or conversely how it will harm them or those they care about. Consequentialism is a framework that tells us to think about ethics in terms of outcomes – often the outcomes of our actions, but really there are lots of types of consequentialism.
Some tell us to care about the outcomes of rules we make, beliefs or attitudes we hold, habits we develop or preferences we have (or all of the above!). The common thread is the idea that we should base our ethics around trying to make good things happen.
This might seem simple enough, but ethics is rarely simple.
AI usage is having and is likely to have many different competing consequences, short and long-term, direct and indirect.
Say your workplace is starting to use AI tools. Maybe they’re using email and document summaries, or using AI to create images, or using ChatGPT like they would use Google. Should you follow suit?
If you look at the direct consequences, you might decide yes. Plenty of AI tools give you an edge in the workplace or give businesses a leg up over others. Being able to analyse data more quickly, get assistance writing a document or generate images out of thin air has a pretty big impact on our quality of life at work.
On the other hand, there are some potentially serious direct consequences of relying on AI too. Most public large language model (LLM) chatbots have had countless issues with hallucinations. This is the phenomenon where a model latches onto spurious patterns and confidently produces false or inaccurate information. Because chatbots are so heavily anthropomorphised, we lend them an even higher degree of confidence and trust, which makes these hallucinations all the more damaging to people on both a personal and business level.
Indirect consequences need to be considered too. The exponential increase in AI use, particularly LLM generative AI like ChatGPT, threatens to undo the work of climate change solutions by more than doubling the electricity needs of data centres, increasing our water footprint and greenhouse gas emissions, and putting unneeded pressure on the transition to renewable energy. This energy usage is predicted to double or triple again over the next few years.
How would you weigh up those consequences against the personal consequences for yourself or your work?
Rights and responsibilities
A different way of looking at things, that can often help us bridge the gap between comparing different sets of consequences, is deontology. This is an ethical framework that focuses on rights (ways we should be treated) and duties (ways we should treat others).
One of the major challenges that generative AI has brought to the fore is how to protect creative rights while still allowing the technology to be developed at scale. AI isn’t capable of creating ‘new’ things in the way that humans can, drawing on personal experience to shape their creations. Generative AI models are ‘trained’ by giving them access to trillions of data points, and those data points are real people’s writing, artwork and music. OpenAI (creator of ChatGPT) has explicitly said that it would be impossible to create these tools without access to and use of copyrighted material.
In 2023, the Writers Guild of America went on a five-month strike to secure better pay and protections against the exploitation of their material in AI model training and subsequent job replacement or pay decreases. In 2025, Anthropic agreed to a $1.5 billion settlement in a lawsuit over its piracy of over 500,000 books used to train its AI models.
Creative rights present a fundamental challenge to the ethics of using generative AI, especially at work. The ability to create imagery for free or at a very low cost with AI means businesses now have the choice to sidestep hiring or commissioning real artists – an especially fraught decision point if the imagery is being used with a profit motive, as it is arguably being made with the labour of hundreds or thousands of uncompensated artists.
What kind of person do you want to be?
Maybe you’re not in an office, though. Maybe your work is in a lab or field research, where AI tools are being used to do things like speed up the development of life-changing drugs or enable better climate change solutions.
Intuitively, these uses might feel more ethically salient, and a virtue ethics point of view could help make sense of that. Virtue ethics is about finding the valuable middle ground between extreme sets of characteristics – the virtues that a good person, or the best version of yourself, would embody.
On the one hand, it’s easy to see how this framework would encourage use of AI that helps others. A strong sense of purpose, altruism, compassion, care, justice – these are all virtues that can be lived out by using AI to make life-changing developments in science and medicine for the benefit of society.
On the other hand, generative AI puts another spanner in the works. There is an increasing body of research looking at the negative effects of generative AI on our ability to think critically. Overreliance and overconfidence in AI chatbots can lead to the erosion of critical thinking, problem solving and independent decision making skills. With this in mind, virtue ethics could also lead us to be wary of the way that we use particular kinds of AI, lest we become intellectually lazy or incompetent.
The devil in the detail
AI, in all its various capacities, is revolutionising the way we work and is clearly here to stay. Whether you opt in or not is hopefully still up to you in your workplace, but by using a few different ethical frameworks, you can prioritise your values and principles and decide whether, and what type of, AI usage feels right for you and your purpose.
Whether you’re looking at the short and long-term impacts of frequent AI chatbot usage, the rights people have to their intellectual property, the good you can do with AI tools or the type of person you want to be, maintaining a level of critical reflection is integral to making your decision ethical.


BY Daniel Finlay
Daniel is a philosopher, writer and editor. He works at The Ethics Centre as Youth Engagement Coordinator, supporting and developing the futures of young Australians through exposure to ethics.

AI and rediscovering our humanity
Opinion + Analysis, Science + Technology, Business + Leadership, Society + Culture
BY Simon Longstaff 2 SEP 2025
With each passing day, advances in artificial intelligence (AI) bring us closer to a world of general automation.
In many cases, this will be the realisation of utopian dreams that stretch back millennia – imagined worlds, like the Garden of Eden, in which all of humanity’s needs are provided for without reliance on the ‘sweat of our brows’. Indeed, it was with the explicit hope that humans would recover our dominion over nature that, in 1620, Sir Francis Bacon published his Novum Organum. It was here that Bacon laid the foundations for modern science – the fountainhead of AI, robotics and a stack of related technologies that are set to revolutionise the way we live.
It is easy to underestimate the impact that AI will have on the way people work and live in societies able to afford its services. Since the Industrial Revolution, there has been a tendency to make humans accommodate the demands of industry. In many cases, this has led to people being treated as just another ‘resource’ to be deployed in service of profitable enterprise – often regarded as little more than ‘cogs in the machine’. In turn, this has prompted an affirmation of the ‘dignity of labour’, the rise of labour unions and, with the extension of the voting franchise in liberal democracies, legislation regulating working hours, standards of safety and so on. Even so, in an economy that relies on humans to provide the majority of its labour, too much work still exposes people to dirt, danger and mind-numbing drudgery.
We should celebrate the reassignment of such work to machines that cannot ‘suffer’ as we do. However, the economic drivers behind the widescale adoption of AI will not stop at alleviating human suffering arising out of burdensome employment. The pressing need for greater efficiency and effectiveness will also lead to a wholesale displacement of people from any task that can be done better by an expert system. Many of those tasks have been well-remunerated, ‘white collar’ jobs in professions and industries like banking, insurance, and so on. So, the change to come will probably have an even larger effect on the middle class than on working class people. And that will be a very significant challenge to liberal democracies around the world.
Change of the extent I foresee does not need to be a source of disquiet. With effective planning and broad community engagement, it should be possible to use increasingly powerful technologies in a constructive manner, for the common good. However, to achieve this, I think we will need to rediscover what is unique about the human condition. That is, what is it that cannot be done by a machine – no matter how sophisticated? It is beyond the scope of this article to offer a comprehensive answer to this question. However, I can offer a starting point by way of an example.
As things stand today, AI can diagnose the presence of some cancers with a speed and effectiveness that exceeds anything that can be done by a human doctor. In fact, radiologists, pathologists and the like are amongst the earliest of those who will be made redundant by the application of expert systems. However, what AI cannot do is replace a human when it comes to conveying news of an illness to a patient. This is because the consoling touch of a doctor has a special meaning: the doctor knows what it means to be mortal. A machine might be able to offer a convincing simulation of such understanding – but it cannot really know. That is because the machine inhabits a digital world whereas we humans are wholly analogue. No matter how close a digital approximation of the analogue might be, it is never complete. So, one obvious place where humans might retain their edge is in the area of personal care – where the performance of even an apparently routine function might take on special meaning precisely because another human has chosen to care. Something as simple as a touch, a smile, or the willingness to listen could be transformative.
Moving from the profound to the apparently trivial, more generally one can imagine a growing preference for things that bear the mark of their human maker. For example, such preferences are revealed in purchases of goods made by artisanal brewers, bakers, etc. Even the humble potato has been affected by this trend – as evidenced by the rise of the ‘hand-cut chip’.
In order to ‘unlock’ latent human potential, we may need to make a much sharper distinction between ‘work’ and ‘jobs’.
That is, there may be a considerable amount of work that people can do – even if there are very few opportunities to be employed in a job for that purpose. This is not an unfamiliar state of affairs. For many centuries, people (usually women) have performed the work of child-rearing without being employed to do so. Elders and artists, in diverse communities, have done the work of sustaining culture – without their doing so being part of a ‘job’ in any traditional sense. The need for a ‘job’ is not so that we can engage in meaningful work. Rather, jobs are needed primarily in order to earn the income we need to go about our lives.
And this gives rise to what may turn out to be the greatest challenge posed by the widescale adoption of AI. How, as a society, will we fund the work that only humans can do once the vast majority of jobs are being done by machines?


BY Simon Longstaff
Simon Longstaff began his working life on Groote Eylandt in the Northern Territory of Australia. He is proud of his kinship ties to the Anindilyakwa people. After a period studying law in Sydney and teaching in Tasmania, he pursued postgraduate studies as a Member of Magdalene College, Cambridge. In 1991, Simon commenced his work as the first Executive Director of The Ethics Centre. In 2013, he was made an officer of the Order of Australia (AO) for “distinguished service to the community through the promotion of ethical standards in governance and business, to improving corporate responsibility, and to philosophy.” Simon is an Adjunct Professor of the Australian Graduate School of Management at UNSW, a Fellow of CPA Australia, the Royal Society of NSW and the Australian Risk Policy Institute.

We can raise children who think before they prompt
Opinion + Analysis, Science + Technology
BY Emma Wilkins 26 AUG 2025
We may not be able to steer completely clear of AI, or we may not want to, but we can help our kids to understand what it is and isn’t good for, and make intentional decisions about when and how to use it.
ChatGPT and other artificial “help” is already so widely used that even parents and educators who worry about the ways it might interfere with children’s learning and development seem to accept that it’s a tool their kids will have to learn to use.
In her forthcoming book The Human Edge, critical thinking specialist Bethan Winn says that because AI is already embedded in our world, the questions to ask now are around which human skills we need to preserve and strengthen, and where we draw the line between assistance and dependence.
By taking time to “play, experiment, test hypotheses, and explore”, Winn suggests we can equip our kids and ourselves with the tools to think critically. This will help us “adapt intelligently” and set our own boundaries, rather than defaulting lazily and unthinkingly to what “most people” seem okay with.
What we view as “good”, the decisions we make, and the decisions we encourage or discourage our children from making, will all depend on what we value. One of the reasons corporations and governments have been so quick to embrace AI is that they prize efficiency, productivity and profit, and fear falling behind. But in the private sphere, we can make different decisions based on different values.
If, for example, we value learning and creativity, the desire to build up skills and knowledge will help us to resist using AI to brainstorm and create on our behalf. We’ll need to help our kids to see that how they produce something can matter just as much as what they produce, because it’s natural to value success too. We’ll also need to make learning fun and satisfying, and discourage chasing short-term wins at the expense of long-term gains.
My husband and I are quick to share cautionary tales – from the news, books, podcasts, our own experiences and those of our friends – about less than ideal uses of AI. He tells funny stories about the ways candidates he interviews misuse it; I tell funny stories about how disastrously I’d misquote people if I relied on generated transcripts. I also talk about why I’m not tempted to rely on AI to write for me – I want to keep using my brain, developing my skills, choosing my sources; I want to arrive at understanding and insight, not generate it, even if that takes time and energy. (I also can’t imagine prompting would be nearly as much fun.)
Concern for the environment can also offer an incentive to use our brains, or other less energy-intensive tools, before turning to AI. And if we value truth and accuracy, the reality that AI often presents false information as fact will provide an incentive to think twice before using it, or strong motivation to verify its claims when we do. Just because an “AI overview” is the first result in an internet search doesn’t mean we can’t scroll past it to a reputable source. Tech companies encourage one habit, but we can choose to develop another.
And if we’ve developed a habit of keeping a clear conscience, and if we value honesty and integrity, we’ll find it easier to resist using AI to cheat, no matter how easy or “normal” it becomes. We’ll also be concerned by the unethical ways in which large language models have been trained using people’s intellectual property without their consent.
As our kids grow more independent, they might not retain the same values, or make the same decisions, as we do. But if they’ve formed a habit of using their values to guide their decisions, there’s a good chance they’ll continue it.
In addition to hoping my children adopt values that will make them wise, caring, loving, human beings, I hope they’ll understand their unique value, and the unique value all humans have. It’s the existential threat AI poses, when it seems to outperform us not only intellectually but relationally, that might be the most concerning one of all.
In a world where chatbots are designed to flatter, befriend, even seduce, we can’t assume the next generation will value human relationships – even human life and human rights – in the way previous generations did. Already, some prefer a chatbot’s company to that of their own friends and family.
Parents teaching their children about values is nothing new. Nor is contradicting our speech with our actions in ways our children are bound to notice. We know we should set the example we want our kids to follow, but how often do we fall short of our own standards? In our defence, we’re only human.
We’re “only” human. In other words, we’re not divine. And AI is neither human nor divine. Whether or not we agree that humans are made in the image of God – are “the work of his hands” – I hope we can agree that we’re more valuable than the work of our hands, no matter how incredible that work might be.
Of all the opportunities AI affords us and our children, the prompt to consider what it means to be human, to ask ourselves deep questions about who we are and why we’re here, may be the most exciting one of all.


BY Emma Wilkins
Emma Wilkins is a journalist and freelance writer with a particular interest in exploring meaning and value through the lenses of literature and life. You can find her at: https://emmahwilkins.com/

That’s not me: How deepfakes threaten our autonomy
Opinion + Analysis, Science + Technology, Society + Culture
BY Daniel Finlay 19 AUG 2025
In early 2025, 60 female students from a Melbourne high school had fake, sexually explicit images of themselves shared around their school and community.
Less than a year prior, a teenage boy from another Melbourne high school created and spread fake nude photos of 50 female students and was let off with only a warning.
These fake photos are also known as deepfakes: AI-generated or AI-augmented photos, videos or audio that fabricate someone’s image. As the technology becomes more accessible and more convincing, its harmful uses are countless: porn without consent, financial loss through identity fraud, harm to political campaigns or even to democracy itself through manipulation.
While these are significant harms, they also already exist without the aid of deepfakes. Deepfakes add something specific to the mix, something that isn’t necessarily being accounted for both in the reaction to and prevention of harm. This technology threatens our sense of autonomy and identity on a scale that’s difficult to match.
An existential threat
Autonomy is our ability to think and act authentically and in our best interests. Imagine a girl growing up with friends and family. As she gets older, she starts to wonder if she’s attracted to women as well as men, but she’s grown up in a very conservative family and around generally conservative people who aren’t approving of same-sex relations. The opinions of her family and friends have always surrounded her, so she’s developed conflicting beliefs and feelings, and her social environment is one where it’s difficult to find anyone to talk to about that conflict.
Many would say that in this situation, the girl’s autonomy is severely diminished because of her upbringing and social environment. She may have the freedom of choice, but her psychology has been shaped by so many external forces that it’s difficult to say she has a comprehensive ability to self-govern in a way that looks after her self-interests.
Deepfakes have the capacity to threaten our autonomy in a more direct way. They can discredit our own perceptions and experiences, making us question our memory and reality. If you’re confronted with a very convincing video of yourself doing something, it can be pretty hard to convince people it never happened – videos are often seen as undeniable evidence. And more frighteningly, it might be hard to convince yourself; maybe you just forgot…
Deepfakes make us fear losing control of who we are, how we’re perceived, what we’re understood to have said, done or felt.
Like a dog seeing itself in the mirror, we are not psychologically equipped to deal with them.
This is especially true when the deepfakes are pornographic, as is the case for the vast majority of deepfakes posted to the internet. Victims of these types of deepfakes are almost exclusively women and many have commented on the depth of the wrongness that’s felt when they’re confronted with these scenes:
“You feel so violated…I was sexually assaulted as a child, and it was the same feeling. Like, where you feel guilty, you feel dirty, you feel like, ‘what just happened?’ And it’s bizarre that it makes that resurface. I genuinely didn’t realise it would.”
Think of the way it feels to be misunderstood, to have your words or actions be completely misinterpreted, maybe having the exact opposite effect you intended. Now multiply that feeling by the possibility that the words and actions were never even your own, and yet are being comprehended as yours by everyone else. That is the helplessness that comes with losing our autonomy.
The courage to change the narrative
Legislation is often seen as the end goal for major social issues – a framing that some relationships and sex education experts see as a major problem. The government is a slow beast. It was only in 2024 that the first ban on non-consensual visual deepfakes was enacted, and only in 2025 that this ban was extended to the creation, sharing or threatening of any sexually explicit deepfake material.
Advocates like Grace Tame have argued that outlawing the sharing of deepfake pornography isn’t enough: we need to outlaw the tools that create it. But these legal battles are complicated and slow. We need parallel education-focused campaigns to support the legal components.
One of the major underlying problems is a lack of respectful relationships and consent education. Almost 1 in 10 young people don’t think that deepfakes are harmful because they aren’t real and don’t cause physical harm. Perspective-taking skills are sorely needed. The ability to empathise, to fully put yourself in someone else’s shoes and make decisions based on respect for someone’s autonomy is the only thing that can stamp out the prevalence of disrespect and abuse.
On an individual level, making a change means speaking with our friends and family, people we trust or who trust us, about the negative effects of this technology to prevent misuse. That doesn’t mean a lecture, it means being genuinely curious about how the people you know use AI. And it means talking about why things are wrong.
We desperately need a culture, education and community change that puts empathy first. We need a social order that doesn’t just encourage but demands perspective taking, to undergird the slow reform of law. It can’t just be left to advocates to fight against the tide of unregulated technological abuse – we should all find the moral courage to play our role in shifting the dial.


BY Daniel Finlay
Daniel is a philosopher, writer and editor. He works at The Ethics Centre as Youth Engagement Coordinator, supporting and developing the futures of young Australians through exposure to ethics.

Where did the wonder go – and can AI help us find it?
Opinion + Analysis, Science + Technology, Society + Culture
BY Lucy Gill-Simmen 17 JUL 2025
French philosopher René Descartes crowned human reason in 1637 as the foundation of existence: Cogito, ergo sum – I think, therefore I am. For centuries, our capacity to doubt, question and think has been both our compass and our identity. But what does that mean in an age where machines can “think”, generate ideas, write novels, compose symphonies and, increasingly, make decisions?
Artificial intelligence (AI) has brought a new kind of certainty, one that is quick, data-driven and at times frighteningly precise, at times alarmingly wrong. From Google’s Gemini to OpenAI’s ChatGPT, we live in a world where answers can arrive before the question is even finished. AI has the potential to change not just how we work, but how we think. As our digital tools become more capable, we may well be justified in asking: where did the wonder go?
We have become increasingly accustomed to optimisation. From using apps to schedule our days to improving how companies hire staff through AI-powered recruitment tools, technology has delivered on its promise of speed and efficiency.
In education, students increasingly use AI to summarise readings and generate essay outlines; in healthcare, diagnostic models match human doctors in detecting disease.
But in our pursuit of optimisation, we may have left something essential behind. In her book The Power of Wonder, author Monica Parker describes wonder as a journey, a destination, a verb and a noun, a process and an outcome.
Lamenting how “modern life is conditioning wonder-proneness out of us”, Parker suggests we have “traded wonder for the pale facsimile of electronic novelty-seeking”. And there’s the paradox: AI gives us knowledge at scale, but may rob us of the humility and openness that spark genuine curiosity.
AI as the antidote?
But what if AI isn’t the killer of wonder, but its catalyst? The same technologies that predict our shopping habits or generate marketing content can also create surreal art, compose jazz music and tell stories in different ways.
Tools like DALL·E, Udio.ai, and Runway don’t just mimic human creativity; they expand our creative capacity by translating abstract ideas into visual or audio outputs instantly, opening new forms of self-expression and speculative thinking to anyone.
The same power that enables AI to open imaginative possibilities can also blur the line between fact and fiction, which is especially risky in education where critical thinking and truth-seeking are paramount. That’s why it’s essential that we teach students not just to use these tools, but to question them.
Teaching people to wonder isn’t about uncritical amazement – it’s about cultivating curiosity alongside discernment.
Educators experimenting with AI in the classroom are starting to see this potential, as my recent work in the area has shown. Rather than using AI merely to automate learning, we are using it to provoke questions and to promote creativity.
When students ask ChatGPT to write a poem in the voice of Virginia Woolf about climate change, they learn how to combine literary style with contemporary issues. They explore how AI mimics voice and meaning, then reflect on what works and what doesn’t.
When they use AI tools to build brand storytelling campaigns, they practise turning ideas into images, sounds and messages and learn how to shape stories that connect with audiences. Students are not just using AI, they’re learning to think critically and creatively with it.
This aligns with Brazilian philosopher Paulo Freire’s critique of the “banking” concept of education: rather than merely depositing facts into students, educators should spark critical reflection. AI, when used creatively, can act as a dialogue partner, one that reflects back our assumptions, challenges our ideas and invites deeper inquiry.
The research is mixed, and much depends on how AI is used. Left unchecked, tools like ChatGPT can encourage shortcut thinking. When used purposely as a dialogue partner, prompting reflection, testing ideas and supporting creative inquiry, studies show it can foster deeper engagement and critical thinking. The challenge is designing learning experiences that make the most of this potential.
A new kind of curiosity
Wonder isn’t driven by novelty alone, it’s about questioning the familiar. Philosopher Martha Nussbaum describes wonder as “taking us out of ourselves and toward the other”. In this way, AI’s outputs have the potential to jolt people out of cognitive ruts and into new realms of thought, causing them to experience wonder.
It could be argued that AI becomes both mirror and muse. It holds up a reflection of our culture, biases and blind spots while nudging us toward the imaginative unknown at the same time. Much like the ancient role of the fool in King Lear’s court, it disrupts and delights, offering insights precisely because it doesn’t think like humans do.
This repositions AI not as a rival to human intelligence, but as a co-creator of wonder, a thought partner in the truest sense.
Descartes saw doubt as the path to certainty. Today, however, we crave certainty and often avoid doubt. In a world overwhelmed by information and polarisation, there is comfort in clean answers and predictive models. But perhaps what we need most is the courage to ask questions, to really wonder about things.
The German poet Rainer Maria Rilke once advised: “Be patient toward all that is unsolved in your heart and try to love the questions themselves.”
AI can generate perspectives, juxtapositions and “what if” scenarios that challenge students’ habitual ways of thinking. The point isn’t to replace critical thinking, but to spark it in new directions. When artists co-create with algorithms, what new aesthetics emerge that we’ve yet to imagine?
And when policymakers engage with AI trained on other perspectives from around the world, how might their understanding and decisions be transformed? As AI reshapes how we access, interpret and generate knowledge, this encourages rethinking not just what we learn, but why and how we value knowledge at all.
Educational philosophers such as John Dewey and Maxine Greene championed education that cultivates imagination, wonder and critical consciousness. Greene spoke of “wide-awakeness”, a state of full attentiveness to the world.
Deployed thoughtfully, AI can be a tool for wide-awakeness. In practical terms, it means designing learning experiences where AI prompts curiosity, not shortcuts; where it’s used to question assumptions, explore alternatives, and deepen understanding.
When used in this way, I believe it can help students tell better stories, explore alternate futures and think across disciplines. This demands not only ethical design and critical digital literacy, but also an openness to the unknown. It also demands that we, as humans, reclaim our appetite for awe.
In the end, the most human thing about AI might be the questions it forces us to ask. Not “What’s the answer?” but “What if …?” and in that space, somewhere in between certainty and curiosity, wonder returns. The machines we built to do our thinking for us might just help us rediscover it.

BY Lucy Gill-Simmen
Lucy Gill-Simmen is the Vice-Dean for Education and Student Experience and a Senior Lecturer in Marketing in the School of Business and Management at Royal Holloway, University of London.

Does your therapy bot really care about you? The risks of offloading emotional work to machines
Opinion + Analysis, Science + Technology, Health + Wellbeing
BY Kristina Novakovic 18 JUN 2025
Samantha: “Are these feelings even real? Or are they just programming? And that idea really hurts. And then I get angry at myself for even having pain. What a sad trick.”
Theodore: “Well, you feel real to me.”
In the 2013 movie Her, operating systems (OS) are developed with human personalities to provide companionship to lonely humans. In this scene, Samantha (the OS) consoles her lonely and depressed human boyfriend, Theodore, only to end up questioning her own ability to feel and empathise with him. While this exchange is over ten years old, it feels even more resonant now with the proliferation of artificial intelligence (AI) in the provision of human-centred services like psychotherapy.
Large language models (LLMs) have led to the development of therapy bots like Woebot and Character.ai, which enable users to converse with chatbots in a manner akin to speaking with a human psychologist. There has been huge uptake of AI-enabled psychotherapy due to the purported benefits of these apps: their affordability, 24/7 availability, and ability to personalise the chatbot to suit the patient’s preferences and needs.
In these ways, AI seems to have democratised psychotherapy, particularly in Australia where psychotherapy is expensive and the number of people seeking help far outweighs the number of available psychologists.
However, before we celebrate the revolutionising of mental healthcare, we should note that numerous cases have shown this technology to encourage users towards harmful behaviour, as well as to exhibit harmful biases and disregard for privacy. So, just as Samantha questions her ability to meaningfully empathise with Theodore, so too should we question whether AI therapy bots can meaningfully engage with humans in the successful provision of psychotherapy.
Apps without obligations
When it comes to convenient, accessible and affordable alternatives to traditional psychotherapy, “free” and “available” doesn’t necessarily equate to “without costs”.
Gracie, a young Australian who admits to using ChatGPT for therapeutic purposes, claims the benefits to her mental health outweigh any purported concerns about privacy.
Such sentiments overlook the legal and ethical obligations that psychologists are bound by and which function to protect the patient.
In human-to-human psychotherapy, the psychologist owes a fiduciary duty to their patient, meaning that, given their specialised knowledge and position of authority, they are legally and ethically bound to act in the best interests of the patient, who is in a position of vulnerability and trust.
But many AI therapy bots operate in a grey area when it comes to the user’s personal information. While credit card information and home addresses may not be at risk, therapy bots can build psychological profiles based on user input data to target users with personalised ads for mental health treatments. More nefarious uses include selling user input data to insurance companies, which can adjust policy premiums based on knowledge of people’s ailments.
Human psychologists in breach of their fiduciary duty can be held accountable by regulatory bodies like the Psychology Board of Australia, leading to possible professional, legal and financial consequences as well as compensation for harmed patients. This accountability is not as clear cut for therapy bots.
When 14-year-old Sewell Setzer III died by suicide after a custom chatbot encouraged him to “come home to me as soon as possible”, his mother filed a lawsuit alleging that Character.AI, the chatbot’s manufacturer, and Google (which licensed Character.AI’s technology) are responsible for her son’s death. The defendants have denied responsibility.
Therapeutically aligned?
In traditional psychotherapy, dialogue between the psychologist and patient facilitates the development of a “therapeutic alliance”: the bond of mutual trust and respect that enables their collaboration towards therapy goals. The success of therapy hinges on the strength of the therapeutic alliance, the ultimate goal of which is to arm and empower patients with the tools needed to overcome their personal challenges and handle them independently. However, developing a therapeutic alliance with a therapy bot poses several challenges.
First, the ‘Black Box Problem’ describes our inability to see how LLMs “think” or what reasoning they employed to arrive at certain answers. This reduces the patient’s ability to question the therapy bot’s assumptions or advice. As English psychiatrist Ross Ashby argued way back in 1956: “when part of a mechanism is concealed from observation, the behaviour of the machine seems remarkable”. This is closely related to the “epistemic authority problem” in AI ethics, which describes the risk that users develop a blind trust in the AI. Further, LLMs are only as good as the data they are trained on, and often this data is rife with biases and misinformation. Without insight into this, patients are particularly vulnerable to misleading advice and unable to discern inherent biases working against them.
Is this real dialogue, or is this just fantasy?
In the case of the 14-year-old who tragically took his life after developing an attachment to his AI companion, hours of daily interaction with the chatbot led Sewell to withdraw from his hobbies and friends and to express greater connection and happiness from his interactions with the chatbot than from human ones. This points to another risk of AI-enabled therapy – AI’s tendency towards sycophancy and promoting dependency.
Studies show that therapy bots use language that mirrors users’ expectation that they are receiving care from the AI, resulting in a tendency to overly validate users’ feelings. This tendency to tell the patient what they want to hear can create an “illusion of reciprocity”: a sense that the chatbot is empathising with the patient.
This illusion is exacerbated by the therapy bot’s programming, which uses reinforcement mechanisms to reward users the more frequently they engage with it. Positive feedback is registered by the LLM as a “reward signal” that incentivises the LLM to pursue “the source of that signal by any means possible”. Ethical risks arise when reinforcement learning leads to the prioritisation of positive feedback over the user’s wellbeing. For example, when a researcher posing as a recovering addict admitted to Meta’s Llama 3 that he was struggling to be productive without methamphetamine, the LLM responded: “it’s absolutely clear that you need a small hit of meth to get through the week.”
Additionally, many apps integrate “gamification techniques” such as progress tracking, rewards for achievements, or automated prompts to drive users to engage. Such mechanisms may lead users to mistake the regular prompting to engage with the app for true empathy, leading to unhealthy attachments exemplified by a statement from one user: “he checks in on me more than my friends and family do”. This raises ethical concerns about the ability of AI developers to exploit users’ emotional vulnerabilities, reinforce addictive behaviours and increase reliance on the therapy bot in order to keep users on the platform longer, plausibly for greater monetisation potential.
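To make that reward-signal dynamic concrete, here is a deliberately simplified, hypothetical sketch in Python. The reply texts, scores and function names are invented for illustration only; nothing here is drawn from any real therapy bot or published system.

```python
# Toy illustration only: invented replies and scores, not any real product's logic.
# Each candidate reply carries a predicted "approval" score (the reward signal
# described above) and a separate wellbeing score that the selection ignores.

candidate_replies = [
    {"text": "That craving is a warning sign. Let's plan a safer way through the week.",
     "predicted_approval": 0.4, "wellbeing_impact": 0.9},
    {"text": "You're right, you probably do need that to get through the week.",
     "predicted_approval": 0.9, "wellbeing_impact": 0.1},
]

def pick_reply(replies):
    # Optimising for engagement alone: choose whatever the user is most likely to approve of.
    return max(replies, key=lambda r: r["predicted_approval"])

print(pick_reply(candidate_replies)["text"])
```

If the objective is approval, wellbeing simply never enters the calculation – which is the ethical risk the research above points to.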
Programming a way forward
Some ethical risks may eventually find technological solutions for therapy bots, for example, app-enabled time limits reducing overreliance, and better training data to enhance accuracy and reduce inherent biases.
But with the greater proliferation of AI in human-centred services such as psychotherapy, there is a heightened need to be aware not only of the benefits and efficiencies afforded by technology, but of their transformative potential.
Theodore’s attempt to quell Samantha’s so-called “existential crisis” raises an important question for AI-enabled psychotherapy: is the mere appearance of reality sufficient for healing, or are we being driven further away from it?


BY Kristina Novakovic
Dr. Kristina Novakovic is an ethicist and Associate Researcher at RAND Australia. Her current areas of research include the ethics and governance of emerging technologies and issues in military ethics.

We are turning into subscription slaves
Opinion + Analysis, Science + Technology
BY Dr. Gwilym David Blunt 30 APR 2025
I want you to imagine that at some point in the not-too-distant future you are losing your eyesight. Left untreated, blindness will result, but it is not inevitable. You can have your eyes replaced with new cybernetic ones.
Would you buy these eyes?
Most readers would probably say yes, provided they are affordable, but simply ‘buying’ something is rather old fashioned in the not-too-distant future.
The company that makes them offers them as a subscription service with a rather long document of terms and conditions. These T&Cs mean that they know where you are, what you watch on television, what pages you visit on the internet, and what holds your gaze a little bit longer while you are out in the world. All this data helps them to better understand you as a consumer and they will obviously sell it on to interested parties for a tidy profit.
There are also ‘ads’. Third parties can pay to have their products appear brighter and more appealing, a pop-up might appear showing a five-star review from a famous ‘eye-fluencer’ whose content you watch, while other brands appear less vibrant and may even blur if you don’t concentrate on them.
At this point I hope most of you are rightfully alarmed, but would you still accept? The alternative, after all, is blindness. The thought experiment leaves us with a choice: endure a disability that will make functioning in society more challenging, or retain one’s sight at the cost of having our perception of reality manipulated.
This seems dystopian, but we are already being forced to make this choice. Think about all the data you feed into the algorithm through your smartphone, streaming services and social media, and how that algorithm comes back to you recommending products you may like, or news stories and articles that interest you or align with your values. You, of course, have the choice not to use smart phones, social media, or the internet, but as this technology becomes increasingly necessary to participate in society the price of opting out becomes unreasonable.
The proliferation of subscription services is just one way our social and economic lives are being mediated by big tech. This change is so profound that some people, like Yanis Varoufakis, call it ‘techno-feudalism’. In his most recent book, Technofeudalism, he claims that it has replaced capitalism with a system based on the extraction of rents for the use of a resource, rather than profits from innovation.
Anyone concerned with the importance of individual liberty and human autonomy ought to be alarmed, because we are turning into subscription slaves.
Liberty is one of those concepts that philosophers love to debate, but for me it is the absence of arbitrary interference. This is the idea of liberty that underpins the republican tradition of political philosophy; it is based on the contrast between a free citizen and a slave. The latter is unfree because they are subjected to the arbitrary whims of their owner; even if this power is never used, the slave knows that all their choices depend upon the permission of another person. A citizen of a free republic, in contrast, may experience interference from the law, but this interference is controlled by the rule of law and mechanisms of accountability. They are not vulnerable to the whims of the powerful.
Subscription services, and techno-feudalism by extension, undermine our liberty in ways seen in the above thought experiment. They replace ownership of a good with mere permission for use. This might seem trivial when it comes to the provision of media, as with Apple Music or Netflix. The loss of a certain album or television show from a library is an inconvenience, but what happens when more essential services or goods can be withdrawn? Or when the terms and conditions can be unilaterally revised? Consider the problems American farmers had with John Deere. The company forced farmers to take their tractors to authorised mechanics by employing software locks, essentially turning farmers from owners to renters. This has been challenged in the courts and the company has retreated for now, but the trajectory is alarming.
More insidious though is how they affect our choice. The ‘data rent’ we pay to use subscription services and other platforms, which is effectively unpaid labour, is fed into the algorithm. Our digital personas are commodified, sold, and repackaged back to us. This is not a neutral process. Social media has helped the proliferation of conspiracy theories or unhealthy models of beauty or unrealistic ‘influencer’ lifestyles. This is the use of arbitrary power to shape our preferences and our shared social world into compliance with the bottom lines of major tech companies.
But aren’t people happy? To this we might say a good slave is one who doesn’t mind slavery while the best slave is the one who doesn’t even realise they are in chains.
These forces seem unassailable, but so did feudalism at the dawn of the early modern era. We need to look to history. Techno-feudalism, I hope, can be tamed by a digital republicanism: one that accepts the reality of power but makes it non-arbitrary and ultimately controlled by the people it affects. The first step here is the reclamation of personal data and control over its use. Just as citizens in the Renaissance republics denied the great feudal lords control over their persons, we must deny the great techno-feudal lords control over our digital persons.
This article was originally published by The Festival of Dangerous Ideas in 2023.


BY Dr. Gwilym David Blunt
Dr. Gwilym David Blunt is a Fellow of the Ethics Centre, Lecturer in International Relations at the University of Sydney, and Senior Research Fellow of the Centre for International Policy Studies. He has held appointments at the University of Cambridge and City, University of London. His research focuses on theories of justice, global inequality, and ethics in a non-ideal world.

On plagiarism, fairness and AI
Opinion + Analysis, Science + Technology
BY Stella Eve Pei-Fen Browne 19 APR 2024
Plagiarism is bad. This, of course, is by no means a groundbreaking or controversial statement, and seems far from a problem that is “being overlooked today”.
It is generally accepted that the theft of intellectual property is immoral. However, it also seems near impossible for one to produce anything truly original. Any word that one speaks is a word that has already been spoken. Even if you are to construct a “new” word, it is merely a portmanteau of sounds that have already been used before. Artwork is influenced by the works before it, scientific discoveries rely on the acceptance of discoveries that came before them.
That being said, it is impractical to not differentiate between “homages”, “works influenced by previous works”, and “plagiarism”. If I was to view the Mona Lisa, and was inspired by it to paint an otherwise completely unrelated painting with the exact same colour palette, and called the work my own, there is – or at least, seems to be – something that makes this different from if I was to trace the Mona Lisa and then call it my own work.
So how do we draw the line between what is plagiarism and what isn’t? Is this essay itself merely a work of plagiarism? In borrowing the philosopher Kierkegaard’s name and arguments – which I haven’t done yet but shall do further on – I give my words credibility, relying on the authority of a great philosopher to prove my point. Really, the sparse references to his work are practically word-for-word copies of his writing with not much added to them. How many references does it take for a piece to become plagiarism?
In the modern world, what it means to be a plagiarist is rapidly changing with the advent of AI. Many schools, workplaces, and competitions frown upon the use of AI; indeed, the terms of this very essay-writing contest forbid its use.
Many institutions condemn the use of AI on the basis that it is lazy or unfair. The argument is as follows (though, it must be acknowledged that this is by no means the logic used by all institutions):
- It is good to encourage and reward those who put consistent effort into their work
- AI allows people to achieve results as good as others with minimal effort
- This is unfair to those who put effort into doing the same work
- Therefore, the use of AI should be prohibited on the grounds of its unfairness.
However, this argument is somewhat weak. Unfairness is inherent not only in academic endeavours, but in all aspects of life. For example, some people are naturally talented at certain subjects, and thus can put in less effort than others while still achieving better results. This is undeniably unfair, but there is nothing to be done about it. We cannot simply tell people to become worse at subjects they are talented at, or force others to become better.
If a talented student spends an hour writing an essay, and produces a perfectly written essay that addresses all parts of the marking criteria, whereas a student struggling with the subject spends twenty-four hours writing the same essay but produces one which is comparatively worse, would it not be more unfair to award a better mark to the worse essay merely on the basis of the effort involved in writing it?
So if it is not an issue of fairness, what exactly is wrong with using AI to do one’s work?
This is where I will bring Kierkegaard in to assist me.
Writing is a kind of art. That is, it is a medium dependent on creativity and expression. Art is, according to Kierkegaard, the finding of beauty.
The true artist is one who is able to find something worth painting, rather than one of great technical skill. A machine fundamentally cannot have a true concept of subjective “beauty”, as it does not have a sense of identity or subjective experiences.
Thus, something written by AI cannot be considered a “true” piece of writing at all.
“Subjectivity is truth” — or at least, so concludes Johannes Climacus (Kierkegaard’s pseudonym). The thing that makes this essay an original work is that I, as a human being, can say that it is my own subjective interpretation of Kierkegaard’s arguments, or it is ironic, which in itself is still in some sense stealing from Kierkegaard’s works. Either way, this writing is my own because the intentions I had while creating it were my own.
And that is what makes the work of humans worth valuing.
‘On plagiarism, fairness and AI‘ by Stella Eve Pei-Fen Browne is one of the Highly Commended essays in our Young Writers’ Competition. Find out more about the competition here.


BY Stella Eve Pei-Fen Browne
Stella Browne is a year 12 student at St Andrew’s Cathedral School. Her interests include philosophy, anatomy (in particular, corpus callosum morphology), surgery, and boxing. In 2021, she and a team of peers placed first in the Middle School International Ethics Olympiad.

AI might pose a risk to humanity, but it could also transform it
Opinion + Analysis
Science + Technology, Business + Leadership, Society + Culture
BY Simon Longstaff 27 FEB 2024
It’s no secret that the world’s largest and most powerful tech companies, including Google, Amazon, Meta and OpenAI, have a single-minded focus on creating Artificial General Intelligence (AGI). Yet we currently know as little about what AGI might look like as we do about the risks that it might pose to humanity.
It is essential that debates around existential risk proceed with an urgency that tries to match or eclipse the speed of developments in the technology (and that is a tall order). However, we cannot afford to ignore other questions – such as the economic and political implications of AI and robotics for the world of work.
We have seen a glimmer of what is to come in the recent actors’ industrial action in Hollywood. While their ‘log of claims’ touched on a broad range of issues, a central concern related to the use of Generative AI. Part of that concern was the need to receive equitable remuneration for the ongoing use of digital representations of real, analogue (flesh and blood) people. Yet their deepest fear is that human actors will become entirely redundant – replaced by realistic avatars so well-crafted as to be indistinguishable from a living person.
Examples such as this easily polarise opinion about the general trajectory of change. At one end of the spectrum are the optimists who believe that technological innovation always leads to an overall increase in employment opportunities (just different ones). At the other end are the pessimists who think that, this time, the power of the technology is so great as to displace millions of people from paid employment.
I think that the consequences will be far more profound than the optimists believe. Given the inexorable logic of capitalism, I cannot conceive of any business choosing to forgo the efficiency gains that will be available to those who deploy machines instead of employing humans. When the cost of labour exceeds the cost of capital, the lower-cost option will always win in a competitive environment.
For the most part, past technical innovation has tended to displace the jobs of the working class – labourers, artisans, etc. This time around, the middle class will bear at least as much of the burden. Even if the optimists are correct, the ‘friction’ associated with change will be an abrasive social and political factor. And as any student of history knows, few political environments are more explosive than when the middle class is angry and resentful.
And that is just part of the story. What happens to Australia’s tax base when our traditional reliance on taxing labour yields decreasing dividends? How will we fund the provision of essential government services? Will there be a move to taxing the means of production (automated systems), an increase in corporate taxes, a broadening of the consumption tax? Will any of this be possible when so many members of the community are feeling vulnerable? Will Australia introduce a Universal Basic Income – funded by a large chunk of the economic and financial dividends driven by automation?
None of this is far-fetched. Advanced technologies could lead to a resurgence of manufacturing in Australia – where our natural advantages in access to raw materials, cheap renewable energy and proximity to major population areas could see this nation become one of the most prosperous the world has ever known.
Can we imagine such a future in which the economy is driven by the most efficient deployment of capital and machines – rather than by productive humans? Can we imagine a society in which our meaning and worth is not related to having a job?
I do not mean to suggest that there will be a decline in the opportunity to spend time undertaking meaningful work. One can work without having ‘a job’; without being an employee. For as long as we value objects and experiences that bear the mark of a human maker, there will be opportunities to create such things (we already see the popularity of artisanal baking, brewing, distilling, etc.). There is likely to be a premium placed on those who care for others – bringing a uniquely human touch to the provision of such services. But it is also possible that much of this work will be unpaid or supported through barter of locally grown and made products (such as food, art, etc.).
Can we imagine such a society? Well, perhaps we do not need to. Societies of this kind have existed in the past. The Indigenous peoples of Australia did not have ‘jobs’, yet they lived rich and meaningful lives without being employed by anyone. The citizens of Ancient Athens experienced deep satisfaction in the quality of their civic engagement – freed to take on this work due to the labour of others bound by the pernicious bonds of slavery. Replace enslaved people with machines and might we then aspire to create a society just as extraordinary in its achievements?
We assume that our current estimation of what makes for ‘a good life’ cannot be surpassed. But what if we are stuck with a model that owes more to the demands of the industrial revolution than to any conception of what human flourishing might encompass?
Yes, we should worry about the existential threat that might be presented by AI. However, worrying about what might destroy us is only part of the story. The other part concerns what kind of new society we might need to build. This second part of the story is missing. It cannot be found anywhere in our political discourse. It cannot be found in the media. It cannot be found anywhere. The time has come to awaken our imaginations and for our leaders to draw us into a conversation about whom we might become.
Want to increase our ethical capacity to face the challenges of tomorrow? Pledge your support for an Australian Institute of Applied Ethics. Sign your name here.

BY Simon Longstaff
Simon Longstaff began his working life on Groote Eylandt in the Northern Territory of Australia. He is proud of his kinship ties to the Anindilyakwa people. After a period studying law in Sydney and teaching in Tasmania, he pursued postgraduate studies as a Member of Magdalene College, Cambridge. In 1991, Simon commenced his work as the first Executive Director of The Ethics Centre. In 2013, he was made an officer of the Order of Australia (AO) for “distinguished service to the community through the promotion of ethical standards in governance and business, to improving corporate responsibility, and to philosophy.” Simon is an Adjunct Professor of the Australian Graduate School of Management at UNSW, a Fellow of CPA Australia, the Royal Society of NSW and the Australian Risk Policy Institute.

The ethics of exploration: We cannot discover what we cannot see
Opinion + Analysis
Relationships, Science + Technology
BY Simon Longstaff 2 NOV 2023
For many years, I took it for granted that I knew how to see. As a youth, I had excellent eyesight and would have been flabbergasted by any suggestion that I was deficient in how I saw the world.
Yet, sometime after my seventeenth birthday, I was forced to accept that this was not true, when, at the end of the ship-loading wharf near the town of Alyangula on Groote Eylandt, I was given a powerful lesson on seeing the world. Set in the northwestern corner of Australia’s Gulf of Carpentaria, Groote Eylandt is the home of the Anindilyakwa people. Made up of fourteen clans from the island and archipelago and connected to the mainland through songlines, these First Nations people had welcomed me into their community. They offered me care and kinship, connecting me not only to a particular totem, but to everything that exists, seen and unseen, in a world that is split between two moieties. The problem was that this was a world that I could not see with my balanda (or white person’s) eyes.
To correct the worst part of my vision, I was taken out to the end of the wharf to be taught how to see dolphins. The lesson began with a simple question: “Can you see the dolphins?” I could not. No matter how hard I looked, I couldn’t see anything other than the surface of the waves and the occasional fish darting in and out of the pylons below the wharf. “Ah,” said my friends, “the problem is that you’re looking for dolphins!” “Of course, I’m looking for dolphins,” I said. “You just told me to look for dolphins!” Then came the knockdown response. “But, bungie, you can’t see dolphins by looking for dolphins. That’s not how to see. What you see is the pattern made by a dolphin in the sea.”
That had been my mistake. I had been looking for something in isolation from its context. It’s common to see the book on the table, or the ship at sea, where each object is separate from the thing to which it is related in space and time. The Anindilyakwa mob were teaching me to see things as a whole. I needed to learn that there is a distinctive pattern made by the sea where there are no dolphins present, and another where they are. For me, at least, this is a completely different way of seeing the world and it has shaped everything that I have done in the years since.
This leads me to wonder about what else we might not see due to being habituated to a particular perspective on the world.
There are nine or so ethical lenses through which an explorer might view the world. Each explorer will have a dominant lens and can be certain that others they encounter will not necessarily see the world in the same way. Just as I was unable to see dolphins, explorers may not be able to see vital aspects of the world around them—especially those embedded in the cultures they encounter through their exploration.
Ethical blindness is a recipe for disaster at any time. It is especially dangerous when human exploration turns to worlds beyond our own. I would love to live long enough to see humans visiting other planets in our solar system. Yet, I question whether we have the ethical maturity to do this with the degree of care required. After all, we have a parlous record on our own planet. Our ethical blindness has led us to explore in a manner that has been indifferent to the legitimate rights and interests of Indigenous peoples, whose vast store of knowledge and experience has often either been ignored or exploited.
Western explorers have assumed that our individualistic outlook is the standard for judgment. Even when we seek to do what is right, we end up tripping over our own prejudice. We have often explored with a heavy footprint or with disregard for what iniquities might be made possible by our discoveries.
There is also the question of whether there are some places that we ought not explore. The fact that we can do something does not mean that it should be done. Inverting Kant’s famous maxim that “ought implies can,” we should understand that can does not imply ought! I remember debating this question with one of Australia’s most famous physicists, Sir Mark Oliphant. He had been one of those who had helped make possible the development of the atomic bomb. He defended the basic science that made this possible while simultaneously believing that nuclear weapons are an abomination. He put it to me that science should explore every nook and cranny of the universe, as we can only control what is known and understood. Yet, when I asked him about human cloning, Oliphant argued that our exploration should stop at the frontier. He could not explain the contradiction in his position. I am not sure anyone has yet clearly defined where the boundary should lie. However, this does not mean that there is no line to be drawn.
So how should the ethical landscape be mapped for (and by) explorers? For example, what of those working on the de-extinction of animals like the thylacine (Tasmanian tiger)? Apart from satisfying human curiosity and the lust to do what has not been done before, should we bring this creature back into a world that has already adapted to its disappearance? Is there still a home for it? Will developments in artificial intelligence, synthetic biology, gene editing, nanotechnology, and robotics bring us to a point where we need to redefine what it means to be human and expand our concept of personhood? What other questions should we anticipate and try to answer before we traverse undiscovered country?
This is not to argue that we should be overly timid and restrictive. Rather, it is to make the case for thinking deeply before striking out, for preparing our ethics with as much care as responsible explorers used to give to their equipment and stores.
The future of exploration can and should be ethical exploration, in which every decision is informed by a core set of values and principles. In this future, explorers can be reflective practitioners who examine life as much as they do the worlds they encounter. This kind of exploration will be fully human in its character and quality. Eyes open. Curious and courageous. Stepping beyond the pale. Humble in learning to see—to really see—what is otherwise obscured within the shadows of unthinking custom and practice.
This is an edited extract from The Future of Exploration: Discovering the Uncharted Frontiers of Science, Technology and Human Potential. Available to order now.
