
Ask an ethicist: Should I use AI for work?
Opinion + Analysis · Science + Technology · Business + Leadership
BY Daniel Finlay 8 SEP 2025
My workplace is starting to implement AI in a lot of ways. I’ve heard so many mixed messages about how good or bad it is. I don’t know whether I should use it, or to what extent. What should I do?
Artificial intelligence (AI) is quickly becoming unavoidable in our daily lives. Google something, and you’ll be met with an “AI overview” before you can read the first result. Open almost any social media platform and you’ll find an AI chatbot, or be prompted to use the platform’s proprietary AI to help you write your message or create an image.
Unsurprisingly, this ubiquity has rapidly extended to the workplace. So, what do you do if AI tools are becoming the norm but you’re not sure how you feel about them? Maybe you’re part of the 36% of Australians who aren’t sure if the benefits of AI outweigh the harms. Luckily, there are a few ethical frameworks to help guide your reasoning.
Outcomes
A lot of people care about what AI is going to do for them, or conversely how it will harm them or those they care about. Consequentialism is a framework that tells us to think about ethics in terms of outcomes – most often the outcomes of our actions, though there are many varieties of consequentialism.
Some tell us to care about the outcomes of rules we make, beliefs or attitudes we hold, habits we develop or preferences we have (or all of the above!). The common thread is the idea that we should base our ethics around trying to make good things happen.
This might seem simple enough, but ethics is rarely simple.
AI usage is already having, and is likely to keep having, many competing consequences: short and long-term, direct and indirect.
Say your workplace is starting to use AI tools. Maybe they’re using email and document summaries, or using AI to create images, or using ChatGPT like they would use Google. Should you follow suit?
If you look at the direct consequences, you might decide yes. Plenty of AI tools give you an edge in the workplace or give businesses a leg up over others. Being able to analyse data more quickly, get assistance writing a document or generate images out of thin air has a pretty big impact on our quality of life at work.
On the other hand, there are some potentially serious direct consequences of relying on AI too. Most public large language model (LLM) chatbots have had countless issues with hallucinations – the phenomenon where a model confidently produces false or fabricated information. Because chatbots are so heavily anthropomorphised, we tend to extend them an even higher degree of confidence and trust, so these hallucinations can be very damaging to people on both a personal and business level.
Indirect consequences need to be considered too. The exponential increase in AI use, particularly LLM-based generative AI like ChatGPT, threatens to undo the work of climate change solutions by more than doubling our electricity needs, increasing our water footprint and greenhouse gas emissions, and putting unneeded pressure on the transition to renewable energy. That energy usage is predicted to double or triple again over the next few years.
How would you weigh up those consequences against the personal consequences for yourself or your work?
Rights and responsibilities
A different way of looking at things, which can often help us bridge the gap between competing sets of consequences, is deontology. This is an ethical framework that focuses on rights (ways we should be treated) and duties (ways we should treat others).
One of the major challenges generative AI has brought to the fore is how to protect creative rights while still developing the technology at scale. AI isn’t capable of creating ‘new’ things in the way humans can, drawing on personal experience to shape their creations. Generative AI is ‘trained’ by giving models access to trillions of data points – and in the case of generative AI, those data points are real people’s writing, artwork, music and more. OpenAI (creator of ChatGPT) has explicitly said that it would be impossible to create these tools without access to and use of copyrighted material.
In 2023, the Writers Guild of America went on a five-month strike to secure better pay and protections against the exploitation of their material in AI model training and subsequent job replacement or pay decreases. In 2025, Anthropic settled for $1.5 billion in a lawsuit over their illegal piracy of over 500,000 books used to train their AI model.
Creative rights present a fundamental challenge to the ethics of using generative AI, especially at work. The ability to create imagery for free or at a very low cost with AI means businesses now have the choice to sidestep hiring or commissioning real artists – an especially fraught decision point if the imagery is being used with a profit motive, as it is arguably being made with the labour of hundreds or thousands of uncompensated artists.
What kind of person do you want to be?
Maybe you’re not in an office, though. Maybe your work is in a lab or field research, where AI tools are being used to do things like speed up the development of life-changing drugs or enable better climate change solutions.
Intuitively, these uses might feel more ethically defensible, and a virtue ethics point of view can help make sense of that. Virtue ethics is about finding the valuable middle ground between extremes of character – the virtues that a good person, or the best version of yourself, would embody.
On the one hand, it’s easy to see how this framework would encourage use of AI that helps others. A strong sense of purpose, altruism, compassion, care, justice – these are all virtues that can be lived out by using AI to make life-changing developments in science and medicine for the benefit of society.
On the other hand, generative AI puts another spanner in the works. There is a growing body of research on the negative effects of generative AI on our ability to think critically. Overreliance on and overconfidence in AI chatbots can erode critical thinking, problem-solving and independent decision-making skills. With this in mind, virtue ethics could also lead us to be wary of how we use particular kinds of AI, lest we become intellectually lazy or incompetent.
The devil in the detail
AI, in all its various capacities, is revolutionising the way we work and is clearly here to stay. Whether you opt in or not is hopefully still up to you in your workplace, but by drawing on a few different ethical frameworks, you can prioritise your values and principles and decide whether, and what type of, AI usage feels right to you and your purpose.
Whether you’re looking at the short and long-term impacts of frequent AI chatbot usage, the rights people have to their intellectual property, the good you can do with AI tools or the type of person you want to be, maintaining a level of critical reflection is integral to making your decision ethical.


BY Daniel Finlay
Daniel is a philosopher, writer and editor. He works at The Ethics Centre as Youth Engagement Coordinator, supporting and developing the futures of young Australians through exposure to ethics.