On plagiarism, fairness and AI
Opinion + Analysis, Science + Technology
BY Stella Eve Pei-Fen Browne 19 APR 2024
Plagiarism is bad. This, of course, is by no means a groundbreaking or controversial statement, and seems far from a problem that is “being overlooked today”.
It is generally accepted that the theft of intellectual property is immoral. However, it also seems near impossible for one to produce anything truly original. Any word that one speaks is a word that has already been spoken. Even if you were to construct a “new” word, it is merely a portmanteau of sounds that have already been used before. Artwork is influenced by the works that came before it, and scientific discoveries rely on the acceptance of discoveries that came before them.
That being said, it is impractical not to differentiate between “homages”, “works influenced by previous works”, and “plagiarism”. If I were to view the Mona Lisa, be inspired by it to paint an otherwise completely unrelated painting with the exact same colour palette, and call the work my own, there is – or at least seems to be – something that makes this different from if I were to trace the Mona Lisa and then call it my own work.
So how do we draw the line between what is plagiarism and what isn’t? Is this essay itself merely a work of plagiarism? In borrowing the philosopher Kierkegaard’s name and arguments – which I haven’t done yet but shall do further on – I give my words credibility, relying on the authority of a great philosopher to prove my point. Really, the sparse references to his work are practically word-for-word copies of his writing with not much added to them. How many references does it take for a piece to become plagiarism?
In the modern world, what it means to be a plagiarist is rapidly changing with the advent of AI. Many schools, workplaces, and competitions frown upon the use of AI; indeed, the terms of this very essay-writing contest forbid its use.
Many institutions condemn the use of AI on the basis that it is lazy or unfair. The argument is as follows (though, it must be acknowledged that this is by no means the logic used by all institutions):
- It is good to encourage and reward those who put consistent effort into their work
- AI allows people to achieve results as good as others with minimal effort
- This is unfair to those who put effort into doing the same work
- Therefore, the use of AI should be prohibited on the grounds of its unfairness.
However, this argument is somewhat weak. Unfairness is inherent not only in academic endeavours, but in all aspects of life. For example, some people are naturally talented at certain subjects, and thus can put in less effort than others while still achieving better results. This is undeniably unfair, but there is nothing to be done about it. We cannot simply tell people to become worse at subjects they are talented at, or force others to become better.
If a talented student spends an hour writing an essay, and produces a perfectly written essay that addresses all parts of the marking criteria, whereas a student struggling with the subject spends twenty-four hours writing the same essay but produces one which is comparatively worse, would it not be more unfair to award a better mark to the worse essay merely on the basis of the effort involved in writing it?
So if it is not an issue of fairness, what exactly is wrong with using AI to do one’s work?
This is where I will bring Kierkegaard in to assist me.
Writing is a kind of art. That is, it is a medium dependent on creativity and expression. Art is, according to Kierkegaard, the finding of beauty.
The true artist is one who is able to find something worth painting, rather than one of great technical skill. A machine fundamentally cannot have a true concept of subjective “beauty”, as it does not have a sense of identity or subjective experiences.
Thus, something written by AI cannot be considered a “true” piece of writing at all.
“Subjectivity is truth” — or at least, so concludes Johannes Climacus (Kierkegaard’s pseudonym). The thing that makes this essay an original work is that I, as a human being, can say either that it is my own subjective interpretation of Kierkegaard’s arguments, or that it is ironic, which in itself is still in some sense stealing from Kierkegaard’s works. Either way, this writing is my own because the intentions I had while creating it were my own.
And that is what makes the work of humans worth valuing.
‘On plagiarism, fairness and AI’ by Stella Eve Pei-Fen Browne is one of the Highly Commended essays in our Young Writers’ Competition. Find out more about the competition here.
BY Stella Eve Pei-Fen Browne
Stella Browne is a year 12 student at St Andrew’s Cathedral School. Her interests include philosophy, anatomy (in particular, corpus callosum morphology), surgery, and boxing. In 2021, she and a team of peers placed first in the Middle School International Ethics Olympiad.
The ethics of exploration: We cannot discover what we cannot see
Opinion + Analysis, Relationships, Science + Technology
BY Simon Longstaff 2 NOV 2023
For many years, I took it for granted that I knew how to see. As a youth, I had excellent eyesight and would have been flabbergasted by any suggestion that I was deficient in how I saw the world.
Yet, sometime after my seventeenth birthday, I was forced to accept that this was not true, when, at the end of the ship-loading wharf near the town of Alyangula on Groote Eylandt, I was given a powerful lesson on seeing the world. Set in the northwestern corner of Australia’s Gulf of Carpentaria, Groote Eylandt is the home of the Anindilyakwa people. Made up of fourteen clans from the island and archipelago and connected to the mainland through songlines, these First Nations people had welcomed me into their community. They offered me care and kinship, connecting me not only to a particular totem, but to everything that exists, seen and unseen, in a world that is split between two moieties. The problem was that this was a world that I could not see with my balanda (or white person’s) eyes.
To correct the worst part of my vision, I was taken out to the end of the wharf to be taught how to see dolphins. The lesson began with a simple question: “Can you see the dolphins?” I could not. No matter how hard I looked, I couldn’t see anything other than the surface of the waves and the occasional fish darting in and out of the pylons below the wharf. “Ah,” said my friends, “the problem is that you’re looking for dolphins!” “Of course, I’m looking for dolphins,” I said. “You just told me to look for dolphins!” Then came the knockdown response. “But, bungie, you can’t see dolphins by looking for dolphins. That’s not how to see. What you see is the pattern made by a dolphin in the sea.”
That had been my mistake. I had been looking for something in isolation from its context. It’s common to see the book on the table, or the ship at sea, where each object is separate from the thing to which it is related in space and time. The Anindilyakwa mob were teaching me to see things as a whole. I needed to learn that there is a distinctive pattern made by the sea where there are no dolphins present, and another where they are. For me, at least, this is a completely different way of seeing the world and it has shaped everything that I have done in the years since.
This leads me to wonder about what else we might not see due to being habituated to a particular perspective on the world.
There are nine or so ethical lenses through which an explorer might view the world. Each explorer will have a dominant lens and can be certain that others they encounter will not necessarily see the world in the same way. Just as I was unable to see dolphins, explorers may not be able to see vital aspects of the world around them—especially those embedded in the cultures they encounter through their exploration.
Ethical blindness is a recipe for disaster at any time. It is especially dangerous when human exploration turns to worlds beyond our own. I would love to live long enough to see humans visiting other planets in our solar system. Yet, I question whether we have the ethical maturity to do this with the degree of care required. After all, we have a parlous record on our own planet. Our ethical blindness has led us to explore in a manner that has been indifferent to the legitimate rights and interests of Indigenous peoples, whose vast store of knowledge and experience has often either been ignored or exploited.
Western explorers have assumed that our individualistic outlook is the standard for judgment. Even when we seek to do what is right, we end up tripping over our own prejudice. We have often explored with a heavy footprint or with disregard for what iniquities might be made possible by our discoveries.
There is also the question of whether there are some places that we ought not explore. The fact that we can do something does not mean that it should be done. Inverting Kant’s famous maxim that “ought implies can,” we should understand that can does not imply ought! I remember debating this question with one of Australia’s most famous physicists, Sir Mark Oliphant. He had been one of those who had helped make possible the development of the atomic bomb. He defended the basic science that made this possible while simultaneously believing that nuclear weapons are an abomination. He put it to me that science should explore every nook and cranny of the universe, as we can only control what is known and understood. Yet, when I asked him about human cloning, Oliphant argued that our exploration should stop at the frontier. He could not explain the contradiction in his position. I am not sure anyone has yet clearly defined where the boundary should lie. However, this does not mean that there is no line to be drawn.
So how should the ethical landscape be mapped for (and by) explorers? For example, what of those working on the de-extinction of animals like the thylacine (Tasmanian tiger)? Apart from satisfying human curiosity and the lust to do what has not been done before, should we bring this creature back into a world that has already adapted to its disappearance? Is there still a home for it? Will developments in artificial intelligence, synthetic biology, gene editing, nanotechnology, and robotics bring us to a point where we need to redefine what it means to be human and expand our concept of personhood? What other questions should we anticipate and try to answer before we traverse undiscovered country?
This is not to argue that we should be overly timid and restrictive. Rather, it is to make the case for thinking deeply before striking out, for preparing our ethics with as much care as responsible explorers used to give to their equipment and stores.
The future of exploration can and should be ethical exploration, in which every decision is informed by a core set of values and principles. In this future, explorers can be reflective practitioners who examine life as much as they do the worlds they encounter. This kind of exploration will be fully human in its character and quality. Eyes open. Curious and courageous. Stepping beyond the pale. Humble in learning to see—to really see—what is otherwise obscured within the shadows of unthinking custom and practice.
This is an edited extract from The Future of Exploration: Discovering the Uncharted Frontiers of Science, Technology and Human Potential. Available to order now.
BY Simon Longstaff
After studying law in Sydney and teaching in Tasmania, Simon pursued postgraduate studies in philosophy as a Member of Magdalene College, Cambridge. In 1991, Simon commenced his work as the first Executive Director of The Ethics Centre. In 2013, he was made an Officer of the Order of Australia (AO) for “distinguished service to the community through the promotion of ethical standards in governance and business, to improving corporate responsibility, and to philosophy.”
One giant leap for man, one step back for everyone else: Why space exploration must be inclusive
Opinion + Analysis, Business + Leadership, Science + Technology
BY Dr Elise Stephenson and Isabella Vacaflores 14 SEP 2023
Greater representation of women and minoritised groups in the space sector would not only be ethical but could also have great benefits for all of humanity.
The systematic exclusion of women – and other minoritised groups – from all parts of the space sector gravely impacts on our future ability to ‘make good’ our space ambitions and live out the principle of equality. Minoritised groups refer to groups that might not be a minority in the global population (women, for instance) but are minoritised in a particular context, like the space sector.
Currently, more than half of humanity is treated as an afterthought for space tech billionaires and some government space agencies, amplifying dangerous warning signs already heralded by space ethicists and philosophers, including us. If these warning signs are ignored, we are set to repeat earthly inequalities in space.
Addressing the kind of society we want in space is crucial to fair and good decision making that benefits all, helping to mitigate risk and protecting future generations.
What does diversity in the space sector look like right now?
Research reveals that women have held 1 in 5 space sector positions over the past three decades. Across much of the sector, representation is at best marginally improving in public sector roles, whilst at worst, stagnating or regressing over a period where we should have seen the greatest improvements.
For example, of the 634 people that have gone to space, just over 10% have been women.
Our research has found that from the publicly available data, only 3 out of 70 national space agencies have achieved gender parity in leadership. Both horizontal and vertical segregation limit women – even in agencies that are doing well – pushing them down the organisational hierarchy and pigeonholing them out of leadership and operational, engineering and technical roles, which are often better paid and have higher status.
It is not just the most visible part of the space sector that is struggling to address the issue of gender inequality. Exclusion and discrimination have been reported by women occupying roles from astrophysicists and aerospace engineers to space lawyers and academics.
Prejudice is a moral blight for many workplaces, not least because it holds industry back from realising its fullest potential. Research finds that more diverse teams typically realise this potential better and are more innovative – having a diverse mix of perspectives, experience and knowledge ultimately helps conquer groupthink and allows a broader range of opportunities and complications to be considered. In the intelligence sector, diversity further helps in “limiting un-predictability by foreseeing or forecasting multiple, different futures”, which may be similarly relevant for the space sector.
Space exploration is a gamble, but getting more women and people from diverse backgrounds into the space sector will improve humanity’s odds.
In the context of space, failing to act on such insights would be morally irresponsible, given the risk taken by the sector on humanity’s behalf every single day.
Space is defined by the Outer Space Treaty as a global commons, meaning it is a resource that is unowned by any one nation or person – yet critically, able to be ‘used’ by any, so long as they have the resources to do so. As it stands, the cost and inaccessibility of space technology means that only a privileged few individuals, companies, and countries are currently represented in the space domain. In broader society, these privileged few are predominantly white, wealthy, connected men.
Being ‘good ancestors’ in the new space age
We might consider the principle of intergenerational justice espoused by governments, or the ‘cathedral thinking’ metaphor used by Greta Thunberg to describe the trade-off between small sacrifices now and huge benefits moving forward.
To further her metaphor, our ethical legacy is not shaped solely by our past, but also by our ability to be regarded as ‘good ancestors’ for future generations. These arguments are already being spurred in Australia by movements like EveryGen, Orygyn, the Foundation for Young Australians and Think Forward (among others) who are aiming for more intergenerational policymaking across many domains.
As the philosopher Hannah Pitkin notes, our moral failings arise not from malevolent intent, but from refusing to think critically about what we are doing.
A new space ethics
Whilst it will take some time to see gender parity occur in the space industry even if quotas or similar approaches are taken, there are still ‘easy wins’ to be had that would help elevate women’s and minoritised voices.
We found many women in the space industry who were interested in forming networks both within and between agencies and organisations. These typically serve a wide range of functions, from networking in the strict sense of the word to enabling a safe space to discuss diversity and inclusion or drive advocacy efforts. Research shows diversity networks have benefits for career development, psychological safety and community building.
Beyond this individualised, sometimes siloed approach, organisations also need to deeply commit to tackling inequality at a systematic level and invest in diversity, inclusion, belonging and equity policies which many in the space sector currently lack. Without transparently defined goals and targets in this area, it is difficult for organisations to measure their progress and, moreover, for us to hold them accountable.
Finally, looking to the next generation, the industry needs to engage a diverse range of students from different educational and demographic backgrounds. This means offering internships and educational opportunities to students who might not fit the current ‘mould’ of what someone working in the space sector looks like. For instance, the National Indigenous Space Academy offers First Nations STEM students a chance to experience life at NASA, whilst other initiatives across the sector include detangling the STEM-space link to demonstrate the range of roles and opportunities available in the space sector, even for non-STEM career paths.
At the height of the Soviet-American space race, JFK said: “we choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard”. Transforming the exclusive structures and patriarchal history of the space sector may not always be a simple task, but it is critical on both a practical and a moral level.
BY Dr Elise Stephenson
Dr Elise Stephenson is the Deputy Director of the Global Institute for Women's Leadership at the Australian National University. Elise is a multi award-winning researcher and entrepreneur focused on gender, sexuality and leadership in frontier international relations, from researching space policy, to AI, climate, diplomacy, national security and intelligence, security vetting, international representation, and the Asia Pacific. She is a Gender, Space and National Security Fellow of the National Security College, an adjunct in the Griffith Asia Institute and a Fulbright Fellow of the Henry M Jackson School of International Studies at the University of Washington.
BY Isabella Vacaflores
Isabella is currently working as a research assistant at the Global Institute for Women’s Leadership. She has previously held research positions at Grattan Institute, Department of Prime Minister & Cabinet and the School of Politics and International Relations at the Australian National University. She has won multiple awards and scholarships, including recently being named the 2023 Australia New Zealand Boston Consulting Group Women’s Scholar, for her efforts to improve gender, racial and socio-economic equality in politics and education.
People first: How to make our digital services work for us rather than against us
Opinion + Analysis, Business + Leadership, Science + Technology
BY Cris Parker 5 SEP 2023
Advancements in technology have shown greater efficiency and benefits for many. But if we don’t invest in human-centric thinking, we risk leaving our most vulnerable behind.
As businesses from the private and public sector continue to invest in improved digital processes, tools and services, we are seeing users empowered with greater information, accessibility and connectivity.
However, as critical services for healthcare, lifestyle and support systems have become increasingly digitised, the barriers facing vulnerable, remote or digitally excluded individuals must also be weighed against these benefits.
It’s no wonder the much-maligned MyGov app underwent an audit review earlier this year, resulting in a major overhaul of the service. Reading through the chat rooms and forums where customers share their experiences, you find comments like these filling the pages:
“…If you’re trying to do something online, even if you’ve got a super reliable connection, you can spend hours wandering around in a fog because there’s no transparency about – they’re not trying to make it easy for people.”
“You need to have acquired the technology to do it, but you get on their websites, and I don’t know who designs their systems. But you’ve got to be psychic to be able to follow what they want. In order to get what you need, you’ve got to run through this maze, it’s complete bullshit.”
“And you’re already putting elderly people and keeping them in a home, it all goes online and digital, they stop having that outside interaction. It’s another chip away of community. That’s where the isolation comes in.”
Reading these statements, you get a sense of the frustration and confusion felt, not just due to time wasted but also the loss of a personal connection and agency. These experiences can lead users to doubt the reliability of business’ processes and chip away at the trust in their systems.
The Australian Digital Inclusion Index cites digital inclusion as “one of the most important challenges facing Australia.” Its 2023 key findings show that digital inclusion remains closely linked to age and increases with education, employment and income.
So, as technology becomes more ubiquitous in our lives, how do we maintain human-centric thinking? How do we avoid exacerbating existing inequalities while maintaining respect, autonomy and dignity for all?
Looking for some answers, I spoke to Jordan Hatch, a First Assistant Secretary at the Australian Government and someone who is passionate about designing for user needs. Hatch is currently working with the care and support economy task force in the Department of Prime Minister and Cabinet, exploring some of the challenges and opportunities across the care sector.
Hatch is acutely aware that amidst this digital transformation, the welfare of vulnerable individuals remains a priority. He explains human-centered design principles must play a crucial role in shaping digital solutions. Importantly, understanding the user base, including different cohorts and their specific needs, is foundational to designing inclusive services. Extensive research and involvement of First Nations communities, individuals with low digital literacy, or limited internet access are also essential to developing solutions that address their unique challenges.
Hatch explains how technology is transforming the face-to-face experience. He says the digitisation of services has prompted a re-evaluation of the role of physical service centres. The integration of digital and in-person channels is allowing for streamlined processes and improved customer experiences.
A great example is Service NSW, which has become a centralised hub offering access to several support services. The availability of digital options has not led to the exclusion of those who prefer face-to-face interactions. On the contrary, it has allowed for a more comprehensive and improved service for individuals seeking in-person assistance. The digital transformation has become a means to augment the service experience, rather than replacing it. When visiting a Service NSW centre, you are met by a representative who directs you to a computer and, if required, walks you through the online process, offering personalised support. This evolution caters to diverse needs, ensuring that the face-to-face experience remains valuable while offering alternative modes of engagement.
Of course, increasing the capability and use of technology has its downside. Digital interactions have become a societal norm and an opportunity for scams. This has led to a number of digital hoops users are obliged to jump through in an attempt to protect their data and privacy. This process can impact users’ wellbeing as passwords are lost or forgotten and the digital path is often confusing.
Hatch explains how, over this learning journey, his perception of the relationship between security and usability shifted. Previously, it was believed that security and usability were at opposite ends of the spectrum—either systems were easy to use but lacked security, or they were secure but difficult to navigate. However, recent technological advancements have challenged this notion. Innovations emerged offering enhanced security measures that were also user-friendly. For example, modern password guidance promotes the use of longer passphrases consisting of simple words, resulting in both stronger security and greater user-friendliness.
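To make that claim concrete, here is a minimal, illustrative sketch (not from the article; it assumes truly random selection, an 8-character password drawn from the roughly 94 printable ASCII characters, and a 5-word passphrase drawn from a hypothetical 7,776-word diceware-style list) comparing the guessing entropy of the two approaches:

```python
import math

# Illustrative only: entropy figures assume each symbol or word is chosen uniformly at random.

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    # A random password of `length` symbols from an alphabet of `alphabet_size` characters.
    return length * math.log2(alphabet_size)

def passphrase_entropy_bits(words: int, wordlist_size: int) -> float:
    # A random passphrase of `words` words drawn from a list of `wordlist_size` words.
    return words * math.log2(wordlist_size)

# An 8-character password from ~94 printable ASCII characters...
print(f"8-char random password: ~{password_entropy_bits(8, 94):.1f} bits")     # ~52.4 bits
# ...versus a 5-word passphrase from a 7,776-word (diceware-style) list.
print(f"5-word passphrase:      ~{passphrase_entropy_bits(5, 7776):.1f} bits")  # ~64.6 bits
```

On these assumptions the passphrase wins on both counts: more entropy for an attacker to search, and plain words that are far easier for a person to recall than a string of arbitrary symbols.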
Technological transformation is a process and technology is not a panacea – it is a steppingstone and an opportunity for simplification and identifying unique solutions. What we can’t do is allow technology to overshadow the need to address regulation and the complexity it can create.
Hatch shares an insight from Edward Santow, Australia’s former Human Rights Commissioner, about the prevalent mindset of the technology world: “move fast and break things”. This is often seen as innovation, and an opportunity to learn from failure and adapt. However, in the realm of public service, where real people’s lives are affected, the stakes are higher. The margin for error in this context can have tangible consequences for vulnerable individuals.
Slowing down is not necessarily the solution, particularly when you see or experience the harm caused by a misalignment between requirements and the capacity to meet them. It is the work Jordan Hatch describes, where the issue is not when but how services are designed and delivered, that will make the difference.
The intersection between technology and policy creates an opportunity for regulators and digital experts to come together. Rather than digitise what exists, they can identify the unnecessary complexities and streamline the rules. This then creates a win-win situation – through the lens of human-centred design, it facilitates the digitisation process and creates a simpler regulatory framework for those who choose not to use a digital process.
With this approach we can design technology to work for us rather than against us.
BY Cris Parker
Cris Parker is Head of The Ethics Alliance and a Director of the Banking and Finance Oath.
The terrible ethics of nuclear weapons
Opinion + Analysis, Science + Technology, Society + Culture
BY Dr. Gwilym David Blunt 7 AUG 2023
“I have blood on my hands.” This is what Robert Oppenheimer, the mastermind behind the Manhattan Project, told US President Harry Truman after the bombs he created were dropped on Hiroshima and Nagasaki, killing an estimated 226,000 people.
The President reassured him, but in private was incensed by the ‘cry-baby scientist’ for his guilty conscience and told Dean Acheson, his Secretary of State, “I don’t want to see that son of a bitch in this office ever again.”
With the anniversary of the bombings falling this week, while Christopher Nolan’s Oppenheimer is in cinemas, it is a good moment to reflect on the two people most responsible for the creation and use of nuclear weapons: one wracked with guilt, the other with a clean conscience.
Who is right?
In his speech announcing the destruction of Hiroshima and Nagasaki, Truman provided the base from which apologists sought to defend the use of nuclear weapons: it “shortened the agony of war.”
It is a theme developed by American academic Paul Fussell in his essay Thank God for the Atom Bomb. Fussell, a veteran of the European Theatre, defended the use of nuclear weapons because it spared the bloodshed and trauma of a conventional invasion of the Japanese home islands.
Military planners believed that this could have resulted in over a million casualties and hundreds of thousands of deaths of service personnel, to say nothing of the effect on Japanese civilians. In the lead up to the invasion the Americans minted half a million Purple Hearts, medals for those wounded in battle; this supply has lasted through every conflict since. We can see here the simple but compelling consequentialist reasoning: war is hell and anything that brings it to an end is worthwhile. Nuclear weapons, while terrible, saved lives.
The problem is that this argument rests on a false dichotomy. The Japanese government knew they had lost the war; weeks before the bombings the Emperor instructed his ministers to seek an end to the war via the good offices of the Soviet Union or another neutral state. There was a path to a negotiated peace. The Allies, however, wanted unconditional surrender.
We might ask whether this was a just war aim, but even if it was, there were alternatives: less indiscriminate aerial attacks and a naval blockade of war materials into Japan would have eventually compelled surrender. The point here isn’t to play at ‘armchair general’, but rather to recognise that the path to victory was never binary.
However, this reply is inadequate, because it doesn’t address the general question about the use of nuclear weapons, only the specific instance of their use in 1945. There is a bigger question: is it ever ethical to use nuclear weapons? The answer must be no.
Why?
Because, to paraphrase American philosopher Robert Nozick, people have rights and there are certain things that cannot be done to them without violating those rights. One such right must be against being murdered, because that is what the wrongful killing of a person is. It is murder. If we have these rights, then we must also be able to protect them, and just as individuals can defend themselves so too can states, as the guarantor of their citizens’ rights. This is a standard categorical check against the consequentialist reasoning of the military planners.
The horror of war is that it creates circumstances where ordinary ethical rules are suspended, where killing is not wrongful.
A soldier fighting in a war of self-defence may kill an enemy soldier to protect themselves and their country. However, this does not mean that all things are permitted. The targeting of non-combatants such as wounded soldiers, civilians, and especially children is not permitted, because they pose no threat.
We can draw an analogy with self-defence: if someone is trying to kill you and you kill them while defending yourself you have not done anything wrong, but if you deliberately killed a bystander to stop your attacker you have done something wrong because the bystander cannot be held responsible for the actions of your assailant.
It is a terrible reality that non-combatants die in war and sometimes it is excusable, but only when their deaths were not intended and all reasonable measures were taken to prevent them. Philosopher Michael Walzer calls this ‘double intention’; one must intend not to harm non-combatants as the primary element of your act and if it is likely that non-combatants will be collaterally harmed you must take due care to minimise the risks (even if it puts your soldiers at risk).
Hiroshima does not pass the double intention test. It is true that Hiroshima was a military target and therefore legitimate, but due care was not taken to ensure that civilians were not exposed to unnecessary harm. Nuclear weapons are simply too indiscriminate and their effects too terrible. There is almost no scenario for their use that does not include the foreseeable and avoidable deaths of non-combatants. They are designed to wipe out population centres, to kill non-combatants. At Hiroshima, for every soldier killed there were ten civilian deaths. Nuclear weapons have only become more powerful since then.
Returning to Oppenheimer and Truman, it is impossible not to feel that the former was in the right. Oppenheimer’s subsequent opposition to the development of more powerful nuclear weapons and support of non-proliferation, even at the cost of being targeted in the Red Scare, was a principled attempt to make amends for his contribution to the Manhattan Project.
The consequentialist argument that the use of nuclear weapons was justified because in shortening the war it saved lives and minimised human suffering can be very appealing, but it does not stand up to scrutiny. It rests on an oversimplified analysis of the options available to allied powers in August 1945; and, more importantly, it is an intrinsic part of the nature of nuclear weapons that their use deliberately and avoidably harms non-combatants.
If you are still unconvinced, imagine if the roles were reversed in 1945: one could easily say that Sydney or San Francisco were legitimate targets just like Hiroshima and Nagasaki. If the Japanese dropped an atomic bomb on Sydney Harbour on the grounds that it would have compelled Australia to surrender thereby ending the “agony of war”, would we view this as ethically justifiable or an atrocity to tally alongside the Rape of Nanking, the death camps of the Burma railroad, or the terrible human experiments conducted by Unit 731? It must be the latter, because otherwise no act, however terrible, can be prohibited and war truly becomes hell.
BY Dr. Gwilym David Blunt
Dr. Gwilym David Blunt is a Fellow of the Ethics Centre, Lecturer in International Relations at the University of Sydney, and Senior Research Fellow of the Centre for International Policy Studies. He has held appointments at the University of Cambridge and City, University of London. His research focuses on theories of justice, global inequality, and ethics in a non-ideal world.
The cost of curiosity: On the ethics of innovation
Opinion + Analysis, Science + Technology
BY Dr. Gwilym David Blunt 6 JUL 2023
The billionaire has become a ubiquitous part of life in the 21st century.
In the past many of the ultra-wealthy were content to influence politics behind the scenes in smoke-filled rooms or limit their public visibility to elite circles by using large donations to chisel their names onto galleries and museums. Today’s billionaires are not so discreet; they are more overtly influential in the world of politics, they engage in eye-catching projects such as space and deep-sea exploration, and have large, almost cult-like, followings on social media.
Underpinning the rise of this breed of billionaire is the notion that there is something special about the ultra-wealthy. That in ‘winning’ capitalism they have demonstrated not merely business acumen, but a genius that applies to the human condition more broadly. This ‘epistemic privilege’ casts them as innovators whose curiosity will bring benefits to the rest of us, and the best thing that we normal people can do is watch on from a distance. This attitude is embodied in the ‘Silicon Valley Libertarianism’ which seeks to liberate technology from the shackles imposed on it by small-minded mediocrities such as regulation. This new breed seeks great power with little interest in the corresponding responsibility or in any checks upon it.
Is this OK? Curiosity, whether about the physical world or the world of ideas, seems an uncontroversial virtue. Curiosity is the engine of progress in science and industry as well as in society. But curiosity has more than an instrumental value. Recently, Lewis Ross, a philosopher at the London School of Economics, has argued that curiosity is valuable in itself regardless of whether it reliably produces results, because it shows an appreciation of ‘epistemic goods’ or knowledge.
We recognise curiosity as an important element of a good human life. Yet, it can sometimes mask behaviour we ought to find troubling.
Hubris obviously comes to mind. Curiosity coupled with an outsized sense of one’s capabilities can lead to disaster. Take Stockton Rush, for example, the CEO of OceanGate responsible for the tragic loss of the Titan submersible. He was quoted as saying: “I’d like to be remembered as an innovator. I think it was General MacArthur who said, ‘You’re remembered for the rules you break’, and I’ve broken some rules to make this. I think I’ve broken them with logic and good engineering behind me.” The result was the deaths of five people.
While hubris is a foible on a human scale, the actions of individuals cannot be seen in isolation from the broader social contexts and system. Think, for example, of the interplay between exploration and empire. It is no coincidence that many of those dubbed ‘great explorers’, from Columbus to Cook, were agents for spreading power and domination. In the train of exploration came the dispossession and exploitation of indigenous peoples across the globe.
A similar point could be made about advances in technology. The industrial revolution was astonishing in its unshackling of the productive potential of humanity, but it also involved the brutal exploitation of working people. Curiosity and innovation need to be careful of the company they keep. Billionaires may drive innovation, but innovation is never without a cost and we must ask who should bear the burden when new technology pulls apart the ties that bind.
Yet, even if we set aside issues of direct harm, problems remain. Billionaires drive innovation in a way that shapes what John Rawls called the ‘basic structure of society’. I recently wrote an article for International Affairs giving the example of the power of the Bill and Melinda Gates Foundation in global health. Since its inception the Gates Foundation has become a key player in global health. It has used its considerable financial and social power to set the agenda for global health, but more importantly it has shaped the environment in which global health research occurs. Bill Gates is a noted advocate of ‘creative capitalism’ and views the market as the best driver for innovation. The Gates Foundation doesn’t just pick the type of health interventions it believes to be worth funding, but shapes the way in which curiosity is harnessed in this hugely important field.
This might seem innocuous, but it isn’t. It is an exercise of power. You don’t have to be Michel Foucault to appreciate that knowledge and power are deeply entwined. The way in which Gates and other philanthrocapitalists shape research naturalises their perspective. It shapes curiosity itself. The risk is that in doing so, other approaches to global health get drowned out by focussing on hi-tech market driven interventions favoured by Gates.
The ‘law of the instrument’ comes to mind: if the only tool you have is a hammer, it is tempting to treat everything as if it were a nail. By placing so much faith in the epistemic privilege of billionaires, we are causing a proliferation of hammers across the various problems of the world. Don’t get me wrong, there is a place for hammers, they are very useful tools. However, at the risk of wearing this metaphor out, sometimes you need a screwdriver.
Billionaires may be gifted people, but they are still only people. They ought not to be worshipped as infallible oracles of progress, to be left unchecked. To do so exposes the rest of us to the risk of making a world where problems are seen only through the lens created by the ultra-wealthy – and the harms caused by innovation risk being dismissed merely as the cost of doing business.
BY Dr. Gwilym David Blunt
Dr. Gwilym David Blunt is a Fellow of the Ethics Centre, Lecturer in International Relations at the University of Sydney, and Senior Research Fellow of the Centre for International Policy Studies. He has held appointments at the University of Cambridge and City, University of London. His research focuses on theories of justice, global inequality, and ethics in a non-ideal world.
The ethics of drug injecting rooms
Opinion + Analysis, Health + Wellbeing, Science + Technology
BY Zach Wilkinson 28 JUN 2023
Should we allow people to use illicit drugs if it means that we can reduce the harm they cause? Or is doing so just promoting bad behaviour?
Illicit drug use costs the Australian economy billions of dollars each year, not to mention the associated social and health costs that it imposes on individuals and communities. For the last several decades, the policy focus has been on reducing illicit drug use, including making it illegal to possess and consume many drugs.
Yet Australia’s response to illicit drug use is becoming increasingly aligned with the approach called ‘harm reduction,’ which includes initiatives like supervised injecting rooms and drug checking services, like pill testing.
Harm reduction initiatives effectively suspend the illegality of drug possession in certain spaces to prioritise the safety and wellbeing of people who use drugs. Supervised injecting rooms allow people to bring in their illicit drugs, acquire clean injecting equipment and receive guidance from medical professionals. Similarly, pill testing creates a space for festival-goers to learn about the contents and potency of their drugs, tacitly accepting that they will be consumed.
Harm reduction is best understood in contrast with an abstinence-based approach, which has the goal of ceasing drug use altogether. Harm reduction does not enforce abstinence, instead focusing on reducing the adverse events that can result from unsafe drug use such as overdose, death and disease.
Yet there is a great deal of debate around the ethics of harm reduction, with some people seeing it as being the obvious way to minimise the impact of drug use and to help addicts battle dependence, while those who favour abstinence often consider it to be unethical in principle.
Much of the debate is muddied by the fact that those who embrace one ethical perspective often fail to understand the issue from the other perspective, resulting in both sides talking past each other. In order for us to make an informed and ethical choice about harm reduction, it’s important to understand both perspectives.
The ethics of drug use
Deontology and consequentialism are two moral theories that inform the various views around drug use. Deontology focuses on what kinds of acts are right or wrong, judging them according to moral norms or whether they accord with things like duties and human rights.
Immanuel Kant famously argued that we should only act in ways that we would wish to become universal laws. Accordingly, if you think it’s okay to take drugs in one context, then you’re effectively endorsing drug use for everyone. So a deontologist might argue that people should not be allowed to use illicit drugs in supervised injecting rooms, because we would not want to allow drug use in all spaces.
An abstinence-based approach embodies this reasoning in its focus on stopping illicit drug use through treatment and incarceration. It can also explain the concern that condoning drug use in certain spaces sends a bad message to the wider community, as argued by John Barilaro in the Sydney Morning Herald:
“…it’d be your taxpayer dollars spent funding a pill-testing regime designed to give your loved ones and their friends the green light to take an illicit substance at a music festival, but not anywhere else. If we’re to tackle the scourge of drugs in our regional towns and cities, we need one consistent message.”
However, deontology can also be inflexible when it comes to dealing with different circumstances or contexts. Abstinence-based approaches can apply the same norms to long-term drug users as they do to teenagers who have not yet engaged in illicit drug use. Given the persistently high rates of morbidity and mortality in the former group, some may prefer an alternative approach that highlights this context and these consequences in its moral reasoning.
Harms and benefits
Enter consequentialism, which judges good and bad in terms of the outcomes of our actions. Harm reduction is strongly informed by consequentialism in asserting that the safety and wellbeing of people who use drugs are of primary concern. Whether drug use should be allowed in a particular space is answered by whether things like death, overdose and disease are expected to increase or decrease as a result. This is why scientific evaluations play an important role in harm reduction advocacy. As Stephen Bright argued in The Conversation:
“…safe injecting facilities around the world have been found to reduce the number of fatal and non-fatal drug overdoses and the spread of blood borne viral infections (including HIV and hepatitis B and C) both among people who inject drugs and in the wider community.”
This approach also considers other potential societal harms, such as public injections and improper disposal of needles, as well as burden on the health system, crime and satisfaction in the surrounding community.
This focus on consequences can also lead to the moral endorsement of some counter-intuitive initiatives. Because a consequentialist perspective will look at a wide range of the outcomes associated with a program, including the cost and harms caused by criminalisation, such as policing and incarceration, it can also conclude that some dangerous drugs should be decriminalised or legalised, if doing so would reduce their overall harm.
While a useful way to begin thinking about Australia’s approach to drug use, there is of course nuance worth noting. A deontological abstinence-based approach assumes that establishing a drug-free society is even possible, which is highly contested by harm reduction advocates. Disagreement on this possibility seems to reflect intuitive beliefs about people and about drugs. This is perhaps part of why discussions surrounding harm reduction initiatives often become so polarised. Nevertheless, these two moral theories can help us begin to understand how people view quite different dimensions of drug treatment and policy as ethically important.
BY Zach Wilkinson
Zach is currently in the public health field researching and writing on illicit drugs against a backdrop of academic philosophy, psychology and the broader health sciences by way of USYD and UNSW.
License to misbehave: The ethics of virtual gaming
Opinion + Analysis, Science + Technology
BY Kara Jensen-Mackinnon 20 JUN 2023
Gaming was once just Pacman chasing pixelated ghosts through a digital darkness but, now, as we tumble headlong into virtual realities, ethical voids are being filled by the same ghosts humanity created IRL.
As a kid I sank an embarrassing amount of time into World of Warcraft online; my elven ranger was named Vereesa Windrunner and rode a Silver Covenant hippogryph. (Tl;dr: I was a cool chick with a bow and arrow and a flying horse.)
Once I came across two other players and we chatted – then they attacked, took all my things and left me for dead. I was mad because walking back to the nearest town without my trusty hippogryph would take a good half hour.
Gamers call this behaviour “griefing”; using the game in unintended ways to destroy or steal things another player values – even when that is not the objective. It’s pointless, it’s petty, in other words it’s being a huge jerk online.
The only way to cope with that big old dose of griefing was rolling back from my screen, turning off the console and making a cup of tea.
But gaming is changing and virtual reality means logging off won’t be so simple – as exciting and daunting as that sounds.
500 years ago, or whenever Pacman was invented, gaming largely amounted to you being a little dot moving around the screen, eating slightly smaller dots, and avoiding slightly larger dots.
Game developers have endlessly fought to create more realistic, more immersive, and more genuine experiences ever since.
Gaming now stands as its own art, even inspiring seriously good television (try not to cry watching Last of Us) – and the long awaited leap into convincing, powerful VR (virtual reality) is now upon us.
But in gaming’s few short decades we have already begun to realise the ethical dilemmas that come with a digital avatar – and the new griefing is downright criminal.
Last year’s Dispatches investigation found virtual spaces were already rife with hate speech, sexual harassment, paedophilia, and avatars simulating sex in spaces accessible to children.
In the Metaverse too – which Mark Zuckerberg hopes we will all inhabit – people allege they were groped and abused, sexually harassed and assaulted.
In one shocking experience a woman even claimed she was “virtually gang raped” while in the Metaverse.
So how can we better prepare for the ethical problems we are going to encounter as more people enter this brave new world of VR gaming? How will the lines between fantasy and reality blur when we find ourselves truly positioned in a game? What does “griefing” become in a world where our online avatars and real lives overlap? And why should we trust the creepy tech billionaires who are crafting this world with our safety and security?
The Ethics Centre’s Senior Philosopher and avid gamer Dr Tim Dean spent his formative years playing (and being griefed) in the game, Ultima Online – so he’s just the right person to ask.
Let the games begin
VR still requires cumbersome headsets. The most well known, which was bought by Facebook, is called Oculus but there are others.
Once you’re strapped in, you can turn your head left and right and you see the expansive computer generated landscape stretching out before you.
Your hands are the fiery gauntlets of a wizard, your feet their armoured boots, you might have more rippling abs than you’re used to – but you get the point, you’re seeing what your character sees.
The space between yourself and your avatar quickly closes.
Tim says there is a kind of “verisimilitude” that makes it feel like you’re right there – for better or for worse.
“If you have a greater sense of identity with your avatar, it magnifies the feelings of violation,” he said.
Videogames were once an escape from reality, a way to unshackle yourself to the point where you can steal a car, rob a bank and even kill, but Dean suggests this escapism creates new moral quandaries once we become our characters.
“A fantasy can give you an opportunity to get some satisfaction where you might not otherwise have,” he said.
“But also, if your desires are unhealthy – if you want to be violent, if you want to take things from people, if you enjoy experiencing other people’s suffering – then a fantasy can also allow you to play that out.”
Make your own rules
Despite the grim headlines, Dean has hope, saying that “norms emerge” in these virtual moral voids and begin to form between users – or, as they used to be called, “people”.
“Where there are no explicit top down norms that prevent people from harming or griefing other people, sometimes bottom up community norms emerge,” he said.
Dean’s PhD is about the birth of norms: the path from a lawless, warring chaos to a self-regulating society, as humans learned about the impacts they were having on one another.
It sounds promising, but when Metaverse headsets start at $1500 you quickly realise that the gates to the future open only to the privileged; often, it is wealthy white men who become the early adopters.
Mark Zuckerberg seems to have the same concerns.
Meta proposed a solution in the form of a virtual “personal bubble” to protect people from groping… but aside from it feeling very lame to walk around in a big safety bubble, it demonstrates that there’s no attempt to curb the bad behaviour in the first place.
The solution, in the real world, to combat abuse has typically come in the form of including people from diverse backgrounds, more women, more people of colour, all sharing in the power structure.
For virtual reality – now is the time to have that discussion, not after everyone has a horror story.
Dean thinks there are a few big questions yet to be answered:
Will people, en masse, act horribly in the virtual world?
How do you change behaviour in that world without imposing oppressive rules or… bubbles? Who gets to decide what those rules are? Would we be happy with the rules Meta comes up with? At least in a democracy, we have some power to choose who makes up the rules. That’s not the case with most technology companies.
And how does behaviour in the virtual world translate to our behaviour outside of the virtual world?
Early geeks hoped the internet would be a virtual town square with people sharing ideas – a vision that missed racist chatbots, revenge porn and swatting.
Dean hopes the VR landscape might offer a clean slate, a chance at least to learn from the past and increase people’s capacity for empathy.
“We can literally put on goggles and walk a mile in someone else’s shoes,” he said.
So maybe there’s hope yet.
BY Kara Jensen-Mackinnon
Kara Jensen-Mackinnon is an award-winning creative producer and journalist. She currently works at the 7am podcast, and has created podcasts for The Guardian, ABC RN, The Sydney Opera House, The Sydney Festival and the UN World Food Program.
A framework for ethical AI
Artificial intelligence has untold potential to transform society for the better. It also has equal potential to cause untold harm. This is why it must be developed ethically.
Artificial intelligence is unlike any other technology humanity has developed. It will have a greater impact on society and the economy than fossil fuels, it’ll roll out faster than the internet and, at some stage, it’s likely to slip from our control and take charge of its own fate.
Unlike other technologies, AI – particularly artificial general intelligence (AGI) – is not the kind of thing that we can afford to release into the world and wait to see what happens before regulating it. That would be like genetically engineering a new virus and releasing it in the wild before knowing whether it infects people.
AI must be carefully designed with purpose, developed to be ethical and regulated responsibly. Ethics must be at the heart of this project, both in terms of how AI is developed and also how it operates.
This sentiment is the main reason why many of the world’s top AI researchers, business leaders and academics signed an open letter in March 2023 calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, in order to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”.
Some don’t think a pause goes far enough. Eliezer Yudkowsky, the lead researcher at the Machine Intelligence Research Institute, has called for a complete, worldwide and indefinite moratorium on training new AI systems. He argued that the risks posed by unrestrained AI are so great that countries ought to be willing to use military action to enforce the moratorium.
It is probably impossible to enforce a pause on AI development without backing it with the threat of military action. Few nations or businesses will willingly risk falling behind in the race to commercialise AI. However, few governments are likely to be willing to go to war to force them to pause.
While a pause is unlikely to happen, the ethical challenge facing humanity is that the pace of AI development is significantly faster than the pace at which we can deliberate and resolve ethical issues. The commercial and national security imperatives are also hastening the development and deployment of AI before safeguards have been put in place. The world now needs to move with urgency to put these safeguards in place.
Ethical by design
At the centre of ethics is the notion that we must take responsibility for how our actions impact the world, and we should direct our action in ways that are beneficent rather than harmful.
Likewise, if AI developers wish to be rewarded for the positive impact that AI will have on the world, such as by deriving a profit from the increased productivity afforded by the technology, then they must also accept responsibility for the negative impacts caused by AI. This is why it is in their interest (and ours) that they place ethics at the heart of AI development.
The Ethics Centre’s Ethical by Design framework can guide the development of any kind of technology to ensure it conforms to essential ethical standards. This framework should be used by those developing AI, by governments to guide AI regulation, and by the general public as a benchmark to assess whether AI conforms to the ethical standards they have every right to expect.
The framework includes eight principles:
Ought before can
This refers to the fact that just because we can do something, it doesn’t mean we should. Sometimes the most ethically responsible thing is to not do something.
If we have reasonable evidence that a particular AI technology poses an unacceptable risk, then we should cease development, or at least delay until we are confident that we can reduce or manage that risk.
We have precedent in this regard. Bans are already in place around several technologies, such as human genetic modification and biological weapons, imposed by governments or self-imposed by researchers who recognise that they pose an unacceptable risk or would violate ethical values. There is nothing in principle stopping us from deciding to do likewise with certain AI technologies, such as those that allow the production of deep fakes, or fully autonomous AI agents.
Non-instrumentalism
Most people agree we should respect the intrinsic value of things like humans, sentient creatures, ecosystems or healthy communities, among other things, and not reduce them to mere ‘things’ to be used for the benefit of others.
So AI developers need to be mindful of how their technologies might appropriate human labour without offering compensation, as has been highlighted with some AI image generators that were trained on the work of practising artists. It also means acknowledging that job losses caused by AI have more than an economic impact and can injure the sense of meaning and purpose that people derive from their work.
If the benefits of AI come at the cost of things with intrinsic value, then we have good reason to change the way it operates or delay its rollout to ensure that the things we value can be preserved.
Self-determination
AI should give people more freedom, not less. It must be designed to operate transparently so individuals can understand how it works, how it will affect them, and then make good decisions about whether and how to use it.
Given the risk that AI could put millions of people out of work, reducing incomes and disempowering them while generating unprecedented profits for technology companies, those companies must be willing to allow governments to redistribute that new wealth fairly.
And if there is a possibility that AGI might use its own agency and power to contest ours, then the principle of self-determination suggests that we ought to delay its development until we can ensure that humans will not have their power of self-determination diminished.
Responsibility
By its nature, AI is wide-ranging in application and potent in its effects. This underscores the need for AI developers to anticipate and design for all possible use cases, even those that are not core to their vision.
Taking responsibility means developing AI with an eye to reducing the possibility of these negative use cases becoming a reality, and mitigating them when they cannot be avoided.
Net benefit
There are few, if any, technologies that offer pure benefit without cost. Society has proven willing to adopt technologies that provide a net benefit as long as the costs are acknowledged and mitigated. One case study is the fossil fuel industry. The energy generated by fossil fuels has transformed society and improved the living conditions of billions of people worldwide. Yet once the public became aware of the cost that carbon emissions impose on the world via climate change, it demanded that emissions be reduced in order to bring the technology towards a point of net benefit over the long term.
Similarly, AI will likely offer tremendous benefits, and people might be willing to incur some high costs if the benefits are even greater. But this does not mean that AI developers can ignore the costs or avoid taking responsibility for them.
An ethical approach means doing whatever they can to reduce the costs before they happen and mitigating them when they do, such as by working with governments to ensure there are sufficient technological safeguards against misuse and social safety nets in place should the costs rise.
Fairness
Many of the latest AI technologies have been trained on data created by humans, and they have absorbed the many biases built into that data. This has resulted in AI acting in ways that negatively discriminate against people of colour or those with disabilities. There is also a significant global disparity in access to AI and the benefits it offers. These are cases where the AI has failed the fairness test.
AI developers need to remain mindful of how their technologies might act unfairly and how the costs and benefits of AI might be distributed unfairly. Diversity and inclusion must be built into AI from the ground level through training data and methods, and AI must be continuously monitored to see if new biases emerge.
Accessibility
Given the potential benefits of AI, it must be made available to everyone, including those who might have greater barriers to access, such as those with disabilities, older populations, or people living with disadvantage or in poverty. AI has the potential to dramatically improve the lives of people in each of these categories, if it is made accessible to them.
Purpose
Purpose means being directed towards some goal or solving some problem. And that problem needs to be more than just making a profit. Many AI technologies have wide applications, and many of their uses have not even been discovered yet. But this does not mean that AI should be developed without a clear goal and simply unleashed into the world to see what happens.
Purpose must be central to the development of ethical AI so that the technology is developed deliberately with human benefit in mind. Designing with purpose requires honesty and transparency at all stages, which allows people to assess whether the purpose is worthwhile and achieved ethically.
The road to ethical AI
We should continue to press for AI to be developed ethically. And if technology companies are reluctant to pay careful attention to ethics, then we should call on our governments to impose sensible regulations on them.
The goal is not to hinder AI but to ensure that it operates as intended and that the benefits flow on to the greatest possible number. AI could usher in a fourth industrial revolution. It would pay for us to make this one even more beneficial and less disruptive than the past three.
As a Knowledge Partner in the Responsible AI Network, The Ethics Centre helps provide vision and discussion about the opportunity presented by AI.
BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
Thought experiment: "Chinese room" argument
Thought experiment: “Chinese room” argument
ExplainerScience + Technology
BY The Ethics Centre 10 MAR 2023
If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?
Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.
Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.
But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.
“The Chinese room”
Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.
Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you will be the book of instructions.
Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.
You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.
This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.
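To make the mechanics concrete, here is a minimal sketch in Python. It is not anything Searle himself wrote: the tiny “rulebook” and the phrases in it are invented purely for illustration, but the operator in the room reduces to exactly this kind of lookup. Symbols come in, symbols go out, and nowhere in the loop does anything understand what they mean.

```python
# A toy "Chinese room": pure symbol manipulation with zero comprehension.
# The rulebook below is invented for illustration. A real room (or a real
# language model) would need unimaginably many rules, but the principle is
# the same: match the shapes on the incoming card and return the listed reply.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is nice."
}

def operator(slip_under_door: str) -> str:
    """Follow the instructions: look up the incoming characters and return
    the prescribed response. No translation, no understanding, just lookup."""
    return RULEBOOK.get(slip_under_door, "对不起，我不明白。")  # "Sorry, I don't understand."

# To the Chinese speaker outside, this reply looks perfectly fluent.
print(operator("你好吗？"))
```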
Functionalism and Strong AI
Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.
Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does.
This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.
Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.
Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.
ChatGPT room
While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can all see it brought to life whenever we log into ChatGPT. Large language models like ChatGPT are the Chinese room argument made real: they are incredibly sophisticated versions of the filing cabinet, reflecting the corpus of text on which they were trained, and of the book of instructions, with the rules replaced by probabilities that decide which character or word to display next.
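As a rough illustration of that “probabilities for the next word” idea, here is a toy sketch in Python. Everything in it is made up for the example: a real model learns its probabilities from an enormous training corpus and conditions on everything typed so far. But the basic move is the same: the system picks the next word because the statistics favour it, not because it knows what the words mean.

```python
import random

# A toy next-word picker: a drastically simplified stand-in for how a large
# language model chooses what to say. The contexts and probabilities below
# are invented for illustration; a real model estimates them from its
# training data across tens of thousands of possible tokens.

NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "ran": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def next_word(context):
    """Sample the next word from the stored probabilities for the given
    two-word context. Pattern-matching over text statistics, nothing more."""
    candidates = NEXT_WORD_PROBS[context]
    words = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one word at a time, until the context
# is no longer in the table.
text = ["the", "cat"]
while (text[-2], text[-1]) in NEXT_WORD_PROBS:
    text.append(next_word((text[-2], text[-1])))
print(" ".join(text))  # e.g. "the cat sat on" or "the cat ran"
```

Scale that table up by many orders of magnitude and condition it on whole conversations rather than two words, and you get something much closer to an LLM.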
So even if we feel that ChatGPT – or a future more capable LLM – understands what it’s saying, if we believe that the person in the Chinese room doesn’t understand Chinese, and that LLMs operate in much the same way as the Chinese room, then we must conclude that it doesn’t really understand what it’s saying.
This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty, then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things.
An LLM might still be able to express ethical statements and follow prescribed ethical guidelines imposed by its creators – as has been the case with the creators of ChatGPT limiting its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room.