Hallucinations that help: Psychedelics, psychiatry, and freedom from the self

BY Joseph Earp 22 FEB 2022
Dr Chris Letheby, a pioneer in the philosophy of psychedelics, is looking at a chair. He is taking in its individuated properties – its colour, its shape, its location – and all the while, his brain is binding these properties together, making them parts of a collective whole.
This, Letheby explains, is also how we process the self. We know that there are a number of distinct properties that make us who we are: the sensation of being in our bodies, the ability to call to mind our memories or to follow our own trains of thought. But there is a kind of mental glue that holds these sensations together, a steadfast, mostly uncontested belief in the concrete entity to which we refer when we use the word “me.”
“Binding is a theoretical term,” Letheby explains. “It refers to the integration of representational parts into representational wholes. We have all these disparate representations of parts of our bodies and who we were at different points in time and different roles we occupy and different personality traits. And there’s a very high-level process that binds all of these into a unified representation; that makes us believe these are all properties and attributes of one single thing. And different things can be bound to this self model more tightly.”
Freed from the Self
So what happens when these properties become unbound from one another – when we lose a cohesive sense of who we are? This, after all, is the sensation that many experience when taking psychedelic drugs. The “narrative self” – the belief that we are an individuated entity that persists through time – dissolves. We can find ourselves at one with the universe, deeply connected to those around us.
Perhaps this sounds vaguely terrifying – a kind of loss. But as Letheby points out, this “ego dissolution” can have extraordinary therapeutic results in those who suffer from addiction, or experience deep anxiety and depression.
“People can get very harmful, unhealthy, negative forms of self-representation that become very rigidly and deeply entrenched,” Letheby explains.
“This is very clear in addiction. People very often have all sorts of shame and negative views of themselves. And they find it very often impossible to imagine or to really believe that things could be different. They can’t vividly imagine a possible life, a possible future in which they’re not engaging in whatever the addictive behaviours are. It becomes totally bound up with who they are. It’s not experienced as a belief, it’s experienced as reality itself.”
This, Letheby and his collaborator Philip Gerrans write, is key to the ways in which psychedelics can improve our lives. “Psychedelics unbind the self model,” he says. “They decrease the brain’s confidence in a belief like, ‘I am an alcoholic’ or ‘I am a smoker’. And so for the first time in perhaps a very long time [addicts] are able to not just intellectually consider, but to emotionally and experientially imagine a world in which they are not an alcoholic. Or if we think about anxiety and depression, a world in which there is hope and promise.”
A comforting delusion?
Letheby’s work falls into a naturalistic framework: he defers to our best science to make sense of the world around us. This is an unusual position, given some philosophers have described psychedelic experiences as being at direct odds with naturalism. After all, a lot of people who trip experience what have been called “metaphysical hallucinations”: false beliefs about the “actual nature” of the universe that fly in the face of what science gives us reason to believe.
For critics of the psychedelic experience, then, these hallucinations can be described as little more than comforting falsehoods, foisted upon the sick – whether mentally or physically – and the dying. They aren’t revelations. They are tricks of the mind, and their epistemic value remains in question.
But Letheby disagrees. He adopts the notion of “epistemic innocence” from the work of philosopher Lisa Bortolotti: the idea that some falsehoods can actually make us better epistemic agents. “Even if you are a naturalist or a materialist, psychedelic states aren’t as epistemically bad as they have been made out to be,” he says, simply. “Sometimes they do result in false beliefs or unjustified beliefs … But even when psychedelic experiences do lead people to false beliefs, if they have therapeutic or psychological benefits, they’re likely to have epistemic benefits too.”
To make this argument, Letheby returns again to the archetype of the anxious or depressed person. This individual, when suffering from their illness, commonly retreats from the world, talking less to their friends and family, and thus harming their own epistemic faculties – if you don’t engage with anyone, you can’t be told that you are wrong, can’t be given reasons for updating your beliefs, can’t search out new experiences.
“If psychedelic states are lifting people out of their anxiety, their depression, their addiction and allowing people to be in a better mode of functioning, then my thought is, that’s going to have significant epistemic benefits,” Letheby says. “It’s going to enable people to engage with the world more, be curious, expose their ideas to scrutiny. You can have a cognition that might be somewhat inaccurate, but can have therapeutic benefits, practical benefits, that in turn lead to epistemic benefits.”
As Letheby has repeatedly noted in his work, the study of the psychiatric benefits of psychedelics is in its early phases, but the future looks promising. More and more people are experiencing these hallucinations – these new, critical beliefs that unbind the self – and more and more people are getting well. There is, it seems, a possible world where many of us are freed from the rigid notions of who we are and what we want, unlocked from the cage of the self, and walking, for the first time in a long time, in the open air.

BY Joseph Earp
Joseph Earp is a poet, journalist and philosophy student. He is currently undertaking his PhD at the University of Sydney, studying the work of David Hume.
Meet Dr Tim Dean, our new Senior Philosopher

BY The Ethics Centre 21 FEB 2022
Ethics is about engaging in conversations to understand different perspectives and ways in which we can approach the world.
Which means we need a range of people participating in the conversation.
That’s why we’re excited to share that we have recently appointed Dr Tim Dean as our Senior Philosopher. An award-winning philosopher, writer, speaker and honorary associate with the University of Sydney, Tim has developed and delivered philosophy and emotional intelligence workshops for schools and businesses across Australia and the Asia Pacific, including Meriden and St Mark’s high schools, The School of Life, Small Giants and businesses including Facebook, Commonwealth Bank, Aesop, Merivale and Clayton Utz.
We sat down with Tim to discuss his views on morality, social media, cancel culture and what ethics means to him.
What drew you to the study of philosophy?
Children are natural philosophers, constantly asking “why?” about everything around them. I just never grew out of that tendency, much to the chagrin of my parents and friends. So when I arrived at university, I discovered that philosophy was my natural habitat, furnishing me with tools to ask “why?” better, and revealing the staggering array of answers that other thinkers have offered throughout the ages. It has also helped me to identify a sense of meaning and purpose that drives my work.
What made you pursue the intersection of science and philosophy?
I see science and philosophy as continuous. They are both toolkits for understanding the world around us. In fact, technically, science is a sub-branch of philosophy (even if many scientists might bristle at that idea) that specialises in questions that are able to be investigated using empirical tools, hence its original name of “natural philosophy”. I have been drawn to science as much as philosophy throughout my life, and ended up working as a science writer and editor for over 10 years. And my study of biology and evolution transformed my understanding of morality, which was the subject of my PhD thesis.
How does social media skew our perception of morals?
If you wanted to create a technology that gave a distorted perception of the world, that encouraged bad faith discourse and that promoted friction rather than understanding, you’d be hard pressed to do better than inventing social media. Social media taps into our natural tendencies to create and defend our social identity, it triggers our natural outrage response by feeding us an endless stream of horrific events, it rewards us with greater engagement when we go on the offensive while preventing us from engaging with others in a nuanced way. In short, it pushes our moral buttons, but not in a constructive way. So even though social media can do good, such as by raising awareness of previously marginalised voices and issues, overall I’d call it a net negative for humanity’s moral development.
How do you think the pandemic has changed the way we think about ethics?
The COVID-19 pandemic has both expanded and shrunk our world. On the one hand, lockdowns and border closures have grounded us in our homes and our local communities, which in many cases has been a positive thing, as people get to know their neighbours and look out for each other. On the other, it has expanded our world as we’ve been stuck behind screens watching a global tragedy unfold, often without any real power to fix it. At the same time, it has made us more sensitive to how our individual actions affect our entire community, and has caused us to think about our obligations to others. In that sense, it has brought ethics to the fore.
Tell us a little about your latest book ‘How We Became Human, And Why We Need to Change’?
I’ve long been fascinated by the story of how we evolved from being a relatively anti-social species of ape a few million years ago to being the massively social species we are today. Morality has played a key part in that story, helping us to have empathy for others, motivating us to punish wrongdoing and giving us a toolkit of moral norms that can guide our community’s behaviour. But in studying this story of moral evolution, I came to realise that many of the moral tendencies we have and many of the moral rules we’ve inherited were designed in a different time, and they often cause more harm than good in today’s world. My book explores several modern problems, like racism, sexism, religious intolerance and political tribalism, and shows how they are all, in part, products of our evolved nature. I also argue that we need to update our moral toolkit if we want to live and thrive in a modern, globalised and diverse world, and that means letting go of past solutions and inventing new ones.
How do you think the concepts of right and wrong will change in the coming years?
The world is changing faster than ever before. It’s also more diverse and fragmented than ever before. This means that the moral rules that we live by and the values that drive us are also changing faster than ever before – often faster than many people can keep up. Moral change will only continue, especially as new generations challenge the assumptions and discard the moral baggage of past generations. We should expect that many things we took for granted will be challenged in the coming decades. I foresee a huge challenge in bringing people along with moral change rather than leaving them behind.
What are your thoughts on the notion of ‘cancel culture’?
There are no easy answers when it comes to the limits of free speech. We value free speech to the degree that it allows us to engage with new ideas, seek the truth, express ourselves and hear from others. But that freedom comes at a cost, particularly when it allows bad faith actors to spread misinformation, muddy the truth or dehumanise others. There are some types of speech that ought to be shut down, but we must be careful how the power to shut down speech is used. In the same way that some speech can be in bad faith, so too can efforts to shut it down. Some instances of “cancelling” might be warranted, but many are a symptom of mob culture that seeks to silence views the mob opposes rather than prevent bad kinds of speech. Sometimes it’s motivated by a sense that a speaker is not just mistaken but morally corrupt, which prevents people from engaging with them and attempting to change their views. This is why one thing I advocate strongly for is rebuilding social capital, or the trust and respect that enables good faith discourse to occur at all. It’s only when we have that trust and respect that we will be able to engage in good faith rather than feel like we need to resort to cancelling or silencing people.
Lastly, the big one – what does ethics mean to you?
Ethics is what makes our species unique. No other creature can live alongside and cooperate with other individuals on the scale that we do. This is all made possible by ethics, which is our ability to consider how we ought to behave towards others and what rules we should live by. It’s our superpower, it’s what has enabled our species to spread across the globe. But understanding and engaging with ethics, figuring out our obligations to others, and adapting our sense of right and wrong to a changing world, is our greatest and most enduring challenge as a species.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
To see no longer means to believe: The harms and benefits of deepfake

BY Mehhma Malhi 18 FEB 2022
The use of deepfake technology is increasing as more companies devise different models.
It is a form of technology whereby a user can upload an image and synthetically alter a video of a real person, or create a picture of a person who does not exist. Many people have raised concerns about the harmful possibilities of these technologies. Yet the notion of deception at the core of this technology is not entirely new. History is filled with examples of fraud, identity theft and counterfeit artworks, all of which are based on imitation or assuming a person’s likeness.
In 1846, the oldest gallery in the US, The Knoedler, opened its doors. By supplying art to some of the most famous galleries and collectors worldwide, it gained recognition as a trusted source of expensive artwork – works by Rothko and Pollock among them. However, unlike many other galleries, The Knoedler allowed private citizens to purchase the art pieces on display. In 2009, Ann Freedman, who had been appointed gallery director a decade prior, was famously prosecuted for knowingly selling fake artworks. After several buyers sought authentication and valuation of their purchases for insurance purposes, the forgeries came to light. The scandal was sensational, not only because of the sheer number of artworks involved in a deception that lasted years, but also because millions of dollars were scammed from New York’s elite.
The grand art institution of NYC fell as the gallery lost its credibility and eventually shut down. Though the forgeries were near-exact replicas, almost indistinguishable from genuine works, they lacked the emotion and originality of the artists they imitated; with that understanding of the artist and the meaning of the works lost, the paintings lost both their sentimental and monetary value.
Yet this betrayal is not as immoral as stealing someone’s identity or committing fraud by forging someone’s signature. Unlike artwork, when someone’s identity is stolen, the person who has taken it gains the power to define how the victim is perceived. For example, catfishing online allows a person to misrepresent not only themselves but also the person whose identity they are using. This is because they ascribe specific values and activities to that person’s being and change how they are represented online.
Similarly, deepfakes allow people to create entirely fictional personas or take the likeness of a person and distort how they represent themselves online. Online self-representations are already augmented to some degree by the person. For instance, most individuals on Instagram present a highly curated version of themselves that is tailored specifically to garner attention and draw particular opinions.
But when that persona is out of the person’s control, it can spur rumours that become embedded as fact due to the nature of the internet. Celebrity tabloids are a case in point. Celebrities’ love lives are continually speculated about, and these rumours often spread and cement themselves until the celebrity comes out to deny the claims. Even then, the story has, to some degree, impacted their reputation, as those tabloids will not be removed from the internet.
Maintaining control of one’s online image is paramount, as it ensures a person’s autonomy and ability to consent. When deepfakes are created of an existing person, they strip away exactly that control.
Before delving further into the ethical concerns, understanding how this technology is developed may shed light on some of the issues that arise from such a technology.
The technology is built on deep learning, a type of artificial intelligence based on layered neural networks. A deepfake model pairs two networks, known as the generator and the discriminator. The former creates fake content, and the latter must determine whether each piece of content is real or fake. The discriminator’s verdicts are fed back to the generator, which adjusts itself to produce fakes that are harder to detect, while the discriminator in turn learns from its own mistakes. Together, this arrangement is known as a generative adversarial network (GAN). Through repeated rounds of this contest, it learns the patterns needed to produce convincing fake images.
With this type of model, if the discriminator becomes too accurate too quickly, it gives the generator almost no useful feedback from which to improve; conversely, the generator can get stuck producing a narrow range of images that reliably fool the discriminator. However, in addition to these technical difficulties, the technology gives rise to several serious ethical concerns.
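To make the generator–discriminator loop concrete, here is a minimal sketch in Python using PyTorch. The framework, network sizes and variable names are illustrative assumptions – the article does not describe any particular implementation, and real deepfake systems are vastly larger – but the adversarial structure is the same.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes, for illustration only

# The generator turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The discriminator outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real images from generated ones.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes.detach()), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator: its only feedback is the discriminator's
    #    verdict, so it is rewarded for fakes the discriminator calls real.
    g_loss = loss_fn(discriminator(fakes), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each pass through `train_step` makes the discriminator slightly better at spotting fakes and the generator slightly better at producing them – the arms race that, at scale, yields images difficult to distinguish from photographs.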
Firstly, there have been concerns regarding political safety and women’s safety. Deepfake technology has advanced to the extent that it can compile multiple photos into a video. At first this seemed harmless, as many early adopters began using the technology in 2019 to make clips of politicians and celebrities singing along to funny songs. However, it has also been used to create videos of politicians saying provocative things.
Unlike Photoshop and other editing apps, which require a lot of skill or time to alter images convincingly, deepfake technology is much more straightforward, as it is attuned to mimicking a person’s voice and actions. Couple the technology’s precision in developing realistic images with the vast entity that we call the internet, and these videos risk entering echo chambers and epistemic bubbles where people may not know they are fake. Therefore, one primary concern regarding deepfake videos is that they can be used to assert or consolidate dangerous thinking.
These tools could be used to edit photos or create videos that damage a person’s online reputation, and although they may be refuted or proven fake, the images and their effects will remain. Recently, countries such as the UK have seen demands for legislation that limits deepfake technology and violence against women. Specifically, there is a slew of apps that “nudify” any individual, and they have been used predominantly against women. All that is required of users is to upload an image of a person; one version of such a site gained over 35 million hits in a few days. Used in this manner, deepfakes create non-consensual pornography that can be used to manipulate women, and the UK has accordingly called for stronger criminal laws on harassment and assault. As people’s reality increasingly merges with the virtual world, regulating these technologies becomes paramount to protecting individuals.
However, as with any technology, there are also positive uses. Deepfake technology can create learning tools for medicine and education, and can serve as an accessibility feature within technology. It can recreate figures from history, and it has applications in gaming and the arts. In medicine, it can render synthetic patients whose data can be used in research, protecting real patients’ information and autonomy while still providing researchers with relevant data. Deepfake tech has even been used in marketing, helping small businesses promote their products by pairing them with celebrities.
Deepfake technology was developed by academics but popularised by online forums, and its first widespread use was not benign: visualising how certain celebrities would look in compromising positions. Its genuine benefits were only conceptualised by various tech groups after the basis for the technology had been developed.
Such technology often comes to fruition simply through a developer’s will and, given the lack of regulation, is often released online unchecked.
While there are extensive benefits to such technology, there need to be stricter regulations, and people who abuse its scope ought to be held accountable. As our present reality merges with virtual spaces, a person’s online presence will only grow in importance, and stronger rules must be put in place to protect it.
Users should be held accountable for manipulating people’s likenesses and stripping away their autonomy; developers, more pointedly, must be held responsible for using their knowledge to build deepfake apps that actively harm.
To avoid a fallout like Knoedler’s, where distrust, scepticism and hesitancy took root in the art community, we must alert individuals whenever deepfake technology is employed – even in cases where the use is positive, we should be transparent that it has been used. Some websites teach users how to differentiate between real and fake images, and others process images to determine their authenticity.
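As a rough illustration of the automated checks mentioned above, the sketch below shows the general shape of an image-authenticity classifier in Python/PyTorch. Everything here – the class name, the architecture, the sizes – is a hypothetical simplification; real detection services are far deeper and are trained on large corpora of known real and generated images.

```python
import torch
import torch.nn as nn

class FakeImageDetector(nn.Module):
    """Toy binary classifier: scores an image as real (0) or fake (1)."""

    def __init__(self) -> None:
        super().__init__()
        # A small convolutional stack that summarises the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = self.features(image).flatten(1)
        return torch.sigmoid(self.classifier(x))  # probability image is fake

# Usage: a score near 1.0 would flag a likely deepfake
# (this network is untrained, so the output here is illustrative only).
detector = FakeImageDetector()
score = detector(torch.randn(1, 3, 224, 224))
```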
Overall, this technology can help individuals gain agency; however, it can also limit another person’s right to autonomy and privacy. This type of AI brings unique awareness to the need for balance in technology.
BY Mehhma Malhi
Mehhma recently graduated from NYU having majored in Philosophy and minoring in Politics, Bioethics, and Art. She is now continuing her study at Columbia University and pursuing a Masters of Science in Bioethics. She is interested in refocusing the news to discuss why and how people form their personal opinions.
The great resignation: Why quitting isn't a dirty word

BY Jack Derwin 17 FEB 2022
More than 47 million Americans quit their jobs last year, a new record for the United States. While it is most obvious in North America, a form of ‘The Great Resignation’ phenomenon is showing up in Australia as well.
Recent surveys suggest that almost one in two Australian workers are currently looking to switch jobs, with more than one million people accepting new ones between September and November alone. That last detail matters: it makes the local trend more akin to a ‘Great Reshuffle’, in the words of Australia’s own Treasurer.
The fact is most people aren’t throwing off the shackles of capitalism and running from the workforce altogether. Rather, an astounding number are simply searching for something better – and fast.
Workers are motivated to leave
The pandemic has understandably taken a toll. Exhausted frontline and public-facing workers have operated under heavy stress for two years. If they haven’t been locked down or quarantined, then they have faced the genuine risk of contracting the virus. It’s no wonder, then, that the highest number of resignations has come from healthcare, with retail not far behind. Meanwhile, sectors like the arts have been quietly decimated.
Professionals fortunate enough to work from home have faced a different set of challenges, whether losing contact with colleagues or having the lines between their professional and personal lives blur.
Whether the pandemic led to burnout or gave workers time to reflect and reconsider their choices, much has changed since 2020. Whether they are fed up with the old or energised to start something new, the result is the same. They’re ready to move on.
It’s the economy, stupid
That’s not to say we lived in some kind of capitalist utopia before March 2020. Indeed since 2013, wages in Australia haven’t meaningfully grown across industries, placing increasing pressure on workers over the last decade to either demand or find their own pay rises.
Yet the record economic stimulus unleashed during the pandemic is changing that dynamic. Almost $300 billion in government spending helped expand the economy while JobKeeper and JobSeeker payments have kept households either in work or able to live without it.
Such was the level of support during the pandemic that Australians are, on average, actually better off now than they were before it – to the point where we are collectively sitting on $260 billion in savings right now.
Meanwhile job openings are 45% higher now than they were pre-pandemic and unemployment has plummeted to its lowest level since 2008. Before that you’d have to go back to the 1970s to find anything comparable. Simply put, Australian workers are in hot demand at the same time they are in short supply.
This is an environment in which, for the first time in recent memory, workers have genuine bargaining power in their current role as well as when negotiating for their next one. As the recovery remains uneven, there’s certainly an incentive to jump from one industry to another with mid-career professionals currently the most likely to switch careers entirely.
But whether it’s asking for a raise, finding a new job or taking time out altogether, this period has largely been a coup for employees.
Don’t let guilt boss you around
Rather than celebrating or exploiting this new power dynamic, however, many feel uneasy at the thought of demanding more, let alone quitting their job.
Economically speaking, this makes no sense. Resignations aren’t a sign of fickleness. Workers who can freely pursue their interests and abilities in a more productive way are part of a healthy and efficient economy.
‘The Great Reshuffle’ can be seen as much as a consequence of an economy that wasn’t previously functioning well as the emergence of meaningful choice for a workforce that has long been without it.
Yet despite these sound economic and personal rationales, there remains a stigma attached to separating from our workplaces and going our own way. The idea of quitting can conjure up feelings of guilt, failure and even betrayal despite what we may stand to gain from it.
This is perhaps inevitable. Our jobs absorb eight or more hours a day – more time than most people spend with their loved ones. Whether or not we grumble about them, they are so embedded in our culture and language that we talk about our ‘work lives’ as if they were interchangeable with our ‘real lives’.
Then there is a certain dependence associated with work. Beyond simply a paycheck, a profession creates a sense of identity and purpose. Consider the refrain ‘I am a doctor/a hairdresser/a butcher’. We are our occupation, or, more specifically, we are our current job. Significantly, this desire for the personal value of work has only increased during the pandemic.
In combination, these ties can bind. The responsibility of a role can naturally and subconsciously manifest as an unreasonable obligation to stay in one, no matter how uncomfortable, ill-suited or even toxic it may be.
All of these factors help to stoke a sense of loyalty that is impossible to ignore. The fact that our motivations for leaving are all our own, whether to pursue a raise, a promotion or some other desire, only amplifies this further as we inevitably place our own interests above those of our employer and colleagues.
As a consequence, a resignation can feel an awful lot like infidelity. Despite our acceptance into the tribe, it is ultimately our decision, and ours alone, to leave it behind.
Bite the bullet
Resignation, however, remains a valuable right and a vital avenue of self-empowerment and self-determination.
An autonomous individual has an obligation to themselves to pursue the opportunities that interest and suit them and to find work that is both fulfilling and sustainable, or to exit employment that is harmful or boring.
There is also nothing shameful about periods of unemployment should we demand or desire some time out of the workforce. There is fortunately a growing appreciation of our wellbeing as people beyond our status as workers.
Whereas once gaps in resumes may have been viewed as red flags for prospective employers, there is a deeper understanding of the challenges behind them, whether related to family obligations, mental and emotional health or the pursuit of study or other interests.
There are of course different ways to leave work.
How to quit ethically
First, reflect on what is driving your decision. Is it a boss that micromanages, substandard pay and conditions, an unfair workload or a lack of opportunities?
If it is a single issue in isolation, consider seriously whether there are any possible remedies. Sometimes a frank discussion with an employer or manager can drastically improve a situation but first they need to know what is wrong. Businesses, especially at the moment, are motivated to retain staff and often may simply be unaware of what they can do better.
If you’re certain that your employment has become untenable, then you can be comforted by the fact that there is no other solution and feel justified in your decision to depart.
To counter any ill feelings of guilt that may arise, we need to interrogate their source. Generally, guilt is brought on by the knowledge that an action has harmed or will harm someone else, or is otherwise immoral. In the context of resigning, it’s helpful to zoom out and consider the real-world ramifications.
This analysis should both appreciate the real benefits of leaving and recognise the often minor costs. For example, by changing roles you may be in a better position to find or accept fulfilling work, or a job that allows you the flexibility you need to lead a more contented life.
By leaving, your manager may have to recruit someone else to do your job. This may inconvenience them for a few hours but will the business collapse as a result? It’s highly unlikely. In fact, they may well find someone more fitting for the role. Resignations simply aren’t a zero-sum game.
Nor does your decision represent a moral transgression. We know that resignations are a natural feature of any workplace. Feelings to the contrary can be mitigated by instead focusing on resigning appropriately.
Again it’s helpful to articulate your reasons to yourself before sharing them with a manager. Plan out how you will do it rather than letting yourself crack under pressure. Practice how you might break the news to your workplace. Schedule a private meeting, talk through why you’re leaving respectfully and end on good terms.
If you’re worried about offending your boss, don’t be. It’s unhelpful and unnecessary to lie or deceive them in an attempt to mitigate guilt. Instead keep your head high. By voicing your concerns you may help improve the workplace for future staff on your way out.
Ultimately, if you’re ready to go then resigning is in everyone’s best interests. If your job isn’t working out for you, quit feeling conflicted and throw in the towel.

BY Jack Derwin
Jack is a Sydney-based writer and journalist, specialising in business and economics. His reporting has appeared in the Sydney Morning Herald, the Australian Financial Review, Business Insider and the Asahi Shimbun among others.
Big Thinker: Jean-Paul Sartre

Jean-Paul Sartre (1905–1980) is one of the best known philosophers of the 20th century, and one of few who became a household name. But he wasn’t only a philosopher – he was also a provocative novelist, playwright and political activist.
Sartre was born in Paris in 1905, and lived in France throughout his entire life. He was conscripted during the war, but was spared the front line due to his exotropia, a condition that caused his right eye to wander. Instead, he served as a meteorologist, but was captured by German forces as they invaded France in 1940. He spent several months in a prisoner of war camp, making the most of the time by writing, and then returned to occupied Paris, where he remained throughout the war.
Before, during and after the war, he and his lifelong partner, the philosopher and novelist Simone de Beauvoir, were frequent patrons of the coffee houses around Saint-Germain-des-Prés in Paris. There, they and other leading thinkers of the time, like Albert Camus and Maurice Merleau-Ponty, cemented the cliché of bohemian thinkers smoking cigarettes and debating the nature of existence, freedom and oppression.
Sartre started writing his most popular philosophical work, Being and Nothingness, while still in captivity during the war, and published it in 1943. In it, he elaborated on one of his core themes: phenomenology, the study of experience and consciousness.
Learning from experience
Many philosophers who came before Sartre were sceptical about our ability to get to the truth about reality. Philosophers from Plato through to René Descartes and Immanuel Kant believed that appearances were deceiving, and what we experience of the world might not truly reflect the world as it really is. For this reason, these thinkers tended to dismiss our experience as being unreliable, and thus fairly uninteresting.
But Sartre disagreed. He built on the work of the German phenomenologist Edmund Husserl to focus attention on experience itself. He argued that there was something “true” about our experience that is worthy of examination – something that tells us about how we interact with the world, how we find meaning and how we relate to other people.
The other branch of Sartre’s philosophy was existentialism, which looks at what it means to be beings that exist in the way we do. He said that we exist in two somewhat contradictory states at the same time.
First, we exist as objects in the world, just as any other object, like a tree or chair. He calls this our “facticity” – simply, the sum total of the facts about us.
The second way is as subjects. As conscious beings, we have the freedom and power to change what we are – to go beyond our facticity and become something else. He calls this our “transcendence,” as we’re capable of transcending our facticity.
However, these two states of being don’t sit easily with one another. It’s hard to think of ourselves as both objects and subjects at the same time, and when we do, it can be an unsettling experience. This experience lies at the heart of a central scene in Sartre’s most famous novel, Nausea (1938).
Freedom and responsibility
But Sartre thought we could escape the nausea of existence. We do this by acknowledging our status as objects, but also embracing our freedom and working to transcend what we are by pursuing “projects.”
Sartre thought this was essential to making our lives meaningful because he believed there was no almighty creator that could tell us how we ought to live our lives. Rather, it’s up to us to decide how we should live, and who we should be.
“Man is nothing else but what he makes of himself.”
This does place a tremendous burden on us, though. Sartre famously admitted that we’re “condemned to be free.” He wrote that “man” was “condemned, because he did not create himself, yet is nevertheless at liberty, and from the moment that he is thrown into this world he is responsible for everything he does.”
This radical freedom also means we are responsible for our own behaviour, and ethics to Sartre amounted to behaving in a way that didn’t oppress the ability of others to express their freedom.
Later in life, Sartre became a vocal political activist, particularly railing against the structural forces that limited our freedom, such as capitalism, colonialism and racism. He embraced many of Marx’s ideas and promoted communism for a while, but eventually became disillusioned and distanced himself from the movement.
He continued to reinforce the power and the freedom that we all have, particularly encouraging the oppressed to fight for their freedom.
By the end of his life in 1980, he was a household name not only for his insightful and witty novels and plays, but also for his existentialist phenomenology, which is not just an abstract philosophy, but a philosophy built for living.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
Ethics Explainer: Beauty

Research shows that physical appearance can affect everything from the grades of students to the sentencing of convicted criminals – are looks and morality somehow related?
Ancient philosophers spoke of beauty as a supreme value, akin to goodness and truth. The word itself alluded to far more than aesthetic appeal, implying nobility and honour – its counterpart, ugliness, made all the more shameful in comparison.
From the writings of Heraclitus to Plato, beautiful things were argued to be vital links between finite humans and the infinite divine. Indeed, across various cultures and epochs, beauty was praised as a virtue in and of itself; to be beautiful was to be good and to be good was to be beautiful.
When people first began to ask, ‘what makes something (or someone) beautiful?’, they came up with some weird ideas – think Pythagorean triangles and golden ratios as opposed to pretty colours and chiselled abs. Such aesthetic ideals of order and harmony contrasted with the chaos of the time and are present throughout art history.
Leonardo da Vinci, Vitruvian Man, c.1490
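For the curious, the golden ratio mentioned above has a precise mathematical definition: two lengths are in the golden ratio when the whole relates to the larger part exactly as the larger part relates to the smaller.

```latex
\frac{a+b}{a} = \frac{a}{b} = \varphi
\quad\Longrightarrow\quad
\varphi^{2} = \varphi + 1
\quad\Longrightarrow\quad
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618
```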
These days, a more artificial understanding of beauty as a mere observable quality shared by supermodels and idyllic sunsets reigns supreme.
This is because the rise of modern science necessitated a reappraisal of many important philosophical concepts. Beauty lost relevance as a supreme value of moral significance in a time when empirical knowledge and reason triumphed over religion and emotion.
Yet, as the emergence of a unique branch of philosophy, aesthetics, revealed, many still wondered what made something beautiful to look at – even if, in the modern sense, beauty is only skin deep.
Beauty: in the eye of the beholder?
In the ancient and medieval era, it was widely understood that certain things were beautiful not because of how they were perceived, but rather because of an independent quality that appealed universally and was unequivocally good. According to thinkers such as Aristotle and Thomas Aquinas, this was determined by forces beyond human control and understanding.
Over time, this idea of beauty as entirely objective became demonstrably flawed. After all, if this truly were the case, then controversy wouldn’t exist over whether things are beautiful or not. For instance, to some, the Mona Lisa is a truly wonderful piece of art – to others, evidence that Da Vinci urgently needed an eye check.

Consequently, definitions of beauty that accounted for these differences in opinion began to gain credence. David Hume famously quipped that beauty “exists merely in the mind which contemplates”. To him and many others, the enjoyable experience associated with the consumption of beautiful things was derived from personal taste, making the concept inherently subjective.
This idea of beauty as a fundamentally pleasurable emotional response is perhaps the closest thing we have to a consensus among philosophers with otherwise divergent understandings of the concept.
Returning to the debate at hand: if beauty is not at least somewhat universal, then why do hundreds of thousands of people every year visit art galleries and cosmetic surgeons in pursuit of it? How can advertising companies sell us products on the premise that they will make us more beautiful if everyone has a different idea of what that looks like? Neither subjectivist nor objectivist accounts of the concept seem to adequately explain reality.
According to philosophers such as Immanuel Kant and Francis Hutcheson, the answer must lie somewhere in the middle. Essentially, they argue that a mind that can distance itself from its own individual beliefs can also recognise whether something is beautiful in a general, objective sense. Hume suggests that this seemingly universal standard of beauty arises when the tastes of multiple, credible experts align. And yet, whether or not this so-called beautiful thing evokes feelings of pleasure is ultimately contingent upon the subjective interpretation of the viewer themselves.
Looking good vs being good
If this seemingly endless debate has only reinforced your belief that beauty is a trivial concern, then you are not alone! During modernity and postmodernity, philosophers largely abandoned the concept in pursuit of more pressing matters – read: nuclear bombs and existential dread. Artists also expressed their disdain for beauty, perceived as a largely inaccessible relic of tired ways of thinking, through an expression of the anti-aesthetic.

Nevertheless, we should not dismiss the important role beauty plays in our day-to-day life. Whilst its association with morality has long been out of vogue among philosophers, this is not true of broader society. Psychological studies continually observe a ‘halo effect’ around beautiful people and things: we interpret them in a more favourable light, and attractive people are paid higher wages and receive better loan terms than their less attractive peers.
Social media makes it easy to feel that we are not good enough, particularly when it comes to looks. Perhaps not coincidentally, we are, on average, increasing our relative spending on cosmetics, clothing, and other beauty-related goods and services.
Turning to philosophy may help us avoid getting caught in a hamster wheel of constant comparison. From a classical perspective, the best way to achieve beauty is to be a good person. Or maybe you side with the subjectivists, who tell us that being beautiful is meaningless anyway. Irrespective, beauty is complicated, ever-important, and wonderful – so long as we do not let it unfairly cloud our judgements.

BY The Ethics Centre
The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
Why morality must evolve

If you read the news or spend any time on social media, then you’d be forgiven for thinking that there’s a lack of morality in the world today.
There is certainly no shortage of serious social and moral problems in the world. People are discriminated against just because of the colour of their skin. Many women don’t feel safe in their own home or workplace. Over 450 million children around the world lack access to clean water. There are whole industries that cause untold suffering to animals. New technologies like artificial intelligence are being used to create autonomous weapons that might slip from our control. And people receive death threats simply for expressing themselves online.
It’s easy to think that if only morality featured more heavily in people’s thinking, then the world would be a better place. But I’m not convinced. This might sound strange coming from a moral philosopher, but I have come to believe that the problem we face isn’t a lack of morality, it’s that there’s often too much. Specifically, too much moral certainty.
The most dangerous people in the world are not those with weak moral views – they are those with unwavering moral convictions. They are the ones who see the world in black and white, as if they are in a war between good and evil. They are the ones who are willing to kill or die to bring about their vision of utopia.
That said, I’m not quite ready to give up on morality yet. It sits at the heart of ethics and guides how we decide on what is good and bad. It’s still central to how we live our lives. But it also has a dark side, particularly in its power to inspire rigid black and white thinking. And it’s not just the extremists who think this way. We are all susceptible to it.
To show you what I mean, let me ask you what will hopefully be an easy question:
Is it wrong to murder someone, just because you don’t like the look of their face?
I’m hoping you will say it is wrong, and I’m going to agree with you, but when we look at what we mean when we respond this way, it can help us understand how we think about right and wrong.
When we say that something like this is wrong, we’re usually not just stating a personal preference, like “I simply prefer not to murder people, but I don’t mind if you do so”. Typically, we’re saying that murder for such petty reasons is wrong for everyone, always.
Statements like these seem to be different from expressions of subjective opinion, like whether I prefer chocolate to strawberry ice cream. It seems like there’s something objective about the fact that it’s wrong to murder someone because you don’t like the look of their face. And if someone suggests that it’s just a subjective opinion – that allowing murder is a matter of personal taste – then we’re inclined to say that they’re just plain wrong. Should they defend their position, we might even be tempted to say they’re not only mistaken about some basic moral truth, but that they’re somehow morally corrupt because they cannot appreciate that truth.
Murdering someone because you don’t like the look of their face is just wrong. It’s black and white.
This view might be intuitively appealing, and it might be emotionally satisfying to feel as if we have moral truth on our side, but it has two fatal flaws. First, morality is not black and white, as I’ll explain below. Second, it stifles our ability to engage with views other than our own, which we are bound to do in a large and diverse world.
So instead of solutions, we get more conflict: left versus right, science versus anti-vaxxers, a right to life versus a right to choose, free speech versus cancel culture. The list goes on.
Now, more than ever, we need to get away from this black and white thinking so we can engage with a complex moral landscape, and be flexible enough to adapt our moral views to solve the very real problems we face today.
The thing is, it’s not easy to change the way we think about morality. It turns out that it’s in our very nature to think about it in black and white terms.
As a philosopher, I’ve been fascinated by the story of where morality came from, and how we evolved from being a relatively anti-social species of ape a few million years ago to being the massively social species we are today.
Evolution plays a leading role in this story. It didn’t just shape our bodies, like giving us opposable thumbs and an upright stance. It also shaped our minds: it made us smarter, it gave us language, and it gave us psychological and social tools to help us live and cooperate together relatively peacefully. We evolved a greater capacity to feel empathy for others, to want to punish wrongdoers, and to create and follow behavioural rules that are set by our community. In short: we evolved to be moral creatures, and this is what has enabled our species to work together and spread to every corner of the globe.
But evolution often takes shortcuts. It often only makes things ‘good enough’ rather than optimising them. I mentioned we evolved an upright stance, but even that was not without cost. Just ask anyone over the age of 40 years about their knees or lower backs.
Evolution’s ‘good enough’ solution for how to make us be nice to each other tens of thousands of years ago was to appropriate the way we evolved to perceive the world. For example, when you look at a ripe strawberry, what do you see? I’m guessing that for most of you – namely if you are not colour blind – you see it as being red. And when you bite into it, what do you taste? I’m guessing that you experience it as being sweet.
However, this is just a trick that our mind plays on us. There really is no ‘redness’ or ‘sweetness’ built into the strawberry. A strawberry is just a bunch of chemicals arranged in a particular way. It is our eyes and our taste buds that interpret these chemicals as ‘red’ or ‘sweet’. And it is our minds that trick us into believing these are properties of the strawberry rather than something that came from us.
The Scottish philosopher David Hume called this “projectivism”, because we project our subjective experience onto the world, mistaking it for being an objective feature of the world.
We do this in all sorts of contexts, not just with morality. This can help explain why we sometimes mistake our subjective opinions for being objective facts. Consider debates you may have had around the merits of a particular artist or musician, or that vexed question of whether pineapple belongs on pizza. It can feel like someone who hates your favourite musician is failing to appreciate some inherently good property of their music. But, at the end of the day, we will probably acknowledge that our music tastes are subjective, and it’s us who are projecting the property of “awesomeness” onto the sounds of our favourite song.
It’s not that different with morality. As the American psychologist Joshua Greene puts it: “we see the world in moral colours”. We absorb the moral values of our community when we are young, and we internalise them to the point where we see the world through their lens.
As with colours, we project our sense of right and wrong onto the world so that it looks like it was there all along, and this makes it difficult for us to imagine that other people might see the world differently.
In studying the story of human physical, psychological and cultural evolution, I learnt something else. While this is how evolution shaped our minds to see right and wrong, it’s not how morality has actually developed. Even though we’re wired to see our particular version of morality as being built into the world, the moral rules that we live by are really a human invention. They’re not black and white, but come in many different shades of grey.
You can think of these rules as being a cultural tool kit that sits on top of our evolved social nature. These tools are something that our ancestors created to help them live and thrive together peacefully. They helped to solve many of the inevitable problems that emerge from living alongside one another, like how to stop bullies from exploiting the weak, or how to distribute food and other resources so everyone gets a fair share.
But, crucially, different societies had different problems to solve. Some societies were small and roamed across resource-starved areas surrounded by friendly bands. Their problems were very different from those of a settled society defending its farmlands from hostile raiders. And their problems differed even more from those of a massive post-industrial state with a highly diverse population. Each had their own set of challenges to solve, and each came up with different solutions to suit their circumstances.
Those solutions also changed and evolved as their societies did. As social, environmental, technological and economic circumstances changed, societies faced new problems, such as rising inequality, conflict between diverse cultural groups or how to prevent industry from damaging the environment. So they had to come up with new solutions.
For an example of moral evolution, consider how attitudes towards punishing wrongdoing have varied among different societies and changed over time. Let’s start with a small-scale hunter-gatherer society, like that of the !Kung, dwelling in the Kalahari desert a little over a century ago.
If one member of the band started pushing others around, perhaps turning to violence to get their way, there were no police or law courts to keep them in line. Instead, it was left to individuals to keep their own justice. That’s why if a bully murdered a family member, it was not only permitted, but it was a moral duty for the family to kill the murderer. Revenge – and the threat thereof – was an important and effective tool in the !Kung moral toolkit.
Revenge played a similar role in many moral systems around the world and throughout history. Countless tales demonstrate its ubiquity in great works like the Iliad, the Mahabharata and Beowulf. In the Old Testament, God gives Moses the famous line permitting his people to take an eye for an eye and a tooth for a tooth.
But as societies changed, as they expanded, as people started interacting with more strangers, it turned out that revenge caused more problems than it solved. While it could be managed and contained in small-scale societies like that of the !Kung, in larger societies it could lead to feuds, where extended family groups might get stuck in a cycle of counter-retaliation for generations, all started by a single regrettable event.
As societies changed, they created new moral tools to solve the new problems they faced, and they often discarded tools that no longer worked. That’s why the New Testament advises people to reject revenge and “turn the other cheek” instead.
Modern societies have the resources and institutions to outsource punishment to a specialised class of individuals in the form of police and law courts. When these institutions are well run and can be trusted, they have proven to be a highly effective means of keeping the peace, to the point that revenge and vigilante justice are now frowned upon.
This is moral evolution. This is how different societies have adapted to new ways of living, solving the new social problems that emerge as their societies and circumstances change.
(I must stress that this does not make !Kung morality inferior or less evolved than that of other societies. Just as all creatures alive today are equally evolved, so too are all extant moral systems. My point is not that there is a linear progression from small-scale to large-scale societies, from primitive to civilised; it’s that any moral system needs to fit the circumstances that the society finds itself in, and to change as those circumstances change.)
But there’s a catch: moral evolution has typically moved painfully slowly, not least because our innate tendency towards black and white thinking has stifled moral innovation.
This wasn’t such a problem 10,000 years ago, when living conditions would have remained relatively static for generations. In this case, there was less pressure to evolve and adapt the moral toolkit. But the world today is not like this. It is changing faster than ever before, and so we are forced to adapt faster than our minds might be comfortable with.
This means pushing back on our black and white tendencies and looking at morality through a new lens. Instead of seeing it as something that is fixed, we can look at it as a toolkit that we adapt to the problems at hand.
It also means acknowledging that many of the social and moral problems we face today have no single perfect solution. Many come in shades of grey, like deciding if free speech should give people a right to offend, or to what extent we should tolerate intolerance, or under what circumstances we should allow people to end their own lives. There is almost certainly no single set of moral rules that will solve these problems in every circumstance without also causing undesirable consequences.
On the other hand, we should also acknowledge that there are many social and moral problems that have more than one good solution. Consider one of the questions that sits at the centre of ethics: what constitutes a good life? There are likely going to be many good answers to that question. This remains the case even if some answers come into conflict with others, such as one perspective stressing individual freedom while another stresses greater community and interpersonal obligations.
This full-spectrum evolutionary perspective on morality can also help explain why there is such a tremendous diversity of conflicting moral viewpoints in the world today. For a start, many cultures are still wielding tools that were made to solve the problems of a different time, carrying them into today’s world – tools, for example, that prioritise in-group loyalty at the price of suspicion towards outsiders. Some conservative cultures are reluctant to give these tools up, even when they are obsolete.
Other tools were never very good at their job, or they were co-opted by an elite for their own benefit to the detriment of others, such as tools that subjugated women or disenfranchised certain minorities.
Other tools are responses to different conceptions of the good life. Some represent the trade-off that is built into many moral problems. And there is constant production of new and experimental tools that have yet to be tested. Some will prove to work well and may be kept, while others will fall short, or cause more problems than they solve, and will be dropped.
One thing is clear: the world we are living in today is unlike anything our distant ancestors faced. It is larger, more complex, more diverse and more interconnected than it has ever been, and we are hearing from voices that once were silenced. The world is changing faster than ever before.
This might be taxing for our slow-moving black and white minds – and we should forgive ourselves and others for being human – but we must adapt our moral views to the world of today, and not rely on the old solutions of yesterday.
This calls for each of us to be mindful of how we think about morality, our black and white tendencies, and whether the views we inherited from our forebears are the best ones to solve the serious problems we face today. It also means we must rethink morality as being a human invention, a toolkit that can be adapted as the world changes, with many new problems and many tools that can solve them.
What matters today is not clinging to the moral views we were raised with, but looking at each problem, listening to each other, and working together to find the best solution. What we need now is genuine moral evolution.

BY Dr Tim Dean
Dr Tim Dean is Philosopher in Residence at The Ethics Centre and author of How We Became Human: And Why We Need to Change.
Why businesses need to have difficult conversations
Opinion + Analysis, Business + Leadership
BY The Ethics Alliance 8 FEB 2022
Let’s step back to examine the ethical foundation of conversation as seen by Socrates, who made dialogue his method of inquiry.
This process involved asking and answering questions with the intent of sharing views in pursuit of a common goal and, ultimately, a common good. This created a mutually accepted direction, preventing any one person from pursuing a purely self-interested good.
Socrates felt these conversations allowed each party to hold the other to account if what was presented was untrue. This back and forth of questioning and answering draws on qualities of friendship – sharing, and allowing equal and fair time to respond – all while acknowledging the value and importance of each other’s points of view.
But what if you’re not friends? Or what if you feel your view should be prioritised? Conversations become essential when there is an urgency to resolve disagreements and there is a complex array of relationships with stakeholders who could be harmed or could benefit from the decisions that need to be made.
We are seeing this play out in all parts of society in attempts to address climate change.
There was a time when mining was crucial to Australia’s colonial development, with steamships, railways and steam mills playing a vital role in building the economy. Today we recognise that this past behaviour has contributed, and continues to contribute, to the climate crisis.
Different organisations will be at different maturity stages in their path to a net zero future. There will be unintended consequences and changes in trajectories. To trust this process so that we can feel confident in addressing the trade-offs, we need to better understand it and be comfortable having these conversations.
What is missing that is preventing discussions from being focused on the ‘common good’?
Currently there is a stalemate at the Resolution Copper mine in Arizona between two Australian mining companies, BHP and Rio Tinto, and the Native American activist group Apache Stronghold, which claims the land is sacred and shouldn’t be mined. The copper is needed to produce renewable energy technology and electric vehicles. Eleven federally recognised tribes are part of the formal consultation process, and they hold differing views on the project. At this stage conversation has failed, and the parties are waiting on the law to determine the next moves.
In 2023, a wind farm at Kaban, 49 km south of Mt Emerald in Queensland, is due to start operations, powering 96,000 homes. The project area includes 129 hectares of threatened species habitat and is home to greater gliders and the brood frog. The work done to date has come under heavy criticism from local conservation groups, who see destroying rich biodiversity in the name of greater wind energy as a contradiction in terms.
The issue is polarising for the general community, though, with some people seeing the project as a positive opportunity for employment and making the most out of a situation they feel they have no control over.
Others, like traditional owner Joyce Bean, broke down and cried after seeing the destruction caused to the land, saying “we didn’t have a say in it”. Traditional owners don’t have veto rights over projects on lands where they claim native title.
The acknowledgment of people’s dignity and worth is a principal element of any conversation. Has a lack of power or recognition resulted in the local community being left out of this one?
A TED Countdown Summit in Edinburgh provided a platform for a difficult and at times emotional conversation on the trajectory of decarbonisation. The guests included Royal Dutch Shell’s global CEO, Ben van Beurden; Chris James, founder of the activist fund Engine No. 1; and Lauren MacDonald, a Scottish climate activist. The session was formatted so that each speaker presented their position on decarbonisation and the other two could ask a question of them, which would then be answered – much like the Socratic method of enquiry.
The conversation broke down when MacDonald passionately put a statement and question to van Beurden but could not bring herself to stay on stage and hear the answer from the person she felt was responsible for a crisis in Scotland. The organisation had lost legitimacy in her eyes. The result was no conversation.
Greg Joiner, VP of Renewables and Energy Solutions at Shell, recognises how difficult it is to turn people’s views when trying to explain Shell’s corporate strategy to reach net zero by 2050. He explains that playing a significant role in transitioning the energy sector “is not linear, it’s dynamic and iterative and there are unintended consequences”. He says models often need to be redesigned, creating discontinuities that are challenging for everyone and leave an organisation open to accusations of greenwashing.
Does this suggest the best way forward is to not have conversations but rather do the work, meet the targets and let the results speak for themselves?
What is the benefit of conversation? As much as the exchange of ideas and thoughts is important, the ability to listen may be more so. In conversations we learn about people’s values, principles and emotional investment. We also gain insight into how others interpret and evaluate our ideas. All of this helps us develop empathy and find new ways to approach a complex situation.
If we want to embed ethics into our business and decision-making, we need to continuously encourage conversations, monitor changing circumstances, and be willing to change our minds.
Trying to change people’s views, or omitting them from the discussion altogether, hinders or prevents conversation. As humans we are fallible, and opening ourselves up to different perspectives, even those we disagree with, creates new possibilities. If we want to protect ourselves, the animals, and the biodiverse planet we live on, we need to have conversations.
A Socratic discussion shows that how we communicate is often more important than what we say. We don’t need to be friends, but if we start conversations from a place of curiosity and respect, sharing and providing equal opportunity for reciprocity, then the conversation can remain mutually supportive, and we can successfully pursue a ‘common good’.

This article was originally written for The Ethics Alliance, The Ethics Centre’s corporate membership program.

BY The Ethics Alliance
The Ethics Alliance is a community of organisations sharing insights and learning together, to find a better way of doing business. The Alliance is an initiative of The Ethics Centre.
Why we should be teaching our kids to protest
Opinion + Analysis, Politics + Human Rights, Relationships
BY Dr Luke Zaphir 3 FEB 2022
When the Prime Minister says classrooms shouldn’t be political and students should stay in school, that’s an implicit argument about what kind of citizens he thinks we should have.
It’s not unreasonable. The type of citizen who has not gone out to protest will have certain desirable habits and dispositions: hard-working, diligent, focused. However, the question of what it means to be a citizen, and how to become one, is complicated – and not one that any single person has the truth about.
Let’s go back to basics though. What’s the point of education? It’s to prepare children for life. Many would claim it’s to get children ready for work, but if that were the case we would put them in training facilities rather than schools. Our education systems have many tasks – to make children work-ready, to be sure, but also to develop their personhood, to allow them to engage in society, to help them flourish. Every part of the curriculum, from the General Capabilities of critical and creative thinking to discipline-specific areas like technologies, is designed to provide young people with the skills, knowledge and dispositions necessary for being 21st-century citizens.
What many don’t realise is that learning what it means to be a citizen isn’t localised to the curriculum. Interactions with parents, teachers, with each other, with news and social media – all of these contribute to the definition of a citizen.
Every time a politician says that children should be seen and not heard, that’s an indication of the type of citizen they want.
Most politicians don’t want kids out protesting, after all – not only is it disruptive to whatever is being taught at school that day, it looks really bad on the news for them. Protests are bad news for politicians in general, and if children are involved, there’s no good way to spin it.
But we do want children to learn how to protest. We want them to be able to see corruption, to have discussions and heated debates, and to embrace complexity. In a democracy, everyone should have the ability to say their piece and be heard. We’ve already recognised this: persuasion has been a major part of education for years.
However, when we talk about this, we need to recognise that we aren’t just talking about skills or knowledge. This isn’t putting together a pithy response or clever tweet. It’s about being capable of contributing to public discourse, and for that, we need children to hold certain intellectual virtues and values.
An intellectual virtue refers to the way we approach inquiry. An intellectually virtuous citizen approaches problems and perspectives with open-mindedness, curiosity, honesty and resilience; they want to know more, they seek the truth, and they are unafraid of where it might lead.
If virtues are about the willingness to engage in inquiry, intellectual values are the cognitive tools needed to do so effectively. It’s essential in conversation to be able to speak with coherence; an argument that doesn’t meaningfully connect ideas is confusing at best and manipulative at worst. If we’re not able to share our thoughts and present them clearly, we’re just shouting at each other.
Values and virtues are difficult to teach, though. You can’t hold up flash cards, point to “fallibility” and say, “okay, now remember that you can always be wrong”. Cognitive biases stand between us and accepting a virtue like resilience to our own fallibility – it feels bad to be wrong. The way we learn these habits of mind is through practice, through acceptance and agreement. Teachers, parents and other adults can develop these habits explicitly through classroom activities, and implicitly by modelling the behaviours themselves.
If a student can share their ideas without fear of being shut down by authority, they’ll develop greater clarity and coherence. They’ll be more open-minded about the ideas of others knowing they don’t have to defensively guard their beliefs.
To the original question of what it means to be a good citizen in a global context: we want our children to develop into conscientious adults. A good citizen is able to communicate their ideas and perspectives, and listen to the same from others. A good citizen can discern policy from platitude, and dilemmas from demagoguery.
But it takes practice and time. It takes new challenges and new contexts and new ideas to train these habits. We don’t have to teach students the logistics of organising a revolution or how to get elected.
And if we’re not teaching them when or why they should protest, we’re not teaching them to be good citizens at all.

BY Dr Luke Zaphir
Luke is a researcher for the University of Queensland's Critical Thinking Project. He completed a PhD in philosophy in 2017, writing about non-electoral alternatives to democracy. Democracy without elections is a difficult goal to achieve though, requiring a much greater level of education from citizens and more deliberate forms of engagement. Thus he's also a practicing high school teacher in Queensland, where he teaches critical thinking and philosophy.
Is the right to die about rights or consequences?
Opinion + Analysis, Politics + Human Rights
BY Joshua Pearl 31 JAN 2022
Voluntary assisted dying is as much an ethical debate as any other in public policy – something made clear by the recent impassioned speeches on the floor of the New South Wales parliament and the accompanying public debate.
The various arguments for and against voluntary assisted dying are motivated by a range of different reasons. For some, it’s personal experience and time spent with dying loved ones. For others, it’s views on human dignity – reasons invoked both for and against. Many arguments are motivated instead by a deep belief in the existence of God, and what this means for how we treat others.
While there may be no “best way” to consider and assess the case for and against voluntary assisted dying, it seems to me that a useful approach is to focus on two central ethical issues:
- The level of rights an individual has over their body
- Whether legalised voluntary assisted dying makes a society worse off due to the negative consequences that may ensue, such as increased self-harm in the broader population or individuals being pressured to prematurely end their lives.
The rights case for voluntary assisted dying largely centres on an individual’s self-ownership rights – what they are permitted to do with their bodies. These rights do not rest on any consequential benefits that might arise, such as a more cohesive society or a happier public, but are natural rights, without further need of justification.
If people have self-ownership rights in a strong sense – for example, they can do as they please with their bodies, free from any external government interference – then it seems the proposed NSW voluntary assisted dying bill does not go far enough.
Patients must have a condition that is advanced, progressive and will cause death within six months (or 12 months for a neurodegenerative disease). This timeframe appears unfair because it means patients in greater pain who are further from the relief of death will suffer more, for longer. If anything, a person experiencing a higher level of pain has a greater need for voluntary assisted dying. If we regard incurable psychological suffering as an affliction comparable with physical suffering (a possibility it seems we do take seriously as a society), failing to provide relief for this cohort seems, if not unfair, then at least inconsistent.
However, our existing social norms suggest self-ownership rights are not inviolable. We are not allowed to sell our organs, even if to save another person’s life (we can donate them). We are not allowed to sell ourselves into slavery, even if this could raise vital funds our families or children need to lead better lives.
When we impose risks that are great enough, either to ourselves or others, we are restricted from doing things as mundane as leaving home after dark, as was the case in parts of south-west Sydney during the COVID-19 lockdowns. Sometimes these restrictions are publicly justified on the basis of being good for the individual (paternalistic reasons), and other times on the basis of being good for society (what economists might call “externality” reasons).
With regard to consequences, from an individual’s perspective, it seems reasonable to suggest that a condition can be so severe, so acute, that life is not worth living. Our existing medical practices align with this view. It is permissible in NSW for doctors to withdraw life-saving treatment at the request of patients and doctors are under no obligation to provide medical treatment when treatment is considered futile. While there is a difference between killing and letting die, this practice suggests it is possible for the benefits of death to outweigh the costs of life.
Therefore, from a consequentialist perspective, it seems to me the primary issue of concern is whether voluntary assisted dying makes society worse. One argument made is that voluntary assisted dying can increase suicides in the general population and pressure vulnerable people to prematurely end their lives. It seems reasonable to accept this is possible and reasonable to accept that we cannot know with full certainty how legalising voluntary assisted dying will impact NSW.
However, these consequential considerations can be informed by looking at the experience of other jurisdictions. Voluntary assisted dying has been legal in the US states of Oregon, Washington and Vermont since 1997, 2009, and 2013 respectively; legal in the Netherlands and Belgium since 2002; and legal in Switzerland since 1918.
Given that both sides of the debate claim the evidence favours their position, a useful exercise would be for the government to commission an independent, non-partisan group of experts to analyse the existing data and academic literature, and publicly report back. This would help inform members of the NSW Legislative Council when they consider amendments and vote on voluntary assisted dying legislation in 2022.
The non-partisan group would analyse how laws have been introduced in other jurisdictions and how these laws have changed over time. The group would ideally look for evidence of whether voluntary assisted dying has increased general population suicides or self-harm, or pressured individuals to prematurely end their lives. They might even consider whether voluntary assisted dying legalisation has numbed or lessened the community spirit, or negatively (or positively) changed the way a community treats and thinks about death.
An independent non-partisan report would provide a greater understanding of the trade-off between individual rights and social consequences. Should it be the case there is near zero risk of negative social consequences, then the case for voluntary assisted dying would seem unassailable. But if there is a risk of increased general population self-harm (for example), the decision then centres on a threshold issue of what level of risk and what level of social impact we are willing to accept.
We might be willing to accept one additional event of self-harm or we might be willing to accept one hundred. We might even be willing to accept that an individual’s rights over their bodies are so strong that patients in agonising pain have a right to voluntary assisted dying, regardless of the social consequences that might result. That is, we might conclude that individual rights trump social consequences.
Should the voluntary assisted dying bill become law, the NSW experience may differ from that of other jurisdictions for a range of policy or cultural reasons. That is why it seems an oversight that the proposed bill does not require more in the way of future data collection and reviews (something that could be undertaken by the proposed Voluntary Assisted Dying Board). Such an amendment would aid future debates (should the bill be passed by the NSW Legislative Council) on whether voluntary assisted dying should be expanded, amended, or even repealed.
It seems to me that the proposed voluntary assisted dying bill permits too little where rights are concerned, by setting too strict a timeframe on nearness to death, and permits too much where consequences are concerned, by not adequately taking into account the potential for negative social consequences.
The proposed bill and the ethical debate would be improved by considering ways to treat individuals consistently and fairly, by the government commissioning an independent non-partisan group to publicly report back before the NSW Legislative Council votes on the voluntary assisted dying bill, and by amending the proposed bill to require greater data collection and mandate future reviews.
These measures would enhance our understanding of individual rights and social consequences and enable our politicians to vote with their conscience alongside the relevant facts.