What we owe to our pets

Australians love having pets around. But what are our obligations to our animal companions?

Much has been said about the benefits of living with pets – from companionship to improved mental and physical health. Two thirds of Australian households are home to at least one pet, and a whopping 85% of owners say that pets have a positive impact on their lives.

In other words, they’re good for us. But are we good for them? And what are our obligations in these relationships so many of us find ourselves in? 

Legally, we own animals. We purchase them, register them as our pets, and pay for their routine and emergency health care where required. In this way, pets are similar to property. But that doesn’t mean we can treat them like ordinary property.  

Unlike us, animals are not typical “moral agents”. They cannot make and enact ethical decisions, like choosing between different foods based on their carbon footprint, or making an informed choice about breeding if there is a risk of passing a heritable disorder to offspring. 

Pets fall into the category of “moral patients” – beings who matter morally, but who are subjects of our moral consideration and actions. They need us to make good decisions for them. This is especially important because we take them out of their natural environment and bring them into our homes, environments designed specifically to cater to the needs of our species. We require animals to live within these spaces and adapt to our lifestyles.  

Sometimes we forget how challenging this can be. For example, animals may be nocturnal, but we expect them to adapt to the hours we keep. Indoor dogs rely on us to provide outside access for toileting and exercise. Cats rely on us to change their litter tray and provide suitable surfaces to scratch and climb. They cannot predict when our routines – and hence their routines – will change.  

As a veterinarian, sometimes my client (a human) will tell me that my patient (an animal) urinated outside of the designated toileting area “to spite them” when they were late home from work. It is not uncommon for humans to attribute hostile intentions to animals who display behaviours we find problematic, when these behaviours may be manifestations of an animal’s frustration when their needs are not met (for example, a lack of choice, or the failure to provide multiple litter trays for cats).

As a result, shelters are full of animals who have, for one reason or another, not been able to adapt to human behaviour. This is not the fault of the animal; it’s often due to our unrealistic expectations.  

In Australia, we tend to think of “responsible pet ownership” as ensuring that pets are well behaved (that is, they do not cause a nuisance to others), that they are microchipped, desexed and registered, and that we provide appropriate food, water, shelter and veterinary care as required. 

Yet many people would characterise their role in terms that go beyond “ownership”. Indeed, the use of the term “pet owner” has decreased, in favour of terms like “guardian” or “caretaker” or even “pet parent”. This reflects a view that animals are more like family members than property.

Animal welfare science and legislation increasingly reflect the position that we need to provide animals with lives worth living, or good lives. That is, not just minimisation of poor welfare and associated negative mental experiences or feelings, but striving to maximise positive mental experiences.

The early view of good animal welfare was informed by the “Five Freedoms” model: freedom from pain, injury and disease; from fear and distress; from discomfort; from hunger and thirst; and freedom to express normal behaviour.

But according to the more recent “Five Domains” model, physical and functional factors (nutrition, physical environment, health, and behavioural interactions with other animals, humans and the environment) influence mental experiences (the fifth domain).

This model stresses that what matters most to animals is their subjective experiences. Therefore, just acting to minimise negative mental experiences (such as confining a dog to stop them running on the road and experiencing a painful injury), doesn’t necessarily lead to positive welfare.  

To ensure that an animal has a good life, they should be able to have more positive than negative experiences. With the example of the confined dog, that means providing, where possible, opportunities for positive interaction with animals, people and the environment. Walks. Sniffing. Chasing a ball (if that’s their game). Spending time with them. Of course, not all animals are social species. We need to pay attention to the preferences of animals, and, where possible, offer them choice, so they can experience positive welfare. 

That doesn’t mean we shouldn’t make unpopular choices for them. While sentient animals – including humans – like to exercise choice or agency, we need to ensure that those choices are likely to lead to sustained positive welfare rather than poor welfare. So while an indoor cat may have a ravenous appetite, indulging the cat to the point where they become obese is likely to lead to poor welfare in the long term.  

We need to remind ourselves that, as humans living in environments designed almost exclusively for humans, we expect a lot of our animal companions. In addition to meeting their basic needs, we should aim to provide them with good lives. This requires doing a bit of research, observing animals carefully, and learning from them. Our lives are richer for it. 


What are we afraid of? Horror movies and our collective fears

In 1968, the dead walked the Earth. George A. Romero’s Night of the Living Dead, shot on a shoestring budget on black-and-white film stock, was the movie that popularised zombies as we now know them.

In the film, a ragtag group of survivors must band together, as corpses rise from the grave in a desperate search for human flesh.

Night of the Living Dead was a smash hit. It made more than 250 times its budget, becoming a cultural talking point, and scarring an entire generation. Taken at face value, its success is surprising. The film is unremittingly bleak, even for a horror movie – aside from the blood and guts, it also ends with one of cinema’s great downers, as the film’s hero, Ben (Duane Jones), is murdered not by the zombies, but by an armed posse dispatched to kill the zombies, who mistake him for a reanimated corpse.

Given Ben is African-American, the film’s final image – his dead body set alight, burning to ash on top of a pile of zombie corpses – had powerful, painful symbolism for audiences in the late sixties. It was a time of huge social upheaval; of struggle, pain, and change. America was still in the midst of the Vietnam war, and the Civil Rights movement was continuing to gather steam.

Indeed, that collective social suffering is precisely what explains Night of the Living Dead’s great success. Surrounded by images of real-life suffering beamed into their homes via their TV sets, American audiences flocked to see a film that gave a voice to the feeling of the times – its ambient horror. To borrow a quote from horror director Wes Craven, Night of the Living Dead didn’t “create” fear – it released it.

Naming the unnamable

Horror’s persistent success – slashers almost always make money, and are a go-to for indie directors precisely because they’re almost guaranteed to make a financial return – speaks to the genre’s ability to address the unnamed. Each generation has its trauma, and each generation has a horror film that speaks to that trauma.

Alfred Hitchcock’s Psycho, released in 1960, was a reaction to a spate of serial murders – chiefly the killings committed by Ed Gein. Rosemary’s Baby, released the same year as Night of the Living Dead, bottled the collective anxiety that came from modern urbanisation: set in a sprawling apartment block, it posits that you can never truly know your neighbours. Carrie channelled second-wave feminism, and the oppression conducted by both men and the church; Clive Barker’s Hellraiser explored queerness and kink, and the violent reaction to it; Hostel and Saw, the forerunners of the “torture porn” movement, were birthed by images of violence released out of Abu Ghraib; and Get Out compressed years of racist oppression into one shocking tale.

In this way, horror films tap into what philosopher and psychologist Carl Jung referred to as the “collective unconscious”. Jung believed that we all have both a personal unconscious, and an unconscious shared by all. This deeper, shared unconscious is populated by “instincts”, “primal fears”, and “archetypes.” These archetypes directly tie us back to our ancestors – they are as old as human beings are. They are core figures, images, and stories, that, given they are located deep inside us, are frequently “strange” and disturbing. Jung gave these archetypes names, from “eternity” to “the profane”.

Horror draws on these archetypes. In fact, given how common archetypes are – we cannot help, Jung thought, coming back to them – sometimes, these archetypes can be described as “clichés.” Just as the collective unconscious keeps returning to specific images and figures, so too does horror have its tropes: the masked murderer; the demon child; the haunted house; the animal that appears as a harbinger of doom.

This is why horror movies tap into us in a deep, powerful way. They apply images and stories to things that live deep inside us; that are innate. Indeed, the philosopher William James thought that fear was key to the unconscious – he argued that if you dropped an Eskimo into the African savannah, they would know to be afraid of a lion, even if they had never encountered one before. Whether we like it or not, terror lives somewhere deep inside us; an unavoidable vein of anxiety, running through the human condition. By engaging with that, horror movies hit us on a primal level – and bring us closer to each other.

The release of fear

This releasing quality of horror films has been shown to have personal therapeutic benefits. Horror movies allow us to confront our fears directly; to expose ourselves to them. But more than that, they allow us to do so in a controlled, calm setting.

When we’re terrified by a horror movie, we’re not terrified in the same way we would be if we were literally in the situation outlined. Our brains know, on some level, that we are safe; protected; sitting at home, or in a movie theatre. A recent study showed that horror movies can provide “stress release, managing real-life fears, and gradually reducing the impact of stressors through exposure to danger and fear in a controlled environment.”

But horror movies don’t just help individuals expose themselves to fear. Their hugely beneficial ethical dimension is their ability to help societies to understand their fears.

We cannot solve or change something that we cannot name. Without proper language – without images – we cannot hope to confront the sick or ailing parts of our society. Horror movies funnel collective anxieties into precise ones. And with their image as a kind of vocabulary, we can start talking about these issues, and in that way, move to change them.

Indeed, stories of horror and evil have historically served as a way of forming and reinforcing moral judgement. We tell ourselves stories of what taboos look like when they are broken in order to remind ourselves of the importance of those taboos. Using these narratives, we can come together in order to decide what is permissible and what is not – the shock of anti-social, violent behaviour moving us towards a place of steadfast moral judgement. The horror movie only scares us because it shows us what we shouldn’t do, what we don’t want, what we, collectively, will work to avoid.

This quality of sharing is important. The success of a film like Get Out brought people together. It cut across class, gender and racial divides. In cinemas across the world, people sat in the dark, and discovered that they were afraid of the same thing as the person in the seat next to them. And it is from that base of solidarity – provided by cinematic nightmares, no less – that special things can happen.

 

Image: Get Out, Universal Pictures


Should I have children? Here’s what the philosophers say

Parenthood has traditionally been considered the normal outcome of growing up. A side effect of reaching maturity.

Across Europe and the US, only 10%-20% of adults remain childless or (more positively) child free. In some cases, this is accidental. People wait for an ideal time that never arrives – and then it is too late.

Anti-natalism is the philosophical view that it is ethically wrong to bring anyone else into being. The justifications draw upon worries about suffering and choice. And it’s not an exclusively modern attitude. The ancient Greek playwright Sophocles, writing at the end of the 5th century BC, tells us that it is “best of all” not to have been born, because life contains far more suffering than good.

Contemporary anti-natalist arguments add a nuance by focusing on an asymmetry between pain and its absence. The absence of all pain is good, but this good can only be achieved through not bringing anyone into existence at all. The presence of pain is bad, and it is always part of life. So why forego the certainty of a good thing for the certainty of many bad things?

Philosopher David Benatar presents the best-known contemporary argument along these lines in his 2006 Sophocles-inspired book, Better Never to Have Been:

“It is curious that while good people go to great lengths to spare their children from suffering, few of them seem to notice that the one (and only) guaranteed way to prevent all the suffering of their children is not to bring those children into existence in the first place.”

Other versions of anti-natalism focus instead upon the fact that nobody chooses to exist. Existence is thrust upon us. Inconveniently, this suggests that the vast number of teenagers who tell their parents: “I didn’t ask to be born”, may in fact be budding philosophers.

The problem with anti-natalism

Anti-natalist arguments can sound like something from Oscar Wilde, rather than practical guidance for life. This makes them difficult to challenge. However, one popular response is to say that a refutation is unnecessary.

Having children is part of the canvas on which ethics is painted, rather than part of the picture. The ethical picture can change, but the canvas is not optional. It holds our way of human life in place. Individuals can choose to procreate or not to procreate, but rejecting parenthood entirely has no place within a good society.

Critics find this response evasive. Many of us also wonder why humans are drawn toward parenthood and what we might be missing if we choose not to procreate. Schopenhauer answers the “why” question in The World as Will and Representation (1818) by claiming that biology overrides sound judgement and tricks us into producing the next generation.

But is it really a trick? After all, there do seem to be some important good things bound into parenthood.

The philosophical benefits of parenthood

Plato’s Lysis struggles to identify these good aspects of parental care. His central character, Socrates, gives some young men a hard time when they cannot identify what benefit they bring to their parents. What they fail to recognise is that the goods of parenthood involve seeing a child grow and mature – and finding meaning in the process.

This recognition of the role played by care for others is also present in many religious traditions – particularly in the ways that they address life’s sufferings.

Buddhists celebrate the rebirth of enlightened humans into a world of suffering in the hope that they may help other beings.

Confucians highlight that, across generations, children can care for parents and grandparents.

In both cases, care binds a good society together, in ways that sustain social hope. In the contemporary social economy, the younger generation of taxpayers supports older generations as well as childcare.

While non-existence would avoid many bad things, new humans carry the possibility of making the future better than the past. Losing such hope for the future would be terrible all round.

Focusing instead on the lack of choice exercised by a nonexistent, unborn human generates interesting philosophical puzzles, but bypasses what runs philosophically deep – such as the wonder that the female body is where the creation of all humans happens, the place where every pianist, pickpocket and anti-natalist starts out.

The female power to give birth also counteracts complex forms of sociocultural control and sets in motion practical problems: who will become family members of a new human? Will relatives and our wider society care in the right ways?

Women must make the final decision about giving or not giving birth. At the same time, to give life a sense of meaning, we share our lives with friends, life partners, and children. Disappointment, joy and loss are part of the package. Even Schopenhauer, who spurned parental love, felt the need to lavish care upon his beloved dog.

We can love and find meaning without having children. But parenthood is one of our more entrenched ways of trying to live meaningful lives. For some, there may be no other workable path. Personal histories can lead any of us to feel incomplete without children. More disturbingly, they can lead people to feel like failures if they remain childless. And that, surely, is a bad thing.

In a rare Sydney appearance, philosopher David Benatar presents The Case for Not Having Children at The Festival of Dangerous Ideas on Sunday 25 August, 2024. Tickets on sale now.

This article was originally published in The Conversation.


We are witnessing just how fragile liberal democracy is – it’s up to us to strengthen its foundations

Unless we want to slip into a world where force and coercion drive politics, then we all must invest in reinforcing the institutions that keep liberal democracy working.

For most of human history, politics was — and in many parts of the world today, still is — a wilderness. Political victories were won at the point of a spear or the barrel of a gun, rather than at the ballot box. When there was a dispute about whose interests ought to take priority, how to distribute resources, or even who gets to have a say in how people live their lives, it was those who wielded the greatest force who typically got to choose. And, unsurprisingly, they often chose in favour of themselves.

This makes liberal democracy an historical anomaly. Within liberal democracy, we fully expect there to be disagreements about how best to run society — not least because the “liberal” part allows each person to define their own vision of a good life rather than having one imposed on us by others. But in liberal democracy, these disagreements are not won through coercive force but through persuasion, or as the German liberal philosopher Jürgen Habermas puts it, “the unforced force of the better argument”.

But the wall of civility surrounding the garden of liberal democracy is not impregnable. Coercive force lingers just outside, threatening to burst in and bypass the messy process of persuasion — as it did on 13 July 2024, when a would-be assassin attempted to silence former President Donald Trump with an assault rifle rather than words.

The good news is that the near universal expressions of shock and condemnation at the attempted assassination show that most people in the United States, and in other liberal democracies, still prefer to resolve their disputes within the norms of the liberal democratic garden rather than returning to the wilderness. Still, this episode serves as a potent reminder of just how fragile and important the norms that preserve liberal democracy are, and that the institutions that enable peaceful political debate require constant reinforcement.

The grand bargain

The problem is that, in recent years, liberal democracy has been failing itself. One of the “unforced forces” that keeps the system operating is a tacit buy-in on the part of every individual within the system. We need to believe that the system is working for us, that it’s fair, and that our voice matters, otherwise we have little incentive to work within it. If we feel powerless, disenfranchised or embattled, or feel our livelihood or safety is threatened, we have more reason to step outside the walls of civility.

But liberal democracies, such as the United States — and to a lesser but nonetheless significant extent, Australia — have often failed to give us good reason to believe the system is working.

For many of us, the “grand bargain” of liberal democratic society is breaking down. This bargain states that if we work hard, get a good education, and play by the rules, then we’ll have every opportunity to live a fulfilled and fulfilling life. But that’s just not the reality for a large proportion of the population. Many liberal democracies are facing an omni-crisis — combining housing, inflation, wealth inequality, climate change, mental health, loneliness, childcare, aging, the erosion of traditional jobs, the fragmentation of communities, as well as racism, sexism and other forms of systemic discrimination, and more besides.

If people feel powerless or disenfranchised, they’ll reject the constraints the system places on them to engage in peaceful debate.

Or if they feel that the stakes are so high that they can’t afford to let the other side win, then they’ll reject the ballot box and turn to other means to achieve their political ends.

How to restore faith in liberal democracy

Of course, those in power must not neglect their responsibility to protect and strengthen the system, and restore the grand bargain, even if they might forego short-term political or financial advantage in doing so.

But it’s up to us to hold them to account. We should demand more of our elected representatives – and we must demand more of ourselves as well. We must lower the temperature of popular discourse: tune out the hyperbole, avoid partisan media, carefully curate our social media, don’t engage with those promoting conspiracy theories, and refuse to feed the trolls. Listen and ask questions of people who have different opinions. Advance our views with conviction, but also with humility. Acknowledge that there is probably not one right answer to many of the challenges we face, and that compromise is inevitable.

Just as important is building the social foundations that enable civil but spirited discourse. That means investing in our local communities to build “social capital” — the trust, respect, and norms of reciprocity that keep society functioning. Talking to your neighbour over the fence, taking your dog to the park, participating in a class at your local community centre, volunteering for a local organisation, joining an activist group — these are the grassroots of the liberal democratic garden, and they’re just as important as the larger institutions. They reinforce our common humanity; our neighbour might vote differently to us, but we still share the same human concerns.

As American political commentator Yuval Levin has stated, those we disagree with aren’t just going to disappear if we coerce them into silence or bully our way into power. Their views will persist, and if we give them no voice, they will be motivated to find other ways to be heard. We must practise tolerance and compromise, because the alternative is a return to the wilderness.

Catch Democracy is Not Worth Dying For at The Festival of Dangerous Ideas, Sunday 25 August at Carriageworks, Sydney. Tickets on sale now.

This article was originally published by ABC Religion and Ethics.


Trump and tyrannicide: Can political violence ever be justified?

It is remarkable that Donald Trump, former US President and presumptive Republican nominee, is alive today.

He survived an assassination attempt at relatively close range, which killed one bystander and seriously injured two more. Trump himself was lightly wounded. The photograph of Trump bloodied and bellowing defiance as he is dragged from the stage by Secret Service agents has become the defining picture of this election — and perhaps the state of American democracy. It is a portrait of a demagogue who conjured violence and malice for nearly a decade in American politics only, like the sorcerer’s apprentice, to have these same forces turn against him. It is a portrait of a fracturing republic.

In the aftermath of the assassination attempt there has been universal condemnation of the attack. President Joe Biden addressed the nation declaring in no uncertain terms that there was no place for violence in American politics and that the attack was “sick”.

The condemnations seem platitudinous and empty, however — a slightly more refined form of the “thoughts and prayers” ritually offered in the wake of mass shootings. They also seem to run counter to reality.

Political violence is part of American culture. It birthed the republic in the Revolutionary War. The founding fathers all recognised that, under certain conditions, political violence was both just and necessary.

Many Americans still agree. Just last month, Robert Pape from the University of Chicago found that some 10 per cent of Americans support the use of force to stop Trump from regaining the White House, while 6.9 per cent of Americans would support the use of force to install him. That is some 44 million Americans. Simple attempts to wave political violence away are not sufficient to deal with this problem. There needs to be a serious discussion about when political violence is justifiable.

Is Trump a tyrant?

Blanket condemnations of political violence are frequently unconvincing, because they would condemn us to doing nothing in the face of evil. The real discussion is about when such things are permissible. I want to address one particular act of political violence: tyrannicide.

We can distinguish tyrannicide from assassination by saying the former is justifiable political killing and the latter is not. You might say there is no such thing. When I ask my students whether deliberate killing is justifiable, often most of them do not think it is. The intuition that deliberately taking another person’s life is wrong is deeply ingrained, and rightly so, but it does need to be critically examined.

Consider a test case. Reinhard Heydrich was a high-ranking officer in the SS, a chief lieutenant of Adolf Hitler, and one of the prime movers of the Holocaust. On 27 May 1942, Czech and Slovak partisans assassinated him with an improvised bomb. It is difficult to argue that the killing of a man deeply implicated in the coercive imposition of a racist totalitarian regime and industrialised murder of innocent persons is wrong.

There has been significant discourse around the threat Trump poses to American democracy. As president he showed little knowledge of or interest in the guardrails against his power, he relentlessly demonised his opponents, and he instigated a violent mob to prevent the peaceful transition of power. In the lead-up to the 2024 presidential election, little seems to have changed. The poisonous rhetoric continues: he threatens to jail his political opponents, and he has indicated his desire to reshape America into a more authoritarian and theocratic state through his ties to the Heritage Foundation’s Project 2025.

Here, then, is the crux of the problem: Trump seems like a figure bent on hollowing out republican institutions and accumulating arbitrary power in the office of the president, to wield as he pleases. He looks a lot like a Julius Caesar. But does this legitimise a modern Brutus?

The answer is no, but we need to know why.

“An enemy of all humanity”

The bar for tyrannicide must be high and clear. This is for two reasons. The first is simply the categorical value of human life: if murder is not wrong then what is? People like Heydrich forfeit their immunity from violence by committing terrible acts that shock the conscience of humanity. In the past, they would be described as hostis humani generis – an enemy of all humanity. This term was used to describe pirates and those who violated the basic terms of human social cooperation. They were outlaws — quite literally, beyond the protection of the law. People like Heydrich and his master were legitimate targets for tyrannicide because they committed crimes against humanity and in doing so made themselves a threat to all persons. To kill a Heydrich or a Hitler is akin to killing in self-defence or the defence of others. It is justifiable.

There might be some pushback. This bar requires crimes against humanity to be committed before an act of tyrannicide — but what if they could be prevented by removing criminals before they act? The problem with this stance is that it creates an almost impossible burden of judgement. Think of John Wilkes Booth. After shooting Abraham Lincoln, he shouted sic semper tyrannis, “thus always to tyrants”, the cry attributed to the assassins of Julius Caesar. Yet, the judgement of history is that Lincoln, far from being a tyrant, was one of America’s greatest leaders and his murder one of its most profound tragedies. There is no such ambiguity when it comes to the likes of Heydrich or Hitler.

The further reason for having a high bar for tyrannicide is the consequences. One of the reasons Jeremy Bentham was critical of the right to resist oppression was that it left too much to the judgement of individuals and could lead to anarchy if anyone who felt oppressed could turn a knife on the judge who condemns them or the politician who advocates a policy with which they disagree. Those who would use this sort of violence run a terrible risk of breaking democratic systems.

Democracy is almost alchemical in its operation. It can transmute violent dissent into peaceful disagreement. The enemy becomes the rival. How? Because of the “losers’ consent”. The defeated side in a democracy does not resort to violence as they recognise that they may win the next contest. The legitimacy of the system survives electoral defeat. Political violence and assassinations erode this fundamental norm; they signify a withdrawal of consent.

Under these conditions, violence can produce the very outcome it seeks to prevent: a total collapse and a spiral into authoritarianism.

Again, think of those who killed Julius Caesar. They acted to preserve the Roman Republic, but instead they sparked a brutal civil war that eventually produced the Roman Empire.

Political violence of this sort can only be justifiable under the worst conditions. We may find Donald Trump repugnant, but he has not committed crimes against humanity. He is not hostis humani generis. This does not mean, however, that Trump is beyond reproach. His outriders — including his vice-presidential running mate, J.D. Vance — have claimed that Democratic rhetoric about the risk Trump poses to American democracy is responsible for the assassination attempt. They are attempting to elevate him to the paradoxical state of a living martyr who cannot be criticised.

Setting aside the fact that we still have no notion of what motivated the would-be assassin, this evolution in the Trump cult of personality must be resisted for the sake of democracy. The sad fact is that no one has done more to erode the norms of democracy than the man who was almost killed on that stage in rural Pennsylvania. This cannot be ignored. He must be held to account — but with ballots, not bullets.

 

Is democracy worth dying for? Find out more at The Festival of Dangerous Ideas, 24-25 August at Carriageworks, Sydney.
David Blunt also chairs The Pitchforks are Coming at The Festival of Dangerous Ideas. Tickets on sale now.

 

This article was originally published by ABC Religion and Ethics.


How The Festival of Dangerous Ideas helps us have difficult conversations

We have probably never been good at having difficult conversations about controversial topics. But today these conversations are more important than ever.

In a world of explosive complexity, escalating diversity and accelerating change, there’s a risk that our old ideas have already become stale, and we desperately need to replace them with new ideas built to handle the monumental challenges we face today.

That’s why the Festival of Dangerous Ideas (FODI) exists. FODI’s purpose is to challenge conventional wisdom, because sometimes our ideas and assumptions need to be tested, even if to just make sure they still hold up. And should we find them wanting, FODI offers new ideas to improve upon and replace the old.

But challenging what we assume to be true, questioning what is “known,” or raising radical new ideas can be confronting, even offensive, to many ears. It can lead to difficult conversations, even if they’re precisely the ones we need to have.

The problem is that today’s world is particularly hostile towards having difficult and challenging conversations. Some cling to their old ideas and refuse to move with a changing world. Others vigorously defend their particular new ideas, preventing them from being tested to ensure they’re the best ones for our times. Both camps act like they have their backs against the wall, and the stakes feel so high that they’re unwilling to allow conversation that might challenge their beliefs – or their identity.

This is why FODI is dangerous.

FODI’s remit puts it at odds with a growing culture of outrage, cancellation and self-censoring. Yet FODI is as committed as ever to its mission, even if that means facing the fact that some people will react badly to having their beliefs challenged or recoil from radical new ideas.

To achieve its mission, FODI has a commitment to principle and transparency. One of FODI’s core principles is that not every dangerous idea is worth promoting. FODI only offers ideas that are backed by robust reasoning and evidence, and which are delivered in good faith – meaning they’re spoken with authenticity, integrity and with an intention to make the world a better place.

And while some ideas offered at FODI might be sensational, FODI rejects sensationalism. It doesn’t promote a speaker simply because they will cause a stir, but acknowledges that some challenging ideas will grab attention or trigger a backlash.

Another core principle is that FODI doesn’t preach. There is no particular belief, political ideology or ethical viewpoint that FODI seeks to promote – except its meta-commitment to good ideas supported by reason, evidence and good faith.

FODI chooses its speakers carefully. First, they must be qualified. Some are experts in their field, with qualifications and publications that demonstrate their mastery of the subject. Others have a more personal kind of expertise, with direct lived experience of the subject matter they’re invited to share. Second, they must be speaking in good faith rather than seeking to feed platitudes to the public in order to elevate their status.

But there are red lines that FODI will not cross. FODI is committed to respecting the inherent dignity of all people, meaning there might be some subjects that are inherently degrading or dehumanising to a particular population, or which are impossible to appropriately address in a public forum. FODI will not platform such ideas.

However, FODI is tolerant of minor indiscretions – because it acknowledges each of us is flawed – but only to the degree that speakers are willing to own their actions and take appropriate measures to rectify any harm they have caused.

The final hurdle that FODI speakers must clear is perhaps the most difficult to judge. Occasionally there will be a dangerous idea that has merit, and the speaker has the qualifications to express it, but which is so far outside of acceptable discourse that there is little or no chance that the idea will be received by the audience in good faith. Some ideas simply require more space or charity on behalf of the audience than FODI is able to provide, so they are not the kinds of ideas that will be platformed at the festival.

FODI is about dangerous ideas. It’s in the name. FODI’s line-up won’t please everybody. It never does. Nor does it aim to. But FODI is a space for curious minds to challenge ideas that need to be tested, and offer new ideas that the world desperately needs.

The Festival of Dangerous Ideas returns to Carriageworks, Sydney from 24-25 August 2024. Tickets on sale now at festivalofdangerousideas.com.


The ethical price of political solidarity

Which takes ethical precedence: keeping a promise to remain loyal to your group or sticking to your principles?

This is a question that has faced first-term Western Australian senator Fatima Payman repeatedly over the past few weeks. Ultimately, she chose her principles, crossing the floor to vote for a Greens motion calling for the recognition of Palestinian statehood, and now she’s paying the price for breaking her pledge of caucus solidarity with the Australian Labor Party (ALP).

Meanwhile, Prime Minister Anthony Albanese faced a different dilemma. Even though his party’s National Platform ostensibly supported Payman’s principled position, the fact remains that she broke caucus solidarity by crossing the floor, an act that he was obliged by party rules to punish with a one-week suspension from caucus.

But then Payman doubled down on her principled stance by stating on national television that she would be willing to cross the floor again should another vote arise on Palestinian statehood. Again, Albanese felt his hand was forced, and he issued her with an indefinite suspension.

Payman’s suspension has proven divisive, with many Labor members and supporters expressing outrage that she would violate her sacred pledge of caucus solidarity and draw media attention away from key Labor initiatives, such as the revised stage 3 tax cuts.  

Others, such as the Australia Palestine Advocacy Network, have seen events through a different lens, saying it was “disturbed by the suggestion that towing the Labor Party’s line is more important than standing up for the rights and lives of Palestinians as they are slaughtered in Gaza.” 

Ultimately, both Payman and Albanese were placed in an ethical dilemma, with competing obligations pulling them in different directions. However, the episode raises deeper questions about whether politicians should be allowed to vote on matters of conscience or principle, and whether it is justified for a political party to punish them for doing so. 

Ethical tension

When we vote for a politician based on their stated values and principles, we might expect them to stand by those values and vote accordingly when they’re in parliament. However, that’s often not the case.

Members of parliament are typically bound to vote for – and publicly support – their party’s agreed position, even if that position contradicts their own. In fact, since its inception in 1891, Labor has maintained a strict policy of caucus solidarity, with members pledging to uphold it as sacrosanct.  

This means Labor members are free to argue forcefully for their views inside caucus meetings, but once the caucus has decided on a position, they are bound to vote for it. This has sometimes put Labor members in a difficult position, such as when Labor Senator Penny Wong was obliged to vote against same-sex marriage in 2008, despite her deep commitment to marriage equality. 

In keeping with its traditional liberal roots, and the notion that it’s a “broad church”, the Liberal Party takes a relatively softer stance, ostensibly allowing members to cross the floor on matters of principle. However, even though the Liberal Party doesn’t require its members to make a pledge of caucus solidarity, they are still strongly encouraged to vote with the party, and often suffer punishment if they go against the party line. 

The exception is when the leadership of a political party announces a “free” or “conscience” vote. These are rare, and are typically related to bills with a strong ethical element, such as abortion, euthanasia or embryonic stem cell research. In these cases, members are released from their obligations to vote with the party. However, over the last few decades the ALP has been less likely to allow a conscience vote than the Liberal Party, and the motion on Palestinian statehood that Payman crossed the floor on was not declared a conscience vote by Labor.

Caucus solidarity is often justified in terms of the party being more stable – and more effective in governing – if it works as a collective rather than a group of individuals with diverse views. If every member of parliament were free to vote on any issue, then parties would have to work harder to curry favour with each representative, possibly watering down bills in order to get them on board. That could result in weaker legislation and prevent a party from genuinely being able to enact the policy platform that it presented to the electorate. It would also make it harder to vote for a party platform, knowing that any member might vote against it at any time. 

Still, party solidarity could be seen as a political solution that involves an ethical compromise, not only preventing politicians from voting according to their deeply held views – which might be the very views that got them elected – but also requiring them to act inauthentically by publicly supporting a view they don’t personally hold.

Ultimately, political leaders – Anthony Albanese included – have a choice to make when faced with the dilemma of a sitting member crossing the floor: which is more important, solidarity or principle? And voters have a choice of whether to vote for a candidate, knowing that they might be prevented from voting in accordance with their values and principles.


Let’s cure the cause of society’s ills, and not just treat the symptoms

The lead-up to 30 June is critically important for organisations, like The Ethics Centre, that depend on tax-deductible donations in order to make ends meet. Every organisation has a compelling claim to make for why their cause is deserving of support.

Some have ‘natural constituencies’ defined by the specific nature of the issue to be addressed. For example, medical research institutes will focus on those touched by the various diseases they work to treat and to cure. Other charities have a clear focus on solving one clearly defined problem, like homelessness, and can point to the specific impact that every dollar of charitable giving can have.

And then there are organisations like The Ethics Centre – where the work ranges across the whole span of human affairs with an impact that may take decades, if ever, to register. Consider this … How do you capture the significance of the countless bad things that did not happen as a direct result of good ethics leading to good decision making?

Yet, ours is a story that needs to be told – again and again. It’s not just a matter of ‘rattling the tin’ in the hope of securing a few more donations. Please believe me when I say we need them – as never before. However, there is also a real and growing sense in which the need for ‘ethics’ grows greater with each passing day.

Poverty, inequality and disadvantage are not ‘natural’ aspects of the human condition. Nor is it inevitable that the earth should be ravaged by our species. Rather, the root cause of social and environmental degradation lies in the character and quality of human choice – the most powerful force on this planet. Knowing this to be so, philosophers have spent millennia working to understand the underlying structure of human choice and how it might be harnessed for good rather than ill. Ethics is the branch of practical philosophy that addresses this question.

Australia stands on the cusp of a brilliant future. It has everything any society could need: vast natural resources, abundant clean energy and an unrivalled repository of wisdom held in trust by the world’s oldest continuous culture supplemented by a richly diverse people drawn from every corner of the planet.

As such, Australia could become the most prosperous and just society the world has ever known. However, whether or not this future can be grasped depends not on our natural resources, our financial capital or our technical nous. The ultimate determinant lies in the public’s willingness to trust those who will lead the process of translating the vision into reality.

Thus, we see the effects of ethical failure not only in its most obvious symptoms. Its corrosive effects can also be seen in the loss of public trust in nearly all of our major public and private institutions. This loss of trust has come at precisely the time when it is most needed – when we should be able to rely on those institutions to help guide society as it navigates a landscape of increasing complexity.

Australia lacks a peak organisation with a mandate to address the major ethical questions of our age. Significant legal questions can be referred to the Australian Law Reform Commission. The Productivity Commission performs a similar function in relation to questions of economic significance. No such body exists to consider ethical questions – such as are encountered on a daily basis. For example: should we deploy lethal autonomous weapons systems? What are the implications of embracing modern manufacturing techniques that will make many existing jobs redundant? Should we be moving to tax consumption and/or the means of production rather than labour so as to preserve the capacity to offer a social ‘safety net’? Is access to a ‘home’ (as opposed to ‘shelter’) a basic human right? And so on …

The Ethics Centre has spent 35 years laying the foundations upon which to build a world-first, national Ethics Institute. But, as of today, that still remains a dream. Meanwhile, we have the work of the moment. It includes offering our Ethi-call service, the world’s only free national helpline for people with ethical issues. Its greatest impact is when it helps to prevent the kind of ‘moral injury’ that does so much harm to mental health. And then there are events like the Festival of Dangerous Ideas (FODI), one of the few places left where reasonable people can gather together and engage in the civilising art of ‘principled disagreement’. These are just some of the things we do in order to help bring ethics to the centre of everyday life.

I come back to one central point. If you’d like to address causes over symptoms then please consider supporting ethics. It’s simple: better ethics make for a better world. Even a few dollars can help that to be as true in reality as it is in principle.

 

With your support, The Ethics Centre can continue to be the leading, independent advocate for bringing ethics to the centre of everyday life in Australia. Click here to make a tax deductible donation today. 


Critical thinking in the digital age

It’s a typical Friday night; you’re at a friend’s place. A hush falls over the group as one of the boys pipes up: “Yeah nah but Andrew Tate has some good ideas too”.

Hopefully, you’ve never found yourself in this position, but you might know someone who has. Figures like Andrew Tate, known for his proud advocacy of misogyny among young men, aren’t uncommon on social media and their followings are microcosms of a much larger issue: a deficit of critical thinking in online education and spaces. 

Unfortunately, social media platforms aren’t adequately removing hate speech or radicalising figures, nor can we expect filters and rules to ever be enough. With this in mind, we need ways of educating (often young) viewers on how to decide what and who to trust. 

I recently attended a live recording of the Principle of Charity podcast at Sydney Writers’ Festival where this idea was raised with comparisons between non-fiction books and videos. Philosopher A.C. Grayling and art historian and content creator Mary McGillivray spoke at length about the various limitations of each medium. While both were reluctant to take polarising positions, eventually a sticking point did arise. 

Grayling argued that the relative accessibility and ease of creating social media content makes it more dangerous, given its ability to platform those with little-to-no credibility or expertise. This echoes an interesting evolutionary theory called “costly signalling”, which says the more someone invests in sending a message – i.e. the more “costly” it is for them – the more likely it is to be trustworthy. On the other hand, the less someone invests in sending a message – the “cheaper” it is – the more likely it is to be untrustworthy or noisy. This is one reason why handwritten letters tend to be more thoughtful than fleeting social media posts. McGillivray’s response was two-fold.  

Firstly, she noted that there are plenty of questionable published books out there by charlatans and the like – books that prey on insecurities to make sales, or simply peddle well-disguised misinformation. So, this isn’t a problem unique to social media. 

Secondly, and Grayling conceded this point, traditional avenues of information dissemination are often gatekept by markers of privilege – things like formal educational status or money. This is the flip side of costly signalling when applied to humans: only those who can “afford” to make costly communications will be heard and believed, so the wealthy and powerful will often dominate public discourse.

There are significant benefits, then, to having a medium that’s accessible to those who have never been afforded the circumstances to engage in these systems. There is still a trade-off: low-cost communication means potential for an oversaturation of inaccurate or useless information, but that is something we must contend with if we want to avoid restricting these things to the already-rich-and-powerful.  

McGillivray herself is in the middle of a PhD, not yet an expert in the eyes of academia, yet she has amassed a huge following for her historical art analysis and edutainment (educational entertainment). Using her MPhil in History of Art and Architecture, she is able to produce well-researched and engaging videos that educate an audience that might not otherwise have been introduced to the topic at all. 

Unfortunately, it’s especially easy for poor-quality video content to garner audiences too, because of its reliance on charisma above all else. We can be easily swayed by charisma as we passively consume it, because passive consumption leaves us more open to suggestion.

We can be prone to dismissing legitimate information and being overly charitable towards frauds based almost entirely on aesthetics.  

 

Whom should we listen to? 

Who qualifies as an expert, and when is being an expert relevant? 

To figure this out, we have to talk about the combination of trust and critical thinking. Regardless of whether we’re picking up a book or scrolling on our phones, we need to know whether we can trust what we’re consuming and who we’re consuming it from, or else we become yet another cog in the machinery of misinformation.

Unfortunately, there are many forces in the world that try to deceive us every day, from advertisers to influencers to corporations to politicians. To gain a good sense for what to trust, we need to know what to distrust by developing our critical thinking skills. This process isn’t simple, yet anyone can do it.  

The first step is acknowledging that we are sometimes gullible and generally easy to convince with rhetoric. Speaking quickly, using jargon, seeming confident and having high production quality are all things we intuitively associate with intelligence or expertise, and yet they can often act as a distraction from the incoherence of the information being presented to us. This isn’t our fault – they’re designed and used specifically to lower our intellectual guards – but it is something we can learn to recognise and counter.

On top of this, even when we are capable of being critical of what we consume, we’re often bombarded with so much information that doing so becomes almost impossible anyway. In these situations, we rely on heuristics and biases, to varying degrees of success. These mental shortcuts let us make faster decisions, but that speed often comes at the cost of accuracy.

 

Being discerning of sources 

One clear thing to look for is the source of the information.  

Check if the information comes from a generally trusted source, like a government website, a university, a peer-reviewed study or a reputable news outlet. While it’s dangerous to be overly sceptical, it is important to note that critical thinking should also extend to sources you trust. Just like anything else, governments, universities or other sources can be biased in certain ways, and being aware of those biases can help you understand the motivations behind different ways of presenting otherwise accurate information. 

For example, news with a repeated focus on, or omission of, a certain group of people can give viewers a false sense of significance. Without context, even accurate information can be used to misrepresent a wider picture.

There are many organisations dedicated to fact-checking and bias-checking news and politics, so using them alongside your broader consumption of information can give you a more comprehensive perspective on different issues. For fact-checking of global news trends, there is the International Fact Checking Network, or in Australia there is the Australian Associated Press Fact Check. These resources can help us identify common misinformation trends.

However, there are other ways to train ourselves to be aware of our own biases when we consume media and especially news. For example, Ground News is an organisation that collates news stories from thousands of sources and demonstrates how the same stories are presented differently depending on the general political slant of the publication. This is an effective way of learning about how even subtle differences in language can have a drastic impact on our interpretation of news. Using resources like this is a good way to develop media literacy by training yourself to recognise patterns in media coverage. 

 

Critical reflection 

Many people aren’t going to get their information straight from the source, though. Let’s say you have found an author or a creator that you like. Maybe your friend even told you about it. How do you know if you should trust them? 

Again, the first thing to check is where they’re getting their information from. If they’re presenting evidence, then check that that evidence is coming from a place that you know to be trustworthy. This isn’t a check that you need to do all the time, but putting this effort in at least at the beginning will give you a foundation for trusting this person in the future.

If they aren’t presenting empirical evidence, you can check their credentials and history. Do they have reputable qualifications in a relevant field? Do they have a relevant lived experience to speak from? Do they have a history of presenting accurate information? Is what they’re saying consistent with their own points and with what you know from trusted sources? 

While not every check is necessary every time, these are all ways of making sure that you critically reflect on the news and media you consume.

Other useful ways to engage critically with media involve challenging yourself to identify different aspects of it. The intended audience can tell you a lot about a publication, such as what its motivations or affiliations might be. You can identify this by looking at the language used and asking yourself how it reflects who the piece is trying to speak to. You can also find potential bias by imagining how the information might sound if it were written with another kind of audience in mind.

Another method of testing your understanding of a piece of news is to try to identify the intended message by summarising it in one or two sentences. Doing this is a good way to practise grasping the implications of lots of bits of information as a whole. It will also help you to identify underlying assumptions that might otherwise escape a viewer’s notice. Identifying these assumptions will help prevent you from being misled.

For example, there is a general implicit assumption that if something is in the news, then it is important. But you might not agree with that. Sometimes things are reported on or spoken about to cause them to seem important, so by questioning that foundational assumption we can begin to critique the content and motivations behind certain information.  

Identifying these assumptions can also help you determine how to process the information being presented to you and whether you need to scrutinise it further. Developing critical thinking skills to increase your media literacy is an important part of ethical consumption because it helps us navigate systems of power and slows the spread of harmful information and ideas. 

 

The Principle of Charity is a podcast that injects curiosity and generosity back into difficult conversations, bringing together two expert guests with opposing views on big social issues.


Self-presentation and the collapse of the front and back stage

The digital revolution has ushered us into a theatre where the stage extends into our living rooms, and the audience is always watching. Nowhere has this been more evident than in the context of work.  

In the pandemic-driven shift from office to home (and back again), the curtains between personal and professional lives have not just been pulled back but torn down. This has changed not only where we work, but how we work, and crucially, how we are seen by our colleagues. It also has radical implications for what it means to be ‘authentic’ in the workplace. 

Where the pre-2020 professional may have meticulously managed their work persona, the pandemic era has introduced authenticity by default, for better or worse. This collision of worlds invites us to question: In the blur between ‘onstage’ and ‘offstage’, which parts of ourselves do we bring to the virtual office table? 

Presenting our authentic selves

David Hume, a prominent Scottish philosopher of the 18th century, radically destabilised our notions of a consistent, singular self. In A Treatise of Human Nature, Hume argued that our perceptions are the only real things about us and that these perceptions are constantly changing.  

He famously stated, “When I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe anything but the perception.” 

In essence, Hume’s view of the self as a ‘bundle’ of perceptions challenges the idea of a singular, unchanging and enduring entity. Instead, our sense of self is more like a series of fluid, dynamic experiences that are related but distinct, with no unifying core or essence. This perspective has had a significant influence on later philosophical thought, particularly in discussions about personal identity and the nature of the self. 

Given that for Hume, the self is a collection of changing perceptions, it might follow that our self-presentation is inherently fluid and adaptable. This adaptability might imply a level of control, as different situations or contexts bring different perceptions to the forefront, thus altering our self-presentation.  

Rather than managing a stable, singular identity, self-presentation for Hume may be more about managing perceptions, both internal (how we perceive ourselves) and external (how others perceive us).  

Hume also acknowledges the influence of external factors like social context and relationships on our perceptions. This indicates that while individuals might have some control over their self-presentation, it is also shaped by external circumstances beyond their complete control. 

Self-presentation on the new office ‘stage’

In 1959, the influential sociologist Erving Goffman explained self-presentation as how we try to control how others see us. Drawing on a theatrical metaphor, Goffman noted the different roles we play in front of different audiences. He also highlighted the importance of ‘backstage’ spaces – private areas in which we are free to drop any act without spoiling the performance.  

Current workplace paradigms, however, often involve catching glimpses of life behind the scenes. We increasingly perform our work roles immersed in the geography of our non-work selves while trying to create some separation between the two. Self-presentation in these new liminal spaces is therefore less straightforward than it might first appear and, again, poses questions about authenticity.  

Pre-2020, jokes about pairing waist-up office wear with below-desk pyjamas were largely targeted at newsreaders. However, the pandemic-driven move from office to home sent many of us hurtling into colleagues’ worlds like never before.  

The abrupt pivot to online meetings flung open virtual doors into our colleagues’ sanctuaries – spaces that were once off-limits to our professional personas. The sanctity and intimacy of the home, traditionally a retreat from the public gaze, became a shared space, albeit digitally, and often reluctantly.  

The unexpected cameo of a pet, the soft murmur of children playing in the background, or the occasional appearance of a spouse served as reminders of the complex lives beyond the professional facades we maintain. These moments, while humanising, also signalled a tectonic shift in our understanding of authenticity. 

As we continue to navigate this uncharted territory, it’s clear that our pre-pandemic understanding of privacy needs re-evaluating. The distinctions between work and private life must be redefined to protect the sanctity of both.

Employers and employees alike must collaboratively develop new norms and etiquettes that respect personal boundaries while accommodating the realities of our interconnected digital lives.

Examining authenticity

Identity curation is a core part of self-presentation, and we do it constantly. It is this curation that radically undermines claims of self-authenticity. In many workplaces, the move online merely highlighted ambiguities or inconsistencies in thinking about authenticity that were always there. 

The language of authenticity is pervasive in the workplace; it features in mission statements, company values and even competency frameworks, yet it is rarely defined by organisations in any meaningful way. 

The idea of an authentic self underpins the relatively recent notion of ‘bringing your whole self to work’. We have witnessed a radical shift from earlier 20th-century conceptions of workplace conformity, in which strict adherence to a professional persona often required suppressing personal identities and emotions, to a workplace culture that rhetorically valorises individuals’ ‘lived experience’ and the expression of their unique perspectives and backgrounds.   

However, this still raises the question: what precisely constitutes an individual’s authentic self? And which modes of authenticity are valued over others?

Expecting employees to somehow just be authentic – a complex and debated concept in an era of multiple states of self-presentation – is therefore an unreasonable ask, whether offline, online or moving between the two.

Recent discussions on authenticity in the workplace highlight a complex landscape where the concept is both lauded for its potential to enhance individual well-being and critiqued for its unintended consequences. Hamilton and Almeida at the LSE Business Review point out that an authentic work environment can be beneficial to psychological health and performance outcomes, yet the narrow social norms of professionalism often limit who can genuinely express themselves without repercussions, leading to assimilation and code-switching. 

Hamilton and Almeida also found that authenticity can become a source of cognitive strain and alienation when individuals feel compelled to align with dominant group norms that conflict with their innate values and personal identities. This is particularly challenging for minority groups within organisations, as the pressure to conform can lead to psychological distress and inhibit genuine self-expression. 

Critiques also focus on the potential for a culture of authenticity to inadvertently perpetuate social inequalities, as those who cannot or choose not to conform to the dominant narrative of authenticity might self-segregate or face negative work-related consequences. Additionally, the pursuit of authenticity can stifle dissent and innovation, as uniformity of thought undermines the benefits of a diverse workforce. 

The call for authentic self-expression in the workplace must be balanced with a conscious effort to foster an inclusive environment where diverse voices are not just heard but valued. This requires leadership that understands and actively manages the dynamics of group identity, as well as a collective effort among employees to embrace and respect differences. 
