Is existentialism due for a comeback?

Today feels eerily like the age that spawned the philosophy of radical freedom in defiance of the absurdity of life. Perhaps it’s time for a revival.

Parenting during the Covid-19 pandemic involved many new and unwelcome challenges. Some were obvious, practical things, like having the whole family suddenly working and learning under one roof, and the disruptions caused by lockdowns, isolation, and being physically cut off from extended family and friends. 

But there were also what we might call the more existential challenges, the ones that engaged deeper questions of what to do in the face of radical uncertainty, absurdity and death. Words like “unprecedented” barely cover how shockingly the contingency of our social, economic and even physical lives was suddenly exposed. For me, one of the most confronting moments early in the pandemic was having my worried children ask me what was going to happen, and not being able to tell them. Feeling powerless and inadequate, all I could do was mumble something about it all being alright in the end, somehow.

I’m not sure how I did as a parent, but as a philosopher, this was a dismal failure on my part. After all, I’d been training for this moment since I was barely an adult myself. Like a surprising number of academic philosophers, I was sucked into philosophy via an undergraduate course on existentialism, and I’d been marinating in the ideas of Søren Kierkegaard in particular, but also figures like Jean-Paul Sartre, Simone de Beauvoir, and Albert Camus, ever since. These thinkers had described better than anyone such moments of confrontation with our fragility in the face of an uncaring universe. Yet when “the stage sets collapse”, as Camus put it, I had no great insight to share beyond forced optimism.

In fairness, the existentialists themselves weren’t great at giving advice to young people either. During World War II, Sartre was approached by a young pupil wrestling with whether to stay and look after his mother or join the army to fight for France. Sartre’s reply was “You are free, therefore choose” – classic Sartre, in that it’s both stirringly dramatic and practically useless. But then, that’s all Sartre really could say, given his commitment to the unavoidability of radical choice.

Besides, existentialism itself seems to have fallen out of style. For decades, fiction from The Catcher in the Rye through to Fight Club would valorise a certain kind of existential hero: someone who stood up against mindless conformity, exerting a freedom that others – the unthinking masses that Heidegger derisively called das Man, ‘the They’ – didn’t even realise they had. 

These days, however, that sort of hero seems passé. We still tell stories of people rejecting inauthentic social messages and asserting their freedom, but of an altogether darker sort; think Joaquin Phoenix’s take on the Joker, for example. Instead of existentialist heroes, we’ve got nihilists. 

I can understand why nihilism staged a comeback. In her classic existentialist manifesto, The Ethics of Ambiguity, Simone de Beauvoir tells us that “Nihilism is disappointed seriousness which has turned in upon itself.” The 2020s have started to feel an awful lot like the 1920s: worldwide epidemic disease, rampant inflation and rising fascism. The future we were promised in the 1990s, one of ever-increasing economic prosperity and global peace (what Francis Fukuyama famously called the “end of history”), never arrived. That’s enough to disappoint anyone’s seriousness. Throw in the seemingly intractable threat of climate change, and the future becomes a source of inescapable dread.

But then, that is precisely the sort of context in which existentialism found its moment, in the crucible of occupation and global war. At its worst, existentialism can read like naïve adolescent posturing, the sort of all-or-nothing philosophy you can only believe in until you’ve experienced the true limits of your freedom.

At its best, though, existentialism was a defiant reassertion of human dignity in the face of absurdity and hopelessness. As we hurtle into planetary system-collapse and growing inequality and authoritarianism, maybe a new existentialism is precisely what we need.

Thankfully, then, not all the existential heroes went away.

Seeking redemption

During lockdowns, after the kids had gone to bed, I’d often retreat to the TV to immerse myself in Rockstar Games’ epic open-world Western Red Dead Redemption II. The game is both achingly beautiful and narratively rich, and it’s hard not to become emotionally invested in your character: the morally conflicted, laconic Arthur Morgan, an enforcer for the fugitive Van Der Linde gang in the twilight of the Old West. [Spoiler ahead.]

That’s why it’s such a gut-punch when, about two-thirds of the way through the game, Arthur learns he’s dying of tuberculosis. It feels like the game-makers have cheated you somehow. Game characters aren’t meant to die, at least not like this and not for good. Yet this is also one of those bracing moments of existential confrontation with reality. Kierkegaard spoke of the “certain-uncertainty” of death: we know we will die, but we do not know how or when. Suddenly, this certain-uncertainty suffuses the game-world, as your every task becomes one of your last. The significance of every decision feels amplified.  

Arthur, in the end, grasps his moment. He commits himself to his task and sets out to right wrongs, willingly heading into a final showdown he knows, one way or another, he will not survive. It’s a long way from nihilism, and in ‘unprecedented’ times, it was exactly the existentialist tonic this philosopher needed.

We are, for good or ill, ‘living with Covid’ now. But the other challenges of our historical moment are only becoming more urgent. Eighty years ago, writing in his moment of oppression and despair, Sartre declared that if we don’t run away, then we’ve chosen the war. Outside of the Martian escape fantasies of billionaires, there is nowhere for us, now, to run. So perhaps the existentialists were right: we need to face uncomfortable truths, and stand and fight. 


Putting the ‘identity’ into identity politics

‘Identity politics’, much like ‘political correctness’, is being battered in the public square.

‘Identity politics’ has never settled into a serious category of political critique; its very obscurity of meaning makes it a useful weapon for opposed political ideologies. It is used as a pejorative for the political left and, at the same time, as a euphemism for white supremacy (‘white identity’). Now, even notionally left-leaning figures have started railing against identity politics, calling it divisive.

The hackneyed left/right distinction will not help us to understand the debate over identity politics. Indeed, this conflict may well reflect and reveal deep, structural features of what it is to be human.


What is identity politics?

Identity politics seeks to give political weight to the ways in which particular groups are marginalised by the structures of society. To be a woman, a person of colour, transgender or Indigenous is to be born into a set of social meanings and power relations that constrain what sort of life is possible for you.

Your experience of the world will be conditioned in very different ways from those on the other side of social power. Identity politics, at its simplest, is an attempt to expose and respond politically to this reality.

Importantly, identity politics has always been a reaction to liberalism, although both frameworks have the same end goals of equality and ending oppression. A liberal approach to social justice sees the demand for political equality as emanating from a shared human dignity. As such, it de-emphasises difference. If only we look past the superficial things that divide us, we’re told, we see a common humanity. It washes away any justifications for discrimination on the basis of race, gender or orientation.

But liberalism can tend to mask the circumstances that make us need liberating in the first place. By focusing on universalism – the ways in which you and I are the same – I may well become more aware of our shared human equality and less sensitive to how different our lived experiences may be. So the forms of privilege I enjoy and you lack remain invisible to me.

Liberalism at its most strident treats every person as a sort of abstract locus of radical freedom and rationality. Identity politics resists that abstraction – an abstraction which, as its proponents point out, implicitly serves those on the privileged end of power imbalances. By abstracting away from difference, liberalism makes it harder for us to see and dismantle the structures that perpetuate inequality.

The real gulf here isn’t between left and right or between minority and majority, it’s between two conceptions of how human beings stand in relation to their historical and cultural background. Are we defined by our place within society or do we somehow transcend it?

The philosophy of identity

“What are we?” should be the simplest question to answer, yet it is one of the most annoyingly intractable problems in philosophy. Are we minded animals or embodied minds? Are we souls? Bodies? Brains? Particular bits of brains? All of these answers have been tried, and each captures some of our everyday assumptions about personal identity while running afoul of others.

What they all have in common is they treat selves or persons as a type of object. They all regard the person from a third-person perspective. That’s the perspective we take on other people all the time, both in our everyday interactions and in trying to influence human behaviour via psychology, medicine, marketing, politics and so on. And we can take a third-person perspective on ourselves, too. Whenever you ask, “Why did I act like that?” you’re viewing yourself as an object and wondering what forces made you act one way instead of another.

Yet in your self-reflection you never, to borrow a phrase from Jean-Paul Sartre, ‘coincide’ with yourself. You view yourself as an object. Just by doing that you go beyond yourself, make yourself something different to the self you’re contemplating. Just as the eye can never catch sight of itself (but only, at best, a reflection of itself), you can only see yourself by being something more than yourself, so to speak.

You detach from all that you are – your history, your memory, your character – in order to observe yourself. That ability to detach is precisely what makes reflective endorsement or rejection of your concrete identity possible at all.

So which one are we? The observer or the observed? The object we contemplate in a field of other objects and forces – a mass of psychological drives subject to cause and effect – or the conscious subject that somehow detaches from all that? The answer has to be ‘both’. We are pretty clearly physical objects, prone to various forms of constraint, influence and control. Nor can we simply disavow our past, our language or the identities society imposes on us.

But we are also something more, something that can step back from what we are, something that appears to itself as free. We’re doomed to be both these things. Our destiny is internal division.

Making sense of the identity politics debate

How does any of this relate to the issue of identity politics? Well, consider the two political anthropologies sketched above – one view says we’re constrained by the identities we find ourselves born or built into, the other that we’re all free, rational, autonomous agents.

These are exaggerations of course – most liberals don’t think we’re completely unaffected by our social situation and most proponents of identity politics don’t think we’re completely lacking in autonomy. But they pick out a genuine point of disagreement.

One way to think of that disagreement is as a clash between the two types of self we might take ourselves to be – an object determined by history and society, or a free, detached locus of consciousness. That is not the whole story, but it opens up one useful way to think about it. Both identity politics and universalism answer to different but real dimensions of human existence.

Rather than simply insisting on one approach to the complete exclusion of the other, we should consider how both might be responses to irreducible and contradictory aspects of what we are.

The question is where we go from there.


Online grief and the digital dead

For many of us, social media is not just a way to communicate. It’s part of the fabric of everyday life and the primary way we stay up to date with the lives of some of our friends and family.

When people die, their profiles continue to present their ‘face’ as part of our social world.

Such continued presence makes a difference to the moral world we continue to inhabit as living people. It forces us to make a decision about how to deal with the digital dead. We have to treat them in some way – but how?

What should be done with these ‘digital remains’? Who gets to make that decision? What happens when survivors (those still alive who will be affected by these decisions) disagree?

So far, tech companies have worked out their norms of ‘digital disposal’ on the fly. On Facebook, for instance, some profiles are actively deleted, others are simply abandoned, while others have been put into a memorialised state which lets existing ‘friends’ post on the deceased’s timeline.

Facebook also allows users to report a death and have the deceased’s account memorialised, and you can tell Facebook in advance what you’d like to happen to your own account in the event of your death.

Given the digital dead could soon outnumber the digital living, it might be time to look more closely at the problem. This will require asking some fundamental questions: what is the relationship between an online profile and its creator and how might that matter in ethical terms?

I’ve said social media is one of the ways our friends stay present in our lives and in our ‘moral worlds’. The dead have in fact always been part of our moral world: we keep promises to them, speak well of them and go out of our way to preserve their memory and legacy as best we can.

Jeffrey Blustein gives us one reason why we might be obliged to remember the dead this way: memory is one way to “rescue the dead from insignificance”. Death obliterates a person’s consciousness but we have some power to ensure it doesn’t destroy everything about them.

As Goethe said, we die twice. First, when our hearts stop beating. And second, when the last person who loves us dies and we disappear from memory.

This gives digital artefacts ethical significance. We can’t stop that first death, but we can take steps to delay the second through a kind of ‘memory prosthetic’. A memorialised social media profile seems like exactly this kind of prosthetic. It allows something of the real, tangible presence of the dead to persist in the world of the living and makes the task of preserving them easier.

This gives us at least one reason not to delete dead people’s profiles: their deletion removes something of the dead from the world, thereby making them harder to remember.

The right of a deceased person not to have their profile deleted might still be trumped by the rights of the living. For instance, if a bereaved family find the ongoing existence of a Facebook profile distressing, that might be a good reason to delete it. But even reasons that are easily overridden still need to be taken into account.

There might, however, be cause for concern about other kinds of memory that go beyond preservation and try to recreate the dead in the world of the living. For example, various (often ironically short-lived) startups like Lifenaut, Virtual Eternity and LivesOn aim to create a posthumous, interactive existence for the dead.

They hope to create an algorithm that can ‘learn’, either by analysing your online activity or through a script you fill in while you’re alive, how to post or speak in a way that sounds like you. This may be as simple as tweeting in your name, or as complex as an animated avatar speaking as you would have spoken, chatting, joking and flirting with your survivors.

Nobody has had much success with this to date. But as the technology improves and AI becomes increasingly competent, the likelihood of such a platform becoming viable increases. Should that happen, what might this do to our relationship to the dead?

The Black Mirror episode ‘Be Right Back’ offers a moving and unsettling depiction of just such avatars of the dead walking and talking among us.

Online avatars of this kind might seem like a simple extension of other memorialisation practices – a neat way of keeping the distinctive presence and style of the dead with us. But some, like philosopher Adam Buben, argue these online avatars are less about remembering the dead and more about replacing them. By recreating the dead instead of remembering them as they were, we risk reducing them to what they did for us and replacing them with something that can perform the same role. It makes those we love interchangeable with others.

If I can replace you with an avatar then I don’t love you for you but merely for what you can do for me, which an avatar could do just as well. To use a crass analogy: if memorialising an online profile is like getting your cat taxidermied, posthumous avatars are like buying a new, identical cat and giving it the same name as the old one.

Danger or not, technology has a habit of outrunning our ethical responses to it, so it’s quite possible fully functioning avatars will arrive whether we want them or not. Here’s a modest proposal, then: if this technology becomes a reality, we should at least demand that it come with in-built glitches.

We need glitches because when technology works perfectly, we don’t notice it. We feel like we’re directly connected to someone through a phone line or a Skype connection because, when these technologies work properly, they don’t call attention to themselves. You hear the voice or see the face, not the speaker or the screen. But glitches call our attention back to the underlying reality: our encounter is being mediated by a limited piece of technology.

If we’re going to have interactive avatars of the dead, let’s make them fail every so often, make them sputter or drop out – to remind us of who we’ve lost and the fact they are genuinely gone, no matter how realistic our memory devices are.


The undeserved doubt of the anti-vaxxer

For the last three years or so I’ve been arguing with anti-vaccination activists. In the process I’ve learnt a great deal – about science denial, the motivations of alternative belief systems and the sheer resilience of falsehood.

Since October 2012 I’ve also been actively involved in Stop the AVN (SAVN). SAVN was founded to counter the nonsense spread by the Australian Vaccination-skeptics Network. According to anti-vaxxers, SAVN is a Big Pharma-funded “hate group” populated by professional trolls who stamp on their right to free speech.

I’m afraid the facts are far more prosaic. There’s no Big Pharma involvement – in fact there’s no funding at all. We’re just an informal group of passionate people from all walks of life (including several research scientists and medical professionals) who got fed up with people spreading dangerous untruths and decided to speak out.

When SAVN started in 2009, antivax activists were regularly appearing in the media for the sake of “balance”. This fostered the impression of scientific controversy where none existed. Nowadays, the media understand the harm of false balance and the antivaxxers are usually told to stay home.

There’s a greater understanding that scientists are best placed to say whether or not something is scientifically controversial. (Sadly we can’t yet say the same for the discussion around climate change.) And there’s much greater awareness of how wrong – and how harmful – antivax beliefs really are.


No Jab, No Pay

This shift in attitudes has been followed by significant legislative change. Last year NSW introduced ‘No Jab, No Play’ rules. These gave childcare centres the power to refuse to enrol non-vaccinated children. Queensland and Victoria are planning to follow suit.

In April, the Abbott government introduced ‘No Jab, No Pay’ legislation. Conscientious objectors to vaccination could no longer access the Supplement to the Family Tax Benefit Part A payment.

The payment has been conditional on children being vaccinated since 2012, as was the payment it replaced. But until now vaccination refusers could still access the supplement by having a “conscientious objection” form signed by a GP or claiming a religious belief exemption. The new legislation removes all but medical exemptions.

The change closes loopholes that should never have been there in the first place. Claiming a vaccination supplement without vaccinating is rather like a childless person insisting on being paid the Baby Bonus despite being morally opposed to parenthood.

The new rules also make the Child Care Benefit (CCB) and Child Care Rebate (CCR) conditional on vaccinating children. That’s not a trivial impost – estimates at the time of the announcement suggested some families could lose around $15,000 over four years.

What should we make of this? A necessary response to an entrenched problem or a punitive overreaction?

Much of the academic criticism of the policy has been framed in terms of whether it will in fact improve vaccination rates. Conscientious objector numbers do now seem to be falling, although it remains to be seen whether this is due to the new policies.

Embedded in this line of criticism are three premises:

  • Improvements in the overall vaccination rate will come through targeting the merely “vaccine-hesitant” population.
  • Targeting the smaller group of hard-core vaccine refusers, accounting for around 2% of families, would be counterproductive.
  • The hard core is beyond the reach of rational persuasion even via benefit cuts.

These are of course empirical claims, open to testing. I suspect the third is true. It’s hard to see how someone who believes the entire medical profession and research sector is either corrupt, inept, or both, or that government and media deliberately hide “the Truth”, would ever be persuaded by evidence from just those sources.

A few antivaxxers even believe the germ theory of disease itself is false. In such cases no amount of time spent with a GP explaining the facts is going to help.

In recent years, antivax activists have tended to frame their objections to legislation like No Jab, No Pay in terms of individual rights and freedom of choice.

Yes, they base their “choices” on beliefs ranging from the ridiculous to the repugnant (including the claim that Shaken Baby Syndrome is really the result of vaccination not child abuse), but their fundamental objection is that the new policies are coercive. They make the medical procedure of vaccination compulsory, which they regard as a violation of basic human rights.

Part of this isn’t in dispute – these measures are indeed coercive. Whether they amount to compulsory vaccination is a more complex question. In my view they do not, because they withhold payments rather than issuing fines or other sanctions, although that can still be a serious form of coercive pressure. Such moves also have a disproportionate impact on families who are less well-off, revealing a broader problem with using welfare to influence behaviour.

Nonetheless, it’s not particularly controversial that the state can use some coercive power in pursuit of public health goals. It does so in a range of cases – from taxing cigarettes to fining people for not wearing seatbelts. Of course there is plenty of room for disagreement about how much coercion is acceptable. Recent discussion in Canberra about so-called “nanny state” laws reflects such debate.

But vaccination doesn’t fall into the nanny state category because vaccination decisions aren’t just made by and for individuals. Several different groups – newborns too young to be vaccinated, the immunocompromised, those for whom vaccines fail to take – rely on herd immunity to protect them. Herd immunity can only be maintained if vaccination rates within the community are kept at high levels. By refusing to contribute to a collective good they enjoy, vaccine refusers provide a classic example of the Free Rider Problem.

No Jab, No Pay legislation is not about people making vaccination decisions for themselves, but on behalf of their children. The suggestion that parents have some sort of absolute right to make health decisions for their children just doesn’t hold water. Children aren’t property, nor are our rights to parent our children how we see fit absolute. No-one thinks the choice to abuse or starve one’s child should be protected, for example.

And that gives the lie to the “pro-choice” argument against these laws – not all choices deserve respect.

Thinking in a vacuum

The pro-choice argument depends on the unspoken assumption that there is room for legitimate disagreement about the harms and benefits of vaccination. That gets us to the heart of what motivates a great deal of anti-vaccination activism – the issue of who gets to decide what is empirically true.

Antivax belief may play on the basic human fears of hesitant parents, but the specific contents of those beliefs don’t come out of nowhere. Much of that content emerges from what sociologists have called the “cultic milieu” – a cultural space that trades in “forbidden” or “suppressed” knowledge. This milieu is held together by a common rejection of orthodoxy for the sake of rejecting orthodoxy. Believe whatever you want – so long as it’s not what the “mainstream” believes.

This sort of epistemic contrarianism might make you feel superior to the “sheeple”, the unawake masses too gullible, thick or corrupted to see what’s really going on. It might also introduce you to a network of like-minded people who can act as a buffer from criticism. But it’s also a betrayal of the social basis of knowledge – our radical epistemic interdependency.

The thinkers of the Enlightenment bid us sapere aude, to “dare to know” for ourselves. Knowledge was no longer going to be determined by religious or political authority, but by capital-R Reason. But that liberation kicked off a process of knowledge creation so enormous that specialisation was inevitable. There is simply too much information now for any one of us to know it all.

Talk to antivaxxers and it becomes clear they’re stuck on page one of the Enlightenment project. As Emma Jane and Chris Fleming have recently argued, adherence to an Enlightenment conception of the individual autonomous knower drives much conspiracy theorising. It’s what happens when the Enlightenment conception of the individual as sovereign reasoner and sole source of epistemic authority confronts a world too complex for any individual to understand everything.

As a result of this complexity we are reliant on the knowledge of others to understand the world. Even suspicion of individual claims, persons, or institutions only makes sense against massive background trust in what others tell us.

Accepting the benefits of science requires us to do something difficult – something nothing in our evolutionary heritage prepares us to do. It requires us to accept that the testimony of our direct senses no longer has primary authority. And it requires us to accept the word of people we’ve never met who make claims we can never fully assess.

Anti-vaxxers don’t like that loss of authority. They want to think for themselves, but they don’t accept that we can’t think in a vacuum. We do our thinking against the background of shared standards and processes of reasoning, argument and testimony. Rejecting those standards by making claims that go against the findings of science without using science isn’t “critical thinking”, any more than picking up the ball and throwing it is “better soccer”.

This point about authority tells us something ethically important too. Targeting the vaccine-hesitant rather than the hard-core refusers makes a certain kind of empirical sense.

But it’s important to remember that the hard core are the source of the misinformation that misleads the hesitant. In the end, the harm caused by antivax beliefs is due to people who abuse the responsibility that comes with free speech – namely, the responsibility to only say things you’re entitled to believe are true.

Most antivaxxers are sincere in their beliefs. They honestly do think they’re doing the right thing by their children. That these beliefs are sincere, however, doesn’t entitle them to respect and forbearance. William Kingdon Clifford begins his classic 1877 essay The Ethics of Belief with a particularly striking thought experiment.

A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him at great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections.

He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors.

In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.

Note that the ship owner isn’t lying. He honestly comes to believe his vessel is seaworthy. Yet Clifford argues, “the sincerity of his conviction can in no way help him, because he had no right to believe on such evidence as was before him.”

In the 21st century nobody has the right to believe scientists are wrong about science without having earned that right through actually doing science. Real science, mind you, not untrained armchair speculation and frenetic googling. That applies as much to vaccination as it does to climate change, GMOs and everything else.

We can disagree about the policy responses to the science in these cases. We can also disagree about what financial consequences should flow from removing non-medical exemptions for vaccination refusers. But removing such exemptions sends a powerful signal.

We are not obliged to respect harmful decisions grounded in unearned beliefs, particularly not when this harms children and the wider community.