How to give your new year’s resolutions more meaning

It’s that time of year again. For many of us the New Year is a time to make our annual resolutions.

For others it’s a time to briefly toy with the idea of making a resolution or two but never commit. And for others, depending on how badly the previous evening’s events transpired, the New Year’s Day hangover inspires at least one resolution – to never ever drink again. 

Unfortunately, the majority of people don’t stick to their resolutions. Only 4-10% of people report following through on the resolutions they set at the beginning of the year. (I wonder whether this figure includes those people who sarcastically claim to have only one New Year’s resolution – to not make any New Year’s resolutions.)

The majority of resolutions fail because, well, we’re human beings. There are at least three factors behind this:   

1. We overestimate the power of ‘willpower.’ Often linked to adjacent concepts like resilience and impulse inhibition, the long-standing belief in psychology is that willpower is a finite resource and that people have varying reservoirs of it to draw on. The belief was that by bolstering our willpower we’re better able to attain our goals – and, by implication, that those who don’t achieve their goals lack willpower. Unfortunately, it’s more complicated than that. Our ability to maintain choices and attain goals is as much about situational context, genetics, and socio-economic standing as it is about individual psychological traits.

2. As a result, any significant behaviour change requires sustained practice, environmental changes and thoughtfully designed behavioural cues to create the pathways of reinforcement that form new habits. In short, even the simplest resolutions require habitual practice.

3. Most resolutions are goal focussed – stop smoking, lose weight – and for that reason they take on an instrumental importance, meaning they focus exclusively on achieving an end state or outcome. Given how difficult these outcomes are to achieve, resolutions can become self-defeating – the end state becomes the focus rather than the motivating reason that inspired the goal in the first place.

That’s not to say that New Year’s resolutions are a bad thing. We are hard-wired to demarcate life into phases. Birthdays, seasons, events, and the change of year are all relatively arbitrary events but are full of symbolic significance. They are moments that matter because we invest these occasions with meaning.

New Year’s resolutions at their best can provide a much-needed pause in the busyness of life to reflect and reassess what’s important to us.

So, as you pause and reflect on the year ahead, consider taking a different path when thinking about setting resolutions. Rather than listing goals, think instead about the qualities you’d like to cultivate in the year ahead. What qualities for you express the best aspects of a life well lived? Which of those would you choose to have more of in your life?

These qualities are known as virtues – behaviours, skills or mindsets worthy of being regarded as features of living a good life. Wisdom, justice, and courage were on Aristotle’s shortlist.

Rather than setting goals like ‘read more books’ or ‘spend less time on my phone’, think instead of the quality you want more of in your world – like being curious.

There are no limits or targets to being curious, there’s simply a conscious reflection which may lead to some different choices or new experiences.

Curiosity might lead you to pick up a book rather than slump on the couch in front of Netflix, or it may prompt you to choose an interesting documentary about a topic you’ve always wanted to know more about.

Being curious this year might also benefit your relationships. Being curious involves behaviours like asking more questions, seeking out different points of view and perspectives, and being more open to opinions and beliefs that are different to the ones you hold. Practising this virtue may lead to new and unexpected connections, new friendships or even stronger existing relationships. It may even reduce conflict by helping you to be less triggered when confronted with views that are contrary to your own.

Being curious may also help with your lifestyle and eating habits. Rather than resolving to lose weight, be curious about trying different foods which may inspire some different meal choices from those that you might already know. By being curious you may be intrigued to occasionally switch out some of your habitual choices by actively seeking out different or healthier alternatives.

However, there’s no guarantee that being curious won’t also lead you to consume a wider variety of chocolate, or to spend even more time on your phone seeking out even more information and distractions. That’s the challenge with cultivating qualities or virtues: you’ll need practical wisdom, as Aristotle called it. It’s about finding the best balance between the virtue and its corresponding vices.

Rather than focus on resolutions as goals and outcomes – often with arbitrary measures of success like read more, rest more, learn another language – focus instead on the ‘why’ behind the quality you are wanting to cultivate. Research shows that being purposive and intentional helps people maintain their motivation and achieve the goals they set.  

As you reflect on the year ahead take some time to think about those qualities you could do with a little more of. In fact, why wait for the New Year? I’m going to start today by practicing more gratitude. There’s no time like the present.


“I’m sorry *if* I offended you”: How to apologise better in an emotionally avoidant world

I’ve been wondering if we need to have a better look at the way we say sorry.

We live in a highly binary and individualistic world that struggles to repent well. Yet we are increasingly aware of – and flummoxed by – bad faith efforts at the gesture.  

Witness the fallout from former Prime Minister Scott Morrison’s baffling response to being censured last week, in which he refused to apologise to the nation. I reckon we ache to do better; we want true healing.

We could start with looking at the way we so often insist on whacking the Almighty Absolving Qualifier “if” when we issue an “I’m sorry”. I’m sorry if you’re offended/upset/angry. We go and plug one in where a perfectly good “that” would do a far better job.  

But an “if” negates any repentant intent. Actually, worse. It gaslights. It puts up for dispute whether the hurt or offence is actually being felt and whether it is legitimate. Attention switches to the victim’s authenticity and their right to feel injured. Did you actually get hurt? Hmmm…. 

Things get even more disconcerting when the quasi-apologiser thinks they have done something gallant with their qualified “I’m sorry”. And will gaslight you again if you pull them up on the flimsiness of it. What, so you can’t even accept an apology!  

I had a rich, senior businessman do the if-sorry job on me recently. “I’m sorry if you’re angry,” he said in a really rather small human way. Rather than standing there miffed, I replied, “Great! Yep, you definitely fucked up. And so I’m definitely and rightly angry. Now that’s established, sure, I’ll take on that you’d like to repent.”

I heard a well-known doctor on the radio the other morning very consciously (it seemed) drop the if from the equation when he had to apologise for making remarks about a minority group (in error) in a previous broadcast. “I’m sorry I said those things. I was wrong. I’m not going to justify myself. There are no excuses. I was in the wrong,” he said. It was a good, textbook apology and he probably wouldn’t land in trouble for it. 

But it immediately begs the question: is that the point of an apology?

For the wrongdoer to stay out of trouble? For them to neatly right a wrong by going through a small moment of awkward, vulnerable exposure? 

What about the victim? Where do they sit in apologies?

I recall listening to a radio discussion where all this was dissected. The point that grabbed me at the time was this: In our culture, the responsibility of ensuring that an apology is effective in bringing closure to a conflict mostly rests on the victim, the person being apologised to. No matter the calibre of the apology, it’s up to the person who has been wronged to be all “that’s ok, we’re sweet” about things. They are effectively responsible for making the perpetrator feel OK in their awkward vulnerable moment. (And to keep the pain short-lived.)

And so a successful apology rests in the victim’s readiness to forgive. 

Which is all the wrong way around. At an individual-to-individual level it’s cheap grace. The wrongdoer gets absolved with so little accountability involved. 

At a macro level, say with injuries like racism or sexism, we can see the setup is about a minority class forgiving, or bowing once again, to the powerful. 

I managed to find the expert who’d led the discussion – Danya Ruttenberg, a New York-based rabbi and scholar who’s written a book on the matter, On Repentance and Repair: Making Amends in an Unapologetic World. A title that says it all, right?

Ruttenberg argues we are doing apologies inadequately and in a way that fails to repair the damage done precisely because we privilege forgiveness over repentance. 

So how to apologise like you mean it

Drawing on the 12th-century philosopher Maimonides, Ruttenberg sets out five steps to a proper apology.

1. Confession

The wrongdoer fully owns that they did something wrong. There’s to be no blabbing of great intentions, or how “circumstances” conspired; no “if” qualifiers. You did harm, own it! Ideally, she says, the confession is done publicly.

2. Start to Change

Next, you do the work to educate yourself, get therapy, etc. Like, demonstrate you’re in the process of shifting your ways. You’re talk and trousers!

3. Make amends

But do so with the victim’s needs in mind. What would make them feel like some kind of repair was happening? Cash? Donate to a charity they care about? 

4. OK, now we get to the apology!

The point of having the apology sitting right down at Step 4 is so that by the time the words “I’m sorry” are uttered, we, as the perpetrator, are engaged and own things. The responsibility is firmly with us, not the victim. By this late stage in the repenting process we are alive to how the victim felt and genuinely want them to feel seen. It’s not a ticking of a box kinda thing. Plus, we’ve taken the responsibility for bringing about closure, or healing, out of the victim’s hands. 

5. Don’t do it again

OK, so this is a critical final step. But there’s a much better chance the injury won’t be repeated if the person who did the harm has completed the preceding four steps, according to Ruttenberg and Maimonides.

Does forgiveness have to happen?

I went and read some related essays by Rabbi Ruttenberg just now. The other point that she makes is that whether or not the victim forgives the perpetrator is moot. When you apologise like you mean it (as per the five steps), I guesstimate that 90 per cent of the healing required for closure has been done by the perpetrator. And it happens regardless of whether the harmed party forgives, because the harm-doer sat in the issue and committed to change. The spiritual or emotional or psychological shift has already occurred.

I should think that, looking at it from a victim-centric perspective, this opened space allows the harmed party to feel more comfortable to forgive, should they choose to.

It’s a win-win, regardless of whether the aggrieved waves the forgiveness stick. 

(The Rabbi notes that in Judaism, as opposed to Christianity, there is no compulsion for the harmed party to forgive.) 

What *if* we offend or harm unintentionally?

I was presented with the above ethical quandary while writing this. Someone on social media commented that she’d wanted to approach me recently but felt she couldn’t because she had two kids in tow at the time. She figured I’d judge her for being a “procreator” given my climate activism work and anti-consumption stance. It was an unfortunate assumption. I had only last week written about how bringing population growth into the climate crisis blame-fest is wrong, ethically and factually (it’s not how many we are, it’s how we live). 

Of course, her self-conscious pain was real. But did I need to repent if I’d done nothing wrong, and certainly nothing intentional (indeed, I’d not acted at all, in bad faith or otherwise)?

I decided there was still a very good opportunity to switch out an “if” for a “that”. I replied: “I’m sorry that you felt…”. And I was. I didn’t want her to have that impression of judgement from another, nor to feel so self-conscious. I was sorry in the broad sense of feeling bad for her. Feeling sorry can be a sense of tapping into a collective regret for the way things are, even if you are not directly responsible.

The real opportunity here was to take on responsibility for healing any hurt, and to speed it up. If I’d listed out and justified why this person was mistaken (wrong) in feeling as she did, I’d have also missed an opportunity to be raw and open to the broader pain of the human condition. 

Doing good apology is essentially an act of correctly apportioning the tasks required to get the outcome that we are all after, which, for most adults, is growth, intimacy and expansion. Ruttenberg makes the point that some indigenous cultures work to this (more mature) style of repentance (as opposed to cheap grace), as do various radical restorative justice movements. I note that the authors and elders who contributed to the Uluru Statement from the Heart often remind us that the document is an invitation to all Australians to grow into our next era.


Should you celebrate Christmas if you’re not religious?

Religious holidays like Christmas are not just for believers. They involve rituals and customs that can help reinforce social bonds and bring people together, no matter what their beliefs.

What is Christmas all about? On the one hand, it’s a traditional Christian religious holiday that celebrates the birth of Jesus Christ, the son of God. On the other hand, it’s a commercial frenzy, where people battle crowds in vast shopping centres blanketed in themed decorations to buy disposable trinkets that are destined to be handed over to disappointed relatives straining, but failing, to smile with their eyes. 

Either way, if one is not overly devout or materialistic, Christmas might seem like something worth giving a miss. And if you’ve lost your religion entirely, it might seem appropriate to dispense with Christmas traditions.  

The same applies to major holidays celebrated by other religions. If you do abandon Christmas – or Hanukkah, Diwali, Ramadan, etc. – you’ll be missing out on an opportunity to participate in a ritualistic practice that is about more than the scriptures suggest, and certainly more than the themed commercial advertising represents.

This is because religious holidays like Christmas are not what they seem. They are not fundamentally about spirituality, as they purport to be. They’re certainly not about boosting economic activity, even if that’s what they’ve become. Rather, they’re about bringing people together to create shared meaning.  

Anyone can benefit from this effect if they participate in the rituals and traditions of a religious holiday, including people of that religion who are no longer believers, as well as those outside of the culture who are welcomed in. 

Do you believe in Santa Claus?

Religiosity is in decline in most countries around the globe, Australia included. In fact, Australia is in the top five nations in the world in terms of the proportion of self-declared atheists. And “no religion” is the fastest growing group, jumping from less than 1% of the population in 1966 to 38% in the latest Census figures from 2021. These numbers are even higher for young people, which suggests the move away from religion will continue into the future. 

For the ‘new atheists,’ this looks like a good thing. Thinkers such as Richard Dawkins, Christopher Hitchens and Sam Harris famously argued that religion is a dangerous force that spreads unscientific beliefs and perpetuates social divisions. They have urged people to drop their spiritual beliefs and embrace a secular, rational lifestyle.

There are merits to their arguments, but their bundling of religions’ problematic metaphysics with the traditions and rituals they promote overlooks that the two are not necessarily connected: one need not believe that three wise men tracked a mystical light in the sky to the birthplace of Jesus so they could give him gold, frankincense and myrrh in order to hand over a present to a loved one. It’s like how we already agree that one need not believe in Santa Claus in order to stuff the odd stocking. 

The other thing the scorched-earth version of atheism overlooks is that the decline of religiosity has corresponded with an increase in social fragmentation in the modern world. Many people feel a sense of social isolation and disconnection from those around them, even if they live in the midst of a metropolis, which is contributing to the growing crisis in mental health.

This is not a new phenomenon. A similar pattern was observed by sociologists such as Émile Durkheim and Ferdinand Tönnies in the late 19th century. They attributed it to the rise of individualism over community as modern societies expanded following the industrial revolution. The erosion of community meant that individuals were left to seek their own meaning and purpose in life, as well as forge their own social connections.  

This lean towards individualism affords each person some freedom in choosing what is meaningful for them, but it also involves a lot of work; it’s no mean feat to single-handedly create a grand structure to make sense of the world, to inform what we should value and what morals we should abide by. And there are many forces that are only too happy to sell their own vision to the individual, including all manner of pseudo-spiritual causes, conspiracy theories and self-help gurus. But the strongest force today is capitalism, which sells a vision of work and consumption that is superficially appealing but which many find to be ultimately hollow and unfulfilling. 

One of the mechanisms sociologists identified that helped to ameliorate the individualistic tendency towards social disconnection and isolation was religion. But it wasn’t the spiritual dimensions of religion that did the most work, it was the rituals and traditions that brought people together to reinforce social connections and create shared meaning.  

Rituals like gathering for an annual feast with family, which signifies a break from everyday consumption and gives us an opportunity to show care and respect for others as we feed them. Or gift-giving, which motivates us to think carefully about the most important people in our lives – who they are, what they care about, what they lack – and encourages us to find or make something, or use our wealth, to bring them joy. Even singing carols has an effect. The act of singing in unison is a potent way of bonding with those around us, and even the habit of complaining about our most hated Christmas tunes can bring people together. 

Come together

There are many more rituals involved with Christmas, and many analogous rituals associated with the festivals of other religions.  

While their surface details and spiritual justifications may differ, at their heart they’re all about one thing: bringing us closer together. They’re a form of social glue that is arguably much needed in today’s world.

While the differences between traditions can be a source of division, and the rise of multiculturalism and sensitivity towards other cultures can be a source of reservation when it comes to participating in rituals that are not ‘our own,’ we still have a lot to gain by continuing to practise festive rituals, even if we no longer subscribe to their supernatural pretexts. We also have a lot to offer if we welcome those of other cultural and religious traditions to share our rituals, and if we remain open to learn from and share in theirs. 

On the surface, Christmas looks like it’s about spirituality, or like it’s been co-opted by the market. But deep down, it’s one of many religious festivals we can draw on to enrich our lives, ground ourselves in our family and community, and create meaningful experiences with others that help us all live a good life.

Join Dr Tim Dean for The Ethics of Holidays on the 8th of December at 6:30pm. Tickets on sale now.


5 stars: The age of surveillance and scrutiny

While ratings systems may encourage good behaviour on the part of the provider and recipient, it’s a hungry business model that is both anxiety-inducing and untrustworthy. 

In Seth MacFarlane’s off-beat homage to Star Trek, The Orville, the increasingly earnest show becomes a series of cautionary tales. In one episode, Majority Rule, MacFarlane signals the dangers of the real world mirroring the online one. The crew of the Orville attempt to rescue a pair of imprisoned anthropologists from an Earth-like planet where justice is meted out by public vote. With the pair in deep trouble, the public will determine their innocence with a ‘thumbs-up’ or ‘thumbs-down’.

I didn’t love the show, but Majority Rule lingers in my mind because, even though determining a person’s freedom by public vote seems ludicrous, isn’t this already happening daily online, to varying degrees of severity?

There is, of course, the modern-day equivalent of the stocks. But instead of passers-by throwing fruit and jeering, people find ways to do it in 140 characters or less, hashtags optional.  

But seemingly more innocuous judgments are made elsewhere, and they affect how we live, work and engage with others. When you consider the services and experiences that make up your daily life, how many of them involve ratings? Businesses rely on reviews from Google, Yelp, TripAdvisor and so on, as do we as consumers.

And of course, your transport and food delivery apps depend on them. The meal you ordered through a delivery app arrived soggy and not at all like it looked in the photo? Blame the rider who didn’t pedal fast enough; bypass the restaurant. Your food was terrible and the low-paid delivery rider is the low-hanging fruit. They get one star. Hated the music your Uber driver was playing? Give them a poor rating – though you should know, with Uber they can give you one back.

In November this year, it was reported that a study from the University of Bristol and University of Oxford found seven out of 10 gig economy workers were in a constant state of worry about negative reviews and the impact they would have on their livelihood. The lead author and sociologist, Dr Alex Wood, Lecturer in Human Resource Management and Future of Work at Bristol, noted: “It was shocking how workers expressed continuous worry about the potential consequences of receiving a single bad rating from an unfair or malevolent client, and how this could leave them unable to continue making a living.” 

This anxiety over ratings is understandable. It’s not that criticism is a fresh concept (in the arts world, we are constantly subjected to review). But in the gig economy, not only can customer-generated scores sink or boost workers’ reputations, they culminate in an advertising and rewards system. The better you’re rated, the more accolades you receive, often in the form of badges that signal that you are a superstar. It’s a clever marketing tactic, because as consumers we follow the high ratings, but it’s also a way to encourage good behaviour all round because often apps rate both provider and consumer.

Ratings systems are surveillance and compliance systems – a very public message board – that mostly empower the consumer. But these ratings can also be used to falsely entice us.

Years ago, I employed the services of transcribers on a gig website. The service had a catchy price point, though the cost, quite rightly, rose according to the needs of the job. And yet, the work was not done to a reasonable standard. I chose not to leave negative reviews, but the service providers pushed me to, so I relented and gave them four out of five stars. I understood: they were trying to make a living. Then they sent me messages asking (borderline haranguing) me to change my review to a perfect score.

They were jockeying for work but cutting corners then demanding positive reviews. I gave in, feeling guilty, knowing that they were boosting their reliability to secure more work they didn’t seem capable of actually completing.  

Meanwhile, try leaving an honest review on Airbnb and you will understand why so many places are given rave reviews but fall short. No one wants to tell the truth when they are being judged back.

What a circular mess. 

How can we trust ratings systems like this? Can scores be trusted, given how freely, and sometimes anonymously, they can be applied? Even though ratings systems can be a useful barometer of a service provider’s reliability or competence, they may also be completely false endorsements. Increasingly, we are being warned about fake reviews, which set out to uplift or destroy a business. It pays to read comments carefully rather than rely on the rating itself.

We are increasingly being taught to assess every service or experience, and it is not a thorough, or necessarily fair, feedback system for either party to a transaction.

No longer are workers simply ‘freelancers’ if they are self-employed; in a world of food delivery and transport services, of competitive freelance websites like Fiverr and Upwork, everyday commerce has been twisted and turned into a thriving, cut-throat marketplace. One where workers’ rights are blurred, where bad reviews are doled out hastily, spitefully or truthfully, but without any oversight to ascertain their veracity.  

The gig economy has long been examined for its flaws. Despite the opportunities and ease-of-access a casualised, sharing economy creates, with it comes crushing downsides: the dilution of employee rights, the lowering of fees just to secure the gig – and with that, quite likely, standards. Skilled workers get edged out of industries when they are undercut by less experienced people willing to do the job at a fraction of the actual cost.  

Ratings systems encourage good behaviour, but we are becoming hypervigilant and more critical in the process. While business is booming, this explosion in feedback is not making us better workers or customers. Time will tell if this is, ultimately, bad for business. We already know that it is taking its toll on providers.


Ethics Explainer: Truth & Honesty

How do we know we’re telling the truth? If someone asks you for the time, do you ever consider the accuracy of your response? 

In everyday life, truth is often thought of as a simple concept. Something is factual, false, or unknown. Similarly, honesty is usually seen as the difference between ‘telling the truth’ and lying (with some grey areas like white lies or equivocations in between). ‘Telling the truth’ is somewhat of a misnomer, though. Since honesty is mostly about sincerity, people can be honest without being accurate about the truth. 

In philosophy, truth is anything but simple and weaves itself into a host of other areas. In epistemology, for example, philosophers interrogate the nature of truth by looking at it through the lens of knowledge.  

After all, if we want to be truthful, we need to know what is true. 

Figuring that out can be hard, not just practically, but metaphysically.  

Theories of Truth

There are several competing theories that attempt to explain what truth is, the most popular of which is the correspondence theory. Correspondence refers to the way our minds relate to reality. In it, truth is a belief or statement that corresponds to how the world ‘really’ is independent of our minds or perceptions of it. As popular as this theory is, it does prompt the question: how do we know what the world is like outside of our experience of it? 

Many people, especially scientists and philosophers, have to grapple with the idea that we are limited in our ability to understand reality. For every new discovery, there seems to be another question left unanswered. This being the case, the correspondence theory leads us to a problem of not being able to speak about things being true because we don’t have an accurate understanding of reality. 

Another theory of truth is the coherence theory. This states that truth is a matter of coherence within and between systems of beliefs. Rather than the truth of our beliefs relying on a relation to the external world, it relies on their consistency with other beliefs within a system.  

The strength of this theory is that it doesn’t depend on us having an accurate understanding of reality in order for us to speak about something being true. The weakness is that we can imagine several different comprehensive and cohesive systems of beliefs, and thus different people holding different ‘true’ beliefs that are impossible to adjudicate between.

Yet another theory of truth is pragmatist, although there are a couple of varieties, as with pragmatism in general. Broadly, we can think of pragmatist truth as a more lenient and practical correspondence theory.  

For pragmatists, what the world is ‘really’ like only matters as far as it impacts the usefulness of our beliefs in practice.  

So, pragmatist truth is in a sense malleable; like the scientific method it’s closely linked with, it sees truth as a useful tool for understanding the world, but recognises that with new information and experimentation the ‘truth’ will change.

Ethical aspects of truth and honesty 

Regardless of the theory of truth that you subscribe to, there are practical applications of truth that have a significant impact on how to behave ethically. One of these applications is honesty.  

Honesty, in a simple sense, is speaking what we wholeheartedly believe to be true.  

Honesty comes up a lot in classical ethical frameworks and, as with lots of ethical concepts, isn’t as straightforward as it seems. 

In Aristotelian virtue ethics, honesty permeates many other virtues, like friendship, but is also a virtue in itself that lies between habitual lying and boastfulness or callousness. So, a virtue ethicist might say a severe lack of honesty would result in someone who is untrustworthy or misleading, while too much honesty might result in someone who says unnecessary truthful things at the expense of people’s feelings. 

A classic example is a friend who asks you for your opinion on what they’re wearing. Let’s say you don’t think what they’re wearing is nice or flattering. You could be overly honest and hurt their feelings, you could lie and potentially embarrass them, or you could frame your honesty in a way that is moderate and constructive, like “I think this other colour/fit suits you better”.  

This middle ground is also often where consequentialism lands on these kinds of interpersonal truth dynamics because of its focus on consequences. Broadly, the truth is important for social cohesion, but consequentialism might tell us to act with a bit more or a bit less honesty depending on the individual situations and outcomes, like if the truth would cause significant harm. 

Deontology, on the other hand, following in the footsteps of Immanuel Kant, holds honesty as an absolute moral obligation. Kant was known to say that honesty was imperative even if a murderer was at your door asking where your friend was! 

Outside of the general moral frameworks, there are some interesting ethical questions we can ask about the nature of our obligations to truth. Do certain people or relations have a stronger right to the truth? For example, many people find it acceptable and even useful to lie to children, especially when they’re young. Does this imply age or maturity has an impact on our right to the truth? If the answer to this is that it’s okay in a paternalistic capacity, then why doesn’t that usually fly with adults?  

What about if we compare strangers to friends and family? Why do we intuitively feel that our close friends or family ‘deserve’ the truth from us, while someone off the street doesn’t?  

If we do have a moral obligation towards the truth, does this also imply an obligation to keep ourselves well-informed so that we can be truthful in a meaningful way? 

The nature of truth remains elusive, yet the way we treat it in our interpersonal lives is still as relevant as ever. Honesty is a useful and easier way of framing lots of conversations about truth, although it has its own complexities to be aware of, like the limits of its virtue. 


When are secrets best kept?

Throughout the ages, people subject to the torments of even the most oppressive regimes have found solace in the fact that even when their bodies are controlled, their minds can remain free.  

People have the capacity to hold information and beliefs that cannot be discerned by any mind other than their own. Of course, in many cases (but not all) the mental reserves needed to preserve a secret can be destroyed by those who employ torture. However, only the most vicious and desperate resort to such despicable acts – and even then, they can never be sure that what they are told is actually true. But that is another topic for another time. 

For now, I want to highlight the remarkable strength of secrets – a strength conferred by their retention in regions of the human mind that are inaccessible to others. 

The fact that we cannot ‘read minds’ allows each of us a particular kind of freedom.

However, it would be a lonely existence if we were not also endowed with the capacity to share our thinking with others through all of the forms of communication available to us – physical, verbal, literal, and symbolic. So, for the most part, we liberally share our thoughts, feelings and beliefs in word and deed – while retaining some things entirely to ourselves. 

While this is the context in which secrets exist, it’s important to note the distinction between ‘having’ and ‘holding’ secrets. In the first case, secrets can be our own – something that we know we choose not to disclose to others. In the second case, secrets can ‘belong’ to someone else who has shared them with us – on the condition we preserve the secrecy of what has been disclosed. 

There are many examples of both kinds of secret. For example, a person may have suffered some kind of sexual assault in their youth but, for a range of reasons, may never disclose this to another soul. It will be their secret – and they will take it to the grave. Alternatively, if they share this secret with another person – on the condition that no other person ever know this truth – then the latter person will have agreed to hold the secret for as long as required to do so by the person whose secret has been shared with them. 

It’s easy to see in this example just some of the problems with secrets. Let’s suppose that the person who abused the youth is still at large – possibly still offending. Does the person who ‘holds’ the secret have an obligation to prevent harm that is greater than the obligation to protect their friend’s secret? One might hope that the friend would agree to reveal the identity of the malefactor. However, what if they refuse? What if a person at risk of abuse asks a direct question about the person whom you know to be a threat to them? Are you required to lie or to dissemble in order to keep the secret? 

Of course, the ability to have and to hold secrets can also enable great evil. For example, some secrets can obscure damaging, false beliefs that – even if sincerely held – present grave risks to individuals or whole communities. We can see such ‘secret knowledge’ at work in certain cults and conspiracy theories. Because they are secret, these sometimes deadly false beliefs cannot be challenged or amended by exposure to the ‘sunlight’ of open enquiry and debate. Deadly secrets can fester and grow in the dark to the point where they can poison whole sections of the community.

What’s more, perverse forms of secrecy can be employed by powerful interests as a tool to control others. Whole regimes have been propped up by ‘secret police’, the cloaking of wrongdoing behind the veil of ‘official secrets’, and so on. 

The ethics of secrets have a practical bearing on matters affecting individuals, groups and whole societies. Core questions include: Is there a distinction between a ‘confidence’ and a ‘secret’? Do certain people have a right to know information that others wish to keep secret? Are we ever obliged to disclose another person’s secret? What, if anything, is a ‘legitimate secret’? Who decides questions of legitimacy? How does one balance the interests of individuals and society? 


Join Dr Simon Longstaff on Thur 23 Nov as he lifts the lid on secrets and their role in living an ethical life. The Ethics of Secrets tickets on sale now.  


Those regular folk are the real sickos: The Bachelor, sex and love

In 2021, the star of the US iteration of The Bachelorette, Katie Thurston, made international news off the back of one thirty-second clip. In it, Thurston, all smiles and fey giggles, announced that she was forbidding the male contestants searching for her endless love from masturbating.

“I kind of had this idea I thought would be fun, where the guys in the house all have to agree to withhold their self-care as long as possible, if you know what I mean,” Thurston told the show’s two hosts, to a great deal of laughter and blushing. What she was doing was what Bachelorette stars – and indeed many of those who feature in that brand of modern reality television focused on love and sex – have done for years.

Namely, she was upholding the show’s characteristic, and very strange, mix of euphemism and the explicit stating of norms that are so well-trodden in the culture that they’re not even acknowledged as norms at all.

Indeed, the most surprising thing about the clip was that it generated chatter, from both mainstream outlets and social media, in the first place. The Bachelorette’s habit of not so much ignoring the elephant in the corner, but ignoring the corner, and the walls connected to the corner, and perhaps even the entire room, has been part of its fabric from its very conception.

This is a show ostensibly about desire and love – which is a way of saying that it is about different states that circle around, and often lead to or follow from, sex – that shirks desperately away from most of the ways that we understand these things.

All we get on the desire front is a lot of people who pay a certain kind of attention to their bodies, occasionally – extremely occasionally – kissing one another. And all we get on the love front is a lot of talk about forever and eternity, along with roses, champagne flutes, and tears. Sex, meanwhile, lies far beyond the show’s window of acceptable or even conceivable behaviours. It’s there but it’s not there, a part of the very foundation of the show that’s still so taboo that if someone dares speak it aloud, as Thurston did, they’ll be the odd one. 

This backlash to a bizarre norm constructed and maintained by the cameras was taken to an extreme in the case of Abbie Chatfield, a contestant on the Australian version of the show. For daring to tell Bachelor Matt Agnew that she “really wanted” to have sex with him, and admitting that she was “really horny”, Chatfield drew ire from not only the usual anti-sex bores, but from the so-called “sensible mainstream centre.” She was called a slut; her behaviour designated outrageous.  

Such a backlash wasn’t just a policing of women’s bodies, though it was that. It was also a policing of the very standards of desire, part of a long attempt to prettify and clean up matters of sex and love, into “good” (read: socially acceptable) talk about these matters, and “bad” (read: unhinged, dangerous, impolite) talk about them.  

In a society with a healthier understanding of sexuality, Chatfield wouldn’t be the deviation. The whole strange apparatus around her would be. 

Whose Normal?

What makes The Bachelor and The Bachelorette such fascinating, internally frustrated objects is that their restating of the normal reaches such a volume, and resists so many specifics, that it reveals how utterly not-normal, arbitrary, and ill-defined most normal stuff is.

For instance, there is much talk in The Bachelor and The Bachelorette about romantic “compatibility”, a bizarre standard frequently talked about in the culture without ever being actually, you know, talked about. On this compatibility view of love, the pursuit of a significant other is a process of finding someone to fit into your life, as though you have one goal for how you want to be, and only one person who can help you achieve that. It’s that popular meme of the human being as an incomplete jigsaw puzzle, picking up pieces, one by one, and trying to slot them in. 

What The Bachelor and The Bachelorette usually reveal, however, is that actually working out who is “the one” for you is much more difficult than the show’s own repeated emphasis on compatibility implies.

The stars of these shows frequently love and desire multiple people at the same time – the entire dramatic tension of the show comes from their final selection of a partner being surprising and tense.

If this compatibility stuff were as simple as it’s often described – or even clearly explicated – then we’d know after thirty seconds between two potential life partners that they’d end up together. There’d be no hook; no narrative arc. Eyes would lock, hearts would flutter, and the puzzle piece would just slot in.

In actuality, on both of these shows, the decision to pick one person over another frequently feels deeply random, and the always vague star usually has to blur their explanations even further into the abstract to justify why they want to be with him, and not with him, or with him. 

The Bachelor and The Bachelorette are supposedly triumphant testaments to monogamy – almost all seasons of the show, except the one starring Nick Cummins, the Honey Badger, end with two and only two people walking off together.  

But actually, in their typically confused way, they also end up explicating the benefits of polyamory. Often, the stars of these shows have a lot of fun, and derive a lot of pleasure and purpose from being intimate and romantic with a number of people at the same time. When it comes time to choose their “one”, it is frequently with tears – on a number of occasions, the stars have said, in so many words, “why not both?” 

Get Those Freaks Away From Me

And why not both? Or more than both? The season of The Bachelor where no contestant is eliminated, everyone goes on dates together, and they all end up having sex and falling in love with one another, is no stranger than the season where only two walk into the sunset. 

Monogamy is a norm, which is to say that it is an utterly arbitrary thing spoken loudly enough to seem iron-wrought. Norms are forceful; they tell us that things are the way they are, and could be no other way. In fact, they are so forceful that they have to state not only their own definitional boundaries, but also the boundaries of the thing that they are not – not just pushing the alien away, but the very act of designating things alien in the first place.   

It was the philosopher Michel Foucault who noted this habit of branding certain objects, habits, or people as “other” in order to better understand and designate the normal. The Bachelor and The Bachelorette do this both frequently and implicitly, never drawing attention to the hand that is forever sketching abrupt and hurried lines in the sand.  

Just consider the things that would be astonishing in the shows’ worlds, without even having to be taboo. For instance, imagine a star being perfectly happy committing to none of the contestants, and merely having sex with a few of them, one after the other. Or a star choosing a contestant but, rather than speaking of their flawless connection together, emphasising “mere” fun, or “mere” pleasure. 

None of the preceding critique of these shows is a call to eradicate romantic and sexual norms altogether, if such a definitional cleansing were even possible. We have to make decisions about how we navigate the world together, and norms become a shorthand way of describing these decisions. What we should remember throughout, however, is that we are free to change this shorthand up whenever we like. And more than that, we should resist, wherever possible, the urge to create the other.

After all, if The Bachelor and The Bachelorette tell us anything, it’s that those regular folks are the real sickos. 


Appreciation or appropriation? The impacts of stealing culture

It’s Halloween season. Perhaps your child has just watched Encanto and they’ve asked to wear Bruno’s ruana as a costume for trick-or-treating. Deciding how to answer requires traversing murky moral territory and unpacking the term ‘cultural appropriation.’

Recently, there has been a serious shift in thinking about what makes for an ethically appropriate costume, attracting considerable media attention from the likes of The New York Times, The Atlantic, and The Conversation. The primary concern is that when white people dress up in outfits removed from their original cultural context, this constitutes cultural appropriation. But what exactly makes cultural appropriation ethically problematic? And does this mean that certain Halloween costumes, such as Bruno’s ruana, are off the table for white people?

At the Festival of Dangerous Ideas in September 2022, the session ‘Stealing Culture’ questioned whether cultural appropriation is an important ethical concept at all. This sounds like a strange enquiry since the answer seems like a clear ‘yes’. However, Luara Ferracioli, a philosopher at the University of Sydney, gave a surprising response: while charges of appropriation target serious moral wrongs, “we don’t need an umbrella term like ‘cultural appropriation’”. It is merely a catch-all phrase for a range of problematic behaviour that doesn’t capture anything morally distinctive.

When there is something genuinely wrong that a charge of cultural appropriation aims to pick out, Luara argues that we would do better simply to help ourselves to the variety of familiar and more precise ethical concepts already at our disposal, such as exploitation, misrepresentation, and causing offense. This results in a striking conclusion: the term ‘cultural appropriation’ is redundant, so we should eliminate it from our moral vocabulary.  

I disagree with Luara. The concept of cultural appropriation is an important resource for moral thinking because it allows us to identify a very specific way that marginalised cultures are subjected to erosion by outsiders and subsumed within dominant ways of life. And with Halloween just around the corner, we should be especially worried about the appropriation of culturally significant outfits.   

So what is cultural appropriation?

Starting with the second part of the term, appropriation involves taking something for one’s own use, often without permission. Cultural appropriation occurs when what is taken belongs to a culture that is not one’s own. A distinctive hallmark of our time is the salience of this phenomenon in the public’s moral imagination, which tends to focus on situations where cultural material is taken out of context and worn solely for looks.

Overwhelmingly, the moral concern of appropriation has been directed at the practices of white people, such as Justin Bieber’s wearing of dreadlocks, Timna Woollard’s mimicking of Indigenous art, and the use of tribal symbolism by the Washington ‘Redskins’. As we approach Halloween, costumes are becoming a primary source of worry. Animated films such as Moana, Coco, Aladdin, and Mulan are extremely popular with children, and a wellspring of inspiration for potential ‘costumes’ – such as the ruana worn by Bruno in Encanto. The main characters of these films are not white. And the films track the stories of protagonists engaging with distinctive forms of life – and the particular problems that emerge within them – that may be quite unfamiliar to the typical white person.

When a white child wears, say, a ruana to resemble Bruno, what causes uneasiness is not just that it might be offensive. Rather, what makes it troubling is its connection to the historical oppression responsible for existing systems of unjust hierarchy, such as egregious histories of settler colonialism, on-going practices of ethnic discrimination, and growing material inequalities that track skin-colour.  

What makes cultural appropriation ethically problematic?

Cultural appropriation is ethically problematic because of its unique way of exacerbating conditions of unjust inequality. Because of this, we must be extremely judicious about our choice of Halloween costume.

When a white person wears a ruana as a costume, each instance might not appear to require much moral attention. But when these acts are repeated over time, what emerges is something dangerous. A causal feedback loop between the taker and the cultural material results in changing the material’s significance. For example, the ruana, which is native to Colombia and initially made by its indigenous and Mestizo people, risks transformation from being a garment contained within Colombian culture and history to a ‘costume’ available for white people to imbue with new cultural meaning.

This is what makes cultural appropriation a unique moral issue. Because cultural materials partly define the identity of a cultural group, such as the kippah for Jewish people or the kimono in Japanese culture, when these materials are appropriated by another group and imbued with new cultural meaning, the boundaries between the groups start to blur.

It becomes difficult to locate the fault lines between the culture that has taken and the culture that has been taken from. Continuing with our example, if the ruana becomes forcibly transfigured to meet the costume-related desires of white society, it will no longer belong to just one culture; it will become a ‘shared’ cultural material, a common artefact that partly defines both.

Perhaps this doesn’t seem like such a bad thing. Cultural exchange can be mutually beneficial, after all. But in the context of historical oppression, cultural appropriation is morally alarming. Consider Colombia. It was colonised by the Spanish in the 1500s, and with this came violence, genocide, disease and environmental destruction. The impacts of this history are still felt today, with socio-economic disparities, compromised life opportunities, insecurity and violence, political instability, mass displacement, and a struggle for the recognition and respect of indigenous peoples.  

Being sensitive to the history of oppression and its present impact means we must be morally on-guard against appropriation.

Colonisation in particular requires a special kind of wariness. Given how cultural appropriation can erode and obscure cultural identity boundaries, it is instrumental in furthering colonising projects. Specifically, the effect of cultural appropriation on cultures is asymmetrical: marginalised cultures become ‘subsumed’ within dominant ways of life. For example, the ruana, if continuously used by white people, could become a shared cultural artefact dominantly understood as a colourful ‘poncho’ to be worn at Halloween, rather than something at the heart of Colombian cultural practices.  

This unique way of exacerbating conditions of inequality means that cultural appropriation is a moral concept worth holding onto. Contrary to Luara’s scepticism, there isn’t anything else in our bag of moral terminology up to the task of capturing the distinctive wrong that historically marginalised cultures face when they are subjected to changes from the outside.  

Appreciate, but not appropriate

Does this mean we can never engage with unfamiliar cultural materials? In order to answer, we must consider the distinction between cultural appropriation and cultural appreciation. Where the former erodes cultural boundaries, the latter respects them. But appreciating culture takes considered effort.

Firstly, it’s important to learn the cultural significance of a material and whether its use contributes to an existing cultural practice, rather than playing a role in establishing a new one. Secondly, we should ask whether others will interpret our use of a cultural material in the same way we do. For example, when one wears a ruana to a traditional Colombian festival, one contributes to existing cultural practices, and one can be seen by others to be participating in this way. This keeps the ruana within its cultural domain rather than giving it new meaning that overshadows its original significance.

When a child asks to wear a culturally significant outfit for Halloween, we need to be mindful of the context in which it is worn – and if it’s taken outside of its cultural context, consider whether it could be a case of cultural appropriation.

And remember that there are kinds of costume that do not risk diluting other cultures or reinforcing historical injustices. By practising cultural awareness, we can enjoy events like Halloween and do so in a way that respects and appreciates other cultures.  


Visit FODI on demand for more provocative ideas, articles, podcasts and videos.


Age of the machines: Do algorithms spell doom for humanity?

The world’s biggest social media platform’s slide into a cesspit of fake news, clickbait and shouty trolling was no accident.  

“Facebook gives the most reach to the most extreme ideas. They didn’t set out to do it, but they made a whole bunch of individual choices for business reasons,” Facebook whistleblower Frances Haugen said. 

In her Festival of Dangerous Ideas talk Unmasking Facebook, data engineer Haugen explained that back in Facebook’s halcyon days of 2008, when it actually was about your family and friends, your personal circle wasn’t making enough content to keep you regularly engaged on the platform. To encourage more screentime, Facebook introduced Pages and Groups and started pushing them on its users, even adding people automatically if they interacted with content. Naturally, the more out-there groups became the more popular ones – in 2016, 65% of people who joined neo-Nazi groups in Germany joined because Facebook suggested them.  

By 2019 (if not earlier), 60% of all content that people saw on Facebook was from their Groups, pushing out legitimate news sources, bi-partisan political parties, non-profits, small businesses and other pages that didn’t pay to promote their posts. Haugen estimates content from Groups is now 85% of Facebook. 

I was working for an online publisher between 2013 and 2016, and our traffic was entirely at the will of the Facebook algorithm. Some weeks we’d be prominent in people’s feeds and get great traffic, other weeks it would change without warning and our traffic and revenue would drop to nothing. By 2016, the situation had gotten so bad that I was made redundant and in 2018 the website folded entirely and disappeared from the internet. 

Personal grievances aside, Facebook has also had sinister implications for democracy and impacts on genocide, as Haugen reminds us. The 2016 Trump election exposed serious privacy deficits at Facebook when 87 million users had their data leaked to Cambridge Analytica for targeted pro-Trump political advertising. Enterprising Macedonian fake news writers exploited the carousel recommended link function to make US$40 million pumping out insane – and highly clickable – alt-right conspiracy theories that undoubtedly played a part in helping Trump into the White House – along with the hackers spreading anti-Clinton hate from the Glavset in St Petersburg. 

Worse, the Myanmar government sent military officials to Russia to learn to use online propaganda techniques for their genocide of the Muslim Rohingya from 2016 onwards, flooding Facebook with vitriolic anti-Rohingya misinformation and inciting violence against them. As The Guardian reported, around that time Facebook had only two Burmese-speaking content moderators. Facebook has also been blamed for “supercharging hate speech and inciting ethnic violence” (Vice) in Ethiopia over the past two years, with engagement-based ranking pushing the most extreme content to the top and English-first content moderation systems being no match for linguistically diverse environments where Facebook is the internet.  
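To see why “engagement-based ranking” behaves this way, consider a toy sketch in Python. The weights and example posts below are invented purely for illustration – this is not Facebook’s actual formula – but any ranker that scores posts by reactions will surface whatever provokes the most reactions:

```python
# A toy illustration of engagement-based ranking. The posts and weights are
# invented for illustration; this is not Facebook's actual formula.

posts = [
    {"text": "Local council opens new library", "likes": 120, "comments": 8, "shares": 3},
    {"text": "Measured take on a policy debate", "likes": 90, "comments": 25, "shares": 10},
    {"text": "Outrage-bait conspiracy post", "likes": 60, "comments": 400, "shares": 350},
]

def engagement_score(post: dict) -> int:
    # Hypothetical weights: comments and shares signal stronger reactions
    # than passive likes, so an engagement-optimising ranker counts them for more.
    return post["likes"] + 5 * post["comments"] + 10 * post["shares"]

# Sort the feed purely by engagement: the most provocative post rises to the top.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])
```

Ranked this way, the outrage-bait post tops the feed even though fewer people ‘liked’ it – which is exactly the dynamic Haugen describes.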

There are design tools that can drive down the spread of misinformation, like requiring people to click through to an article before they can share it, and adding friction once content travels beyond friends-of-friends, so that anyone further down the reshare chain must copy and paste the content rather than share or react to it with one click. These measures are as effective at slowing the spread of misinformation as third-party fact-checkers and they work in any language, Haugen said, and we can mobilise as nations and as customers to put pressure on companies to implement them.
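To make that concrete, here is a minimal sketch in Python of how such a reshare-friction rule might work. Everything in it (the names, the two-hop threshold, the data structure) is my own illustrative assumption, not Facebook's actual implementation or Haugen's exact proposal.

    from dataclasses import dataclass

    # Assumed threshold: beyond this many hops from the original post,
    # one-click sharing is switched off (illustrative, not Facebook's).
    MAX_ONE_CLICK_DEPTH = 2

    @dataclass
    class Post:
        author: str
        reshare_depth: int  # 0 = original post, 1 = first reshare, ...

    def can_one_click_share(post: Post) -> bool:
        # Instant sharing is only allowed close to the original poster.
        return post.reshare_depth < MAX_ONE_CLICK_DEPTH

    def share(post: Post, user: str) -> Post:
        if not can_one_click_share(post):
            # The content can still spread, but only through the deliberate
            # effort of copying and pasting it: that is the friction.
            raise PermissionError("Copy and paste the content to share it further.")
        return Post(author=user, reshare_depth=post.reshare_depth + 1)

The point isn't the code itself but the design principle: one moment of deliberate effort, inserted deep in the reshare chain, slows viral spread without censoring anyone.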

But the best thing we can do is insist on keeping humans involved in decisions about where our attention goes, because algorithms optimising for engagement will always surface the most extreme content – it's what gets the most clicks and eyeballs.
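That dynamic is easy to demonstrate. Here is a toy Python example (the posts and click predictions are entirely invented) showing how ranking purely by predicted engagement pushes the most extreme item to the top of the feed.

    # Invented example data: not real posts or real metrics.
    posts = [
        {"text": "Local bake sale raises $400", "predicted_clicks": 12},
        {"text": "Cute dog learns to skateboard", "predicted_clicks": 85},
        {"text": "THEY are LYING to you!!!", "predicted_clicks": 430},
    ]

    # Engagement-based ranking: sort purely by expected clicks.
    feed = sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)

    for post in feed:
        print(post["text"])  # the most extreme post prints first

Nobody chose the outrage; the metric did.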

Technology writer Kevin Roose, though, argued in his talk Caught in a Web that we are already surrounded by artificial intelligence and algorithms, and that they're only going to get smarter, more sophisticated, and more deeply entrenched.

Some 70% of our time on YouTube is now spent watching videos suggested by recommendation engines, and 30% of Amazon page views come from recommendations. We let Netflix pick shows for us, Spotify curate radio for us, Google Maps tell us which way to drive or walk, and, with the Internet of Things, smart fridges even order milk and eggs for us before we know we need them.

One AI researcher told Roose about a commercialised tool called pedestrian re-identification, which can recognise you across multiple CCTV feeds, combine that with your phone's location data and your bank transactions, and work out how to serve you an ad for banana bread just as you're getting off the train and walking towards your favourite café.

And in news that will horrify but not surprise journalists, Roose said we're entering a new age of ubiquitous synthetic media, in which machine-written articles will be hyper-personalised for each reader at the point of click by crawling their social media profiles.

After 125 years of the reign of ‘all the news that’s fit to print’, we’re now entering the era of “all the news that’s dynamically generated and personalised by machines to achieve business objectives.” 

How can we fight back and resist this seemingly inevitable drift towards automation, surveillance and passivity? Roose highlights three things to do:  

1. Know Thyself. Quoting Socrates: know your own preferences, and whether you're choosing something because you like it or because the algorithm suggested it to you.

2. Resist Machine Drift. This is where we unconsciously hand over more and more of our decisions to machines, and "it's the first step in losing our autonomy." He recommends "preference mapping": writing down a list of all your choices in a day, from what you ate or listened to, to what route you took to work. Did you make the decisions, or did an app help you?

3. Invest in Humanity. By this he means investing time in improving our deeply human skills that computers aren't good at, like moral courage, empathy and divergent, creative thinking.

Roose is optimistic in this regard – as AI gets better at understanding us and insinuating its way into our lives, he thinks we're going to see a renewed reverence for humanism and the things machines can't do. That means more appreciation for the 'soft' skills of health care workers, teachers and therapists, and perhaps even an artisanal journalism movement of stories written by humans.

I can't be quite as optimistic as Roose – these soft skills have never been highly valued under capitalism and I can't see that changing (I really hope I'm wrong). But I do agree with him that each new generation of social media app (e.g. TikTok and BeReal), in the global West at least, will be less toxic than the one before it, driven by the demands of Millennials, Generation Z, and those to come.

Eventually, this generational movement away from the legacy social media platforms, which have become infected with toxicity, will force them either to collapse or to completely reshape their business models along the lines of the newer apps if they're going to keep operating in fragile countries and emerging economies.

And that’s one reason to not let the machines win. 

 

Visit FODI on demand for more provocative ideas, articles, podcasts and videos.


Should we abolish the institution of marriage?

As it stands, marriage in the western world is the legal union of two people who are typically romantic or sexual partners. Some philosophers are now revisiting the institution of marriage and asking what can be done to reform it – and whether it should exist at all.

The institution of marriage has been around for over 4,000 years. Historians trace the earliest recorded marriages to around 2350 BCE in Mesopotamia, in modern-day Iraq. Marriage turned a woman into a man's property, her primary purpose being the production of legitimate offspring.

Throughout the following centuries and millennia, the institution of marriage evolved. As the Roman Catholic Church grew in power through the 6th, 7th, and 8th centuries, marriage became widely accepted as a sacrament – a ceremony that imparted divine grace on two people. During the Middle Ages, as land ownership became an important source of wealth and status, marriage was about securing male heirs to pass down wealth, and about raising a family's status by having a daughter marry a land-owning man.

“The property-like status of women was evident in Western societies like Rome and Greece, where wives were taken solely for the purpose of bearing legitimate children and, in most cases, were treated like dependents and confined to activities such as caring for children, cooking, and keeping house.”

The idea that a marriage should be about love only really took hold in the 1500s, during the period now known as the Renaissance. Not much improved with regard to equality for women, but the movement did put forth the idea that both parties should enter a marriage consensually. Instead of being viewed as property to be bought and sold with a dowry, women gained more autonomy, which elevated their social status. Into the 1700s, while the working class were essentially free to marry whoever they wanted (as long as they married within their own social class), girls born into aristocratic families were betrothed as infants and married off as teenagers in financial alliances between families.

But marriage doesn't look like this anymore, right? It's easy to forget that interracial marriage was illegal in parts of the United States until 1967, and in South Africa until 1985. Marital rape only became a crime in all American states in 1993. Australia only legalised gay marriage at the end of 2017, making it one of 31 countries to do so. In the 4,000-year history of marriage, most of the progress towards marriage equality has happened in the last 50 years.

The nasty history of marriage has prompted some philosophers to ask: is it time to get rid of the institution of marriage altogether? Or can it be reformed?

An argument for the abolition of marriage

“Freedom for women cannot be won without the abolition of marriage.” 

In the last hundred years there has been plenty of discourse about where marriage fits into modern life. One notable voice was Sheila Cronan, an activist in America's second-wave feminist movement, who argued that marriage is comparable to slavery: women performed unpaid labour in the home and were reliant on their husbands for financial and social protection.

“Attack on such issues as employment discrimination are superfluous; as long as women are working for free in the home we cannot expect our demands for equal pay outside the home to be taken seriously.”

Cronan believed it would be impossible to achieve true gender equality as long as marriage remained a dominant institution. The comparison of marriage to slavery was hugely controversial, though, because white women had significantly better living conditions and security than slaves. In a modern, western context, Cronan's argument may seem like an overstatement of the woes of marriage.

Is there an alternative to abolition?

Another contribution to the philosophy of marriage is the work of Elizabeth Brake. Instead of abolishing marriage, she puts forward a theory called "minimal marriage" in her 2010 paper What Political Liberalism Implies for Marriage Law: any people should be allowed to get married and enjoy all the legal rights that come with it, regardless of the kind of relationship they are in or the number of people in it.

Brake argues that when the state allows some kinds of marriage but not others, it asserts that one kind of relationship is more morally acceptable than another. Marriage in the western world provides a number of legal benefits: visitation rights in hospital if someone gets sick, fewer complications around joint bank accounts, and the right to inherit a partner's estate, to name a few. Marriage is also often viewed as a superior kind of relationship, so those who are not allowed to marry are seen as being in an inferior one.

Take, for example, two elderly women who are close friends and have lived together for the last 30 years. If one of them falls ill and needs to go to hospital, her friend might not be able to visit her because she is not a spouse or next of kin. If she passes away and her friend is not in her will, the friend will have no say over what happens to her estate. The only reason these friends never married is that they feel no romantic attraction to each other. But Brake asks us: why should their relationship be seen as less valuable or less important than a romantic one? Why should their caring relationship not be afforded the legal rights of marriage?

Many legal rights are tied to marriage. In the US there are over 1,000 federal "statutory provisions" – clauses written into law – in which marital status is a factor in determining who gets a benefit, privilege, or right. Brake argues that reforming marriage to be "minimal" is the best way to ensure that as many people as possible have access to these legal rights.

So, what should the future of marriage be?

Many people today will say that the day they got married was one of the best days of their lives. But just because we have a more positive view of marriage now does not erase its thousands of years of discriminatory history. And practices such as child marriage and arranged marriage, now rare in the western world, remain the norm in other parts of the world.

While Cronan presents a strong argument for abolishing marriage and Brake presents a strong argument for reforming it, we also need to examine the underlying social ills that make marriage so complicated. There is no guarantee that abolishing or reforming marriage would eliminate the sexism, racism, and homophobia that made it so discriminatory in the first place. Marriage may not create inequality so much as it is a symptom of inequality. So while the question of what to do with marriage is worth interrogating, it's important to consider the larger role it might play in creating social change and working towards equality.