
Ethics Explainer: Beauty


Research shows that physical appearance can affect everything from the grades of students to the sentencing of convicted criminals – are looks and morality somehow related?

Ancient philosophers spoke of beauty as a supreme value, akin to goodness and truth. The word itself alluded to far more than aesthetic appeal, implying nobility and honour – its counterpart, ugliness, made all the more shameful in comparison.

From the writings of Plato to Heraclitus, beautiful things were argued to be vital links between finite humans and the infinite divine. Indeed, across various cultures and epochs, beauty was praised as a virtue in and of itself; to be beautiful was to be good and to be good was to be beautiful.

When people first began to ask, ‘what makes something (or someone) beautiful?’, they came up with some weird ideas – think Pythagorean triangles and golden ratios as opposed to pretty colours and chiselled abs. Such aesthetic ideals of order and harmony contrasted with the chaos of the time and are present throughout art history.


Leonardo da Vinci, Vitruvian Man, c.1490 

These days, a more artificial understanding of beauty as a mere observable quality shared by supermodels and idyllic sunsets reigns supreme. 

This is because the rise of modern science necessitated a reappraisal of many important philosophical concepts. Beauty lost relevance as a supreme value of moral significance in a time when empirical knowledge and reason triumphed over religion and emotion.  

Yet, as the emergence of a distinct branch of philosophy known as aesthetics revealed, many still wondered what made something beautiful to look at – even if, in the modern sense, beauty is only skin deep.

Beauty: in the eye of the beholder?

In the ancient and medieval era, it was widely understood that certain things were beautiful not because of how they were perceived, but rather because of an independent quality that appealed universally and was unequivocally good. According to thinkers such as Aristotle and Thomas Aquinas, this was determined by forces beyond human control and understanding. 

Over time, this idea of beauty as entirely objective became demonstrably flawed. After all, if this truly were the case, then controversy wouldn’t exist over whether things are beautiful or not. For instance, to some, the Mona Lisa is a truly wonderful piece of art – to others, evidence that Da Vinci urgently needed an eye check.  

Leonardo da Vinci, The Mona Lisa, 1503, Photographed at The Louvre, present day 

Consequently, definitions of beauty that accounted for these differences in opinion began to gain credence. David Hume famously quipped that beauty “exists merely in the mind which contemplates”. To him and many others, the enjoyable experience associated with the consumption of beautiful things was derived from personal taste, making the concept inherently subjective.  

This idea of beauty as a fundamentally pleasurable emotional response is perhaps the closest thing we have to a consensus among philosophers with otherwise divergent understandings of the concept. 

Returning to the debate at hand: if beauty is not at least somewhat universal, then why do hundreds of thousands of people every year visit art galleries and cosmetic surgeons in pursuit of it? How can advertising companies sell us products on the premise that they will make us more beautiful if everyone has a different idea of what that looks like? Neither subjectivist nor objectivist accounts of the concept seem to adequately explain reality.

According to philosophers such as Immanuel Kant and Francis Hutcheson, the answer must lie somewhere in the middle. Essentially, they argue that a mind that can distance itself from its own individual beliefs can also recognize if something is beautiful in a general, objective sense. Hume suggests that this seemingly universal standard of beauty arises when the tastes of multiple, credible experts align. And yet, whether or not this so-called beautiful thing evokes feelings of pleasure is ultimately contingent upon the subjective interpretation of the viewer themselves. 

Looking good vs being good

If this seemingly endless debate has only reinforced your belief that beauty is a trivial concern, then you are not alone! During modernity and postmodernity, philosophers largely abandoned the concept in pursuit of more pressing matters – read: nuclear bombs and existential dread. Artists also expressed their disdain for beauty, perceived as a largely inaccessible relic of tired ways of thinking, through an expression of the anti-aesthetic. 

Marcel Duchamp, Fountain, 1917

Nevertheless, we should not dismiss the important role beauty plays in our day-to-day life. Whilst its association with morality has long been out of vogue among philosophers, this is not true of broader society. Psychological studies continually observe a ‘halo effect’ around beautiful people and things that sees us interpret them in a more favourable light, leading attractive people to be paid higher wages and receive better loans than their less attractive peers.

Social media makes it easy to feel that we are not good enough, particularly when it comes to looks. Perhaps uncoincidentally, we are, on average, increasing our relative spending on cosmetics, clothing, and other beauty-related goods and services.

Turning to philosophy may help us avoid getting caught in a hamster wheel of constant comparison. From a classical perspective, the best way to achieve beauty is to be a good person. Or maybe you side with the subjectivists, who tell us that being beautiful is meaningless anyway. Irrespective, beauty is complicated, ever-important, and wonderful – so long as we do not let it unfairly cloud our judgements. 

 

Step through the mirror and examine what makes someone (or something) beautiful and how this impacts all our lives. Join us for the Ethics of Beauty on Thur 29 Feb 2024 at 6:30pm. Tickets available here.



Big Thinker: Kate Manne


Kate Manne (1983 – present) is an Australian philosopher who works at the intersection of feminist philosophy, metaethics, and moral psychology.

While Manne is an academic philosopher by training and practice, she is best known for her contributions to public philosophy. Her work draws upon the methodology of analytic philosophy to dissect the interrelated phenomena of misogyny and masculine entitlement.

What is misogyny?

Manne’s debut book, Down Girl: The Logic of Misogyny (2018), develops and defends a robust definition of misogyny that allows us to better analyse the prevalence of violence and discrimination against women in contemporary society. Manne argues that, contrary to popular belief, misogyny is not a “deep-seated psychological hatred” of women, most often exhibited by men. Instead, she conceives of misogyny in structural terms, arguing that it is the “law enforcement” branch of patriarchy (male-dominated society and government), which exists to police the behaviour of women and girls through gendered norms and expectations.

Manne distinguishes misogyny from sexism by suggesting that the latter is more concerned with justifying and naturalising patriarchy through the spread of ideas about the relationship between biology, gender and social roles.

While the two concepts are closely related, Manne believes that people are capable of being misogynistic without consciously holding sexist beliefs. This is because misogyny, much like racism, is systemic and capable of flourishing regardless of someone’s psychological beliefs.

One of the most distinctive features of Manne’s philosophical work is that she interweaves case studies from public and political life into her writing to powerfully motivate her theoretical claims.

For instance, in Down Girl, Manne offers up the example of Julia Gillard’s famous misogyny speech from October 2012 as evidence of the distinction between sexism and misogyny in Australian politics. She contends that Gillard’s characterisation of then-Opposition Leader Tony Abbott’s behaviour toward her as both sexist and misogynistic is entirely apt. His comments about the suitability of women to politics and characterisation of female voters as immersed in housework display sexist values, while his endorsement of statements like “Ditch the witch” and “man’s bitch” is designed to shame and belittle Gillard in accordance with misogyny.

Himpathy and herasure

One of the key concepts coined by Kate Manne is “himpathy”. She defines himpathy as “the disproportionate or inappropriate sympathy extended to a male perpetrator over his similarly, or less privileged, female targets in cases of sexual assault, harassment, and other misogynistic behaviour.”

According to Manne, himpathy operates in concert with misogyny. While misogyny seeks to discredit the testimony of women in cases of gendered violence, himpathy shields the perpetrators of that misogynistic behaviour from harm to their reputation by positioning them as “good guys” who are the victims of “witch hunts”. Consequently, the traumatic experiences of those women and their motivations for seeking justice are unfairly scrutinised and often disbelieved. Manne terms the impact of this social phenomenon upon women, “herasure.”

Manne’s book Entitled: How Male Privilege Hurts Women (2020) illustrates the potency of himpathy by analysing the treatment of Brett Kavanaugh during the Senate Judiciary Committee’s investigation into allegations of sexual assault levelled against Kavanaugh by Professor Christine Blasey Ford. Manne points to the public’s praise of Kavanaugh as a brilliant jurist who was being unfairly defamed by a woman who sought to derail his appointment to the Supreme Court of the United States as an example of himpathy in action.

She also suggests that the public scrutiny of Ford’s testimony and the conservative media’s attack on her character functioned to diminish her credibility in the eyes of the law and erase her experiences. The Senate’s ultimate endorsement of Justice Kavanaugh’s appointment to the Supreme Court proved Manne’s thesis – that male entitlement to positions of power is a product of patriarchy and serves to further entrench misogyny.

Evidently, Kate Manne is a philosopher who doesn’t shy away from thorny social debates. Manne’s decision to enliven her philosophical work with empirical evidence allows her to reach a broader audience and to increase the accessibility of philosophy for the public. She represents a new generation of female philosophers – brave, bold, and unapologetically political.



Ethics Explainer: Pragmatism


Pragmatism is a philosophical school of thought that, broadly, is interested in the effects and usefulness of theories and claims.

Pragmatism is a distinct school of philosophical thought that began at Harvard University in the late 19th century. Charles Sanders Peirce and William James were members of the university’s ‘Metaphysical Club’ and both came to believe that many disputes taking place between its members were empty concerns. In response, the two began to form a ‘Pragmatic Method’ that aimed to dissolve seemingly endless metaphysical disputes by revealing that there was nothing to argue about in the first place.

How it came to be

Pragmatism is best understood as a school of thought born from a rejection of metaphysical thinking and the traditional philosophical pursuits of truth and objectivity. The Socratic and Platonic theories that form the basis of a large portion of Western philosophical thought aim to find and explain the “essences” of reality and uncover truths that are believed to be obscured from our immediate senses.

This Platonic aim for objectivity, in which knowledge is taken to be an uncovering of truth, is one which would have been shared by many members of Peirce and James’ ‘Metaphysical Club’. In one of his lectures, James offers an example of a metaphysical dispute:

A squirrel is situated on one side of a tree trunk, while a person stands on the other. The person quickly circles the tree hoping to catch sight of the squirrel, but the squirrel also circles the tree at an equal pace, such that the two never enter one another’s sight. The grand metaphysical question that follows? Does the man go round the squirrel or not?

Seeing his friends ferociously arguing for their distinct position led James to suggest that the correctness of any position simply turns on what someone practically means when they say, ‘go round’. In this way, the answer to the question has no essential, objectively correct response. Instead, the correctness of the response is contingent on how we understand the relevant features of the question.

Truth and reality

Metaphysics often talks about truth as a correspondence to or reflection of a particular feature of “reality”. In this way, the metaphysical philosopher takes truth to be a process of uncovering (through philosophical debate or scientific enquiry) the relevant feature of reality.

On the other hand, pragmatism is more interested in how useful any given truth is. Instead of thinking of truth as an ultimately achievable end where the facts perfectly mirror some external objective reality, pragmatism instead regards truth as functional or instrumental (James) or the goal of inquiry where communal understanding converges (Peirce).

Take gravity, for example. Pragmatism doesn’t view it as true because it’s the ‘perfect’ understanding and explanation for the phenomenon, but it does view it as true insofar as it lets us make extremely reliable predictions and it is where vast communal understanding has landed. It’s still useful and pragmatic to view gravity as a true scientific concept even if in some external, objective, all-knowing sense it isn’t the perfect explanation or representation of what’s going on.

In this sense, truth is capable of changing and is contextually contingent, unlike traditional views. Pragmatism argues that what is considered ‘true’ may shift or multiply when new groups come along with new vocabularies and new ways of seeing the world.

To reconcile these constantly changing states of language and belief, Peirce constructed a ‘Pragmatic Maxim’ to act as a method by which thinkers can clarify the meaning of the concepts embedded in particular hypotheses. One formulation of the maxim is:

Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object.

In other words, Peirce is saying that the disagreement in any conceptual dispute should be describable in a way which impacts the practical consequences of what is being debated. Pragmatic conceptions of truth take seriously this commitment to practicality. Richard Rorty, who is considered a neopragmatist, writes extensively on a particular pragmatic conception of truth.

Rorty argues that the concept of ‘truth’ is not dissimilar to the concept of ‘God’, in the way that there is very little one can say definitively about God. Rorty suggests that rather than aiming to uncover truths of the world, communities should instead attempt to garner as much intersubjective agreement as possible on matters they agree are important.

Rorty wants us to stop asking questions like, ‘Do human beings have inalienable human rights?’, and begin asking questions like, ‘Should we work towards obtaining equal standards of living for all humans?’.  The first question is at risk of leading us down the garden path of metaphysical disputes in ways the second is not. As the pragmatist is concerned with practical outcomes, questions which deal in ‘shoulds’ are more aligned with positing future directed action than those which get stuck in metaphysical mud.

Perhaps the pragmatists simply want us to ask ourselves: Is the question we’re asking, or hypothesis that we’re posing, going to make a useful difference to addressing the problem at hand? Useful, as Rorty puts it, is simply that which gets us more of what we want, and less of what we don’t want. If what we want is collective understanding and successful communication, we can get it by testing whether the questions we are asking get us closer to that goal, not further away.



Big Thinker: David Hume


There are few philosophers whose work has ranged over such vast territory as David Hume (1711—1776).

If you’ve ever felt underappreciated in your time, let the story of David Hume console you: despite being one of the most original and profound thinkers of his or any era, the Scottish philosopher never held an academic post. Indeed, he described his magnum opus, A Treatise of Human Nature, as falling “dead-born from the press.” When he was recognised at all during his lifetime, it was primarily as a historian – his multi-volume work on the history of the British monarchy was heralded in France, while in his native country, he was branded a heretic and a pariah for his atheistic views.

Yet, in the many years since his passing, Hume has been retroactively recognised as one of the most important writers of the Early Modern era. His works, which touch on everything from ethics and religion to metaphysics, economics, politics and history, continue to inspire fierce debate and admiration in equal measure. It’s not hard to see why. The years haven’t cooled off the bracing inventiveness of Hume’s writing one bit – he is as frenetic, wide-ranging and profound as he ever was.

Empathy

The foundation of Hume’s ethical system is his emphasis on empathy, sometimes referred to as “fellow-feeling” in his writing. Hume believed that we are constantly being shaped and influenced by those around us, via both an imaginative, perspective-taking form of empathy – putting ourselves in others’ shoes – and a “mechanical” form of empathy, now called emotional contagion.

Ever walked into a room of laughing people and found yourself smiling, even though you don’t know what’s being laughed at? That’s emotional contagion, a means by which we unconsciously pick up on the emotional states of those around us.

Hume emphasised these forms of fellow-feeling as the means by which we navigate our surroundings and make ethical decisions. No individual is disconnected from the world – no one is able to move through life without the emotional states of their friends, lovers, family members and even strangers getting under their skin. So, when we act, it is rarely in a self-interested manner – we are too tied up with others to ever behave in a way that serves only ourselves.

The Nature of the Self

Hume is also known for his controversial views on the self. For Hume, there is no stable, internalised marker of identity – no unchanging “me”. When Hume tried to search inside himself for the steady and constant “David Hume” he had heard so much about, he found only sensations – the feeling of being too hot, of being hungry. The sense of self that others seemed so certain of seemed utterly artificial to him, a tool of mental processing that could just as easily be dispatched.

Hume was no fool – he knew that agents have “character traits” and often behave in dependable ways. We all have that funny friend who reliably cracks a joke, the morose friend who sees the worst in everything. But Hume didn’t think that these character traits were evidence of stable identities. He considered them more like trends, habits towards certain behaviours formed over the course of a lifetime.

Such a view had profound impacts on Hume’s ethics, and fell in line with his arguments concerning empathy. After all, if there is no self – if the line between you and me is much blurrier than either of us initially imagined – then what could be seen as selfish behaviours actually become selfless ones. Doing something for you also means doing something for me, and vice versa.

On Hume’s view, we are much less autonomous, sure, forever buffeted around by a world of agents whose emotional states we can’t help but catch, no sense of stable identity to fall back on. But we’re also closer to others; more tied up in a complex social web of relationships, changing every day.

Moral Motivation

Prior to Hume, the most common picture of moral motivation – one initially drawn by Plato – was of rationality as a carriage driver, whipping and controlling the horses of desire. According to this picture, we act after we decide what is logical, and our desires then fall into place – we think through our problems, rather than feeling through them.

Hume, by contrast, argued that the inverse was true. In his ethical system, it is desire that drives the carriage, and logic is its servant. We are only ever motivated by these irrational appetites, Hume tells us – we are victims of our wants, not of our mind at its most rational.

Reason is, and ought only to be the slave of the passions and can never pretend to any other office than to serve and obey them.

At the time, this was seen as a shocking inversion. But much of modern psychology bears Hume out. Consider the work of Sigmund Freud, who understood human behaviour as guided by a roiling and uncontrollable id. Or consider the situation where you know the “right” thing to do, but act in a way inconsistent with that rational belief – hating a successful friend and acting to sabotage them, even when on some level you understand that jealousy is ugly.

There are some who might find Hume’s ethics somewhat depressing. After all, it is not pleasant to imagine yourself as little more than a constantly changing series of emotions, many of which you catch from others – and often without even wanting to. But there is great beauty to be found in his ethical system too. Hume believed he lived in a world in which human beings are not isolated, but deeply bound up with each other, driven by their desires and acting in ways that profoundly affect even total strangers.

Given we are so often told our world is only growing more disconnected, belief in the possibility to shape those around you – and therefore the world – has a certain beauty all of its own.


Ethics Explainer: Autonomy

Autonomy is the capacity to form beliefs and desires that are authentic and in our best interests, and then act on them.

What is it that makes a person autonomous? Intuitively, it feels like a person with a gun held to their head is likely to have less autonomy than a person enjoying a meandering walk, peacefully making a choice between the coastal track or the inland trail. But what exactly are the conditions which determine someone’s autonomy?

Is autonomy just a measure of how free a person is to make choices? How might a person’s upbringing influence their autonomy, and their subsequent capacity to act freely? Exploring the concept of autonomy can help us better understand the decisions people make, especially those we might disagree with.

The definition debate

Autonomy, broadly speaking, refers to a person’s capacity to adequately self-govern their beliefs and actions. All people are in some way influenced by powers outside of themselves, through laws, their upbringing, and other influences. Philosophers aim to distinguish the degree to which various conditions impact our understanding of someone’s autonomy.

There remain many competing theories of autonomy.

These debates are relevant to a whole host of important social concerns that hinge on someone’s independent decision-making capability. This often results in people using autonomy as a means of justifying or rebuking particular behaviours. For example, “Her boss made her do it, so I don’t blame her” and “She is capable of leaving her boyfriend, so it’s her decision to keep suffering the abuse” are both statements that indirectly assess the autonomy of the subject in question.

In the first case, an employee is deemed to lack the autonomy to do otherwise and is therefore taken to not be blameworthy. In the latter case, the opposite conclusion is reached. In both, an assessment of the subject’s relative autonomy determines how their actions are evaluated by an onlooker.

Autonomy often appears to be synonymous with freedom, but the two concepts come apart in important ways.

Autonomy and freedom

There are numerous accounts of both concepts, so in some cases there is overlap, but for the most part autonomy and freedom can be distinguished.

Freedom tends to be broader and more overt. It usually speaks to constraints on our ability to act on our desires. This is sometimes also referred to as negative freedom. Autonomy speaks to the independence and authenticity of the desires themselves, which directly inform the acts that we choose to take. This has a lot in common with positive freedom.

For example, we can imagine a person who has the freedom to vote for any party in an election, but was raised and surrounded solely by passionate social conservatives. As a member of a liberal democracy, they have the freedom to vote differently from the rest of their family and friends, but they have never felt comfortable researching other political viewpoints, and greatly fear social rejection.

If autonomy is the capacity a person has to self-govern their beliefs and decisions, this voter’s capacity to self-govern would be considered limited or undermined (to some degree) by social, cultural and psychological factors.

Relational theories of autonomy focus on the ways we relate to others and how they can affect our self-conceptions and ability to deliberate and reason independently.

Relational theories of autonomy were originally proposed by feminist philosophers, aiming to provide a less individualistic way of thinking about autonomy. In the above case, the voter is taken to lack autonomy due to their limited exposure to differing perspectives and fear of ostracism. In other words, the way they relate to people around them has limited their capacity to reflect on their own beliefs, values and principles.

One relational approach to autonomy focuses on this capacity for internal reflection. This approach is part of what is known as the ‘procedural theory of relational autonomy’. If the woman in the abusive relationship is capable of critical reflection, she is thought to be autonomous regardless of her decision.

However, competing theories of autonomy argue that this capacity isn’t enough. These theories say that there are a range of external factors that can shape, warp and limit our decision-making abilities, and failing to take these into account is failing to fully grasp autonomy. These factors can include things like upbringing, indoctrination, lack of diverse experiences, poor mental health, addiction, etc., which all affect the independence of our desires in various ways.

Critics of this view might argue that a conception of autonomy this broad makes it difficult to determine whether a person is blameworthy or culpable for their actions, as no individual remains untouched by social and cultural influences. Given this, some philosophers reject the idea that we need to determine the particular conditions which render a person’s actions truly ‘their own’.

Maybe autonomy is best thought of as merely one important part of a larger picture. Establishing a more comprehensively equitable society could lessen the pressure on debates around what is required for autonomous action. Doing so might allow for a broadening of the debate, focusing instead on whether particular choices are compatible with the maintenance of desirable societies, rather than tirelessly examining whether or not the choices a person makes are wholly their own.



Five subversive philosophers throughout the ages


Philosophy helps us bring important questions, ideas and beliefs to the table and work towards understanding. It encourages us to engage in examination and to think critically about the world. 

Here are five philosophers from various time periods and walks of life that demonstrate the importance and impact of critical thinking throughout history.

 

Ruha Benjamin

Ruha Benjamin (1978 – present), while not a self-professed philosopher, uses her expertise in sociology to question and criticise the relationship between innovation and equity. Benjamin’s works focus on the intersection of race, justice and technology, highlighting the ways that discrimination is embedded in technology, meaning that technological progress often heightens racial inequalities instead of addressing them. One of the most prominent of these is her analysis of how “neutral” algorithms can replicate or worsen racial bias because they are shaped by their creators’ (often unconscious) biases.

“The default setting of innovation is inequity.”

 

J. J. C. Smart

J.J.C. Smart (1920 – 2012) was a British-Australian philosopher with far-reaching interests across numerous subfields of philosophy. Smart was a Foundation Fellow of the Australian Academy of the Humanities at its establishment in 1969. In 1990, he was made a Companion in the General Division of the Order of Australia. In ethics, Smart defended “extreme” act utilitarianism – a type of consequentialism – and outwardly opposed rule utilitarianism, dubbing it “superstitious rule worship”, contributing to its steady decline in popularity.

“That anything should exist at all does seem to me a matter for the deepest awe. But whether other people feel this sort of awe, and whether they or I ought to, is another question. I think we ought to.”

 

Elisabeth of Bohemia

Princess Elisabeth of Bohemia (1618 – 1680) was a philosopher who is best known for her correspondence with René Descartes. After she met him while he was in Holland, the two exchanged letters for several years. In the letters, Elisabeth questions Descartes’ early account of mind-body dualism (the idea that the mind can exist outside of the body), wondering how something immaterial can have any effect on the body. Her discussion with Descartes has been cited as the first argument for physicalism. In later letters, her criticisms prompted him to develop his moral philosophy – specifically his account of virtue. Elisabeth has featured as a key subject in feminist history of philosophy, as she was at once a brilliant and critical thinker, while also having to live with the limitations imposed on women at the time.

“Inform your intellect, and follow the good it acquaints you with.”

 

Socrates

Socrates (470 BCE – 399 BCE) is widely considered to be one of the founders of Western philosophy, though almost all we know of him is derived from the work of others, like Plato, Xenophon and Aristophanes. Socrates is known for bringing about a huge shift in philosophy away from physics and toward practical ethics – thinking about how we do live and how we should live in the world. Socrates is also known for bringing these issues to the public. Ultimately, his public encouragement of questioning and challenging the status quo is what got him killed. Luckily, his insights were written down, taught and developed for centuries to come.

“The unexamined life is not worth living.”

 

Francesca Minerva

Francesca Minerva is a contemporary bioethicist whose work includes medical ethics, technological ethics, discrimination and academic freedom. One of Minerva’s most controversial (if misunderstood) contributions to ethics is her paper, co-written with Alberto Giubilini in 2012, titled “After-birth Abortion: why should the baby live?”. In it, the pair argue that if it’s permissible to abort a foetus for a reason, then it should also be permissible to “abort” (i.e., euthanise) a newborn for the same reason. Minerva is also a strong proponent of academic freedom and co-founded the Journal of Controversial Ideas in an effort to eliminate the social pressures that threaten to impede academic progress.

“The proper task of an academic is to strive to be free and unbiased, and we must eliminate pressures that impede this.”



Making the tough calls: Decisions in the boardroom


The scenario is familiar to us all. Company X is in crisis. A series of poor management decisions set in motion a sequence of events that lead to an avalanche of bad headlines and public outcry.

When things go wrong for an organisation – so wrong that the carelessness or misdeeds revealed could be considered ethical failure – responsibility is shouldered by those who are the final decision makers. They are and should be held accountable.

Boards of organisations, and the individual directors that comprise them, collectively make decisions about strategy, governance and corporate performance. Decisions that involve the interests of shareholders, employees, customers, suppliers and the wider community. They will also involve competing values, compromises and tradeoffs, information gaps and grey areas.

In the 2021 Future of the Board report from The Governance Institute of Australia, respondents were asked to consider the most valued attributes for future board directors. Strategic and critical thinking were once again ranked the highest, closely followed by ethics and culture as the two most important areas that boards need to focus on to prevent corporate failure. A culture of accountability, transparency, trust and respect was viewed as a top factor determining a healthy dynamic between boards and management.

Ethics plays a central role in the decisions that face boards and directors, such as:

  • What constitutes a conflict of interest and how should it be managed?
  • How aggressive should tax strategies be?
  • What incentive structures and sales techniques will create a healthy and ethical organisational culture?
  • What about investments in organisations that profit from arms and weaponry?
  • How should organisations manage the effects technology has on their workforce?
  • What obligation do organisations have to protect the environment and human rights?

Together, The Australian Institute of Company Directors (AICD) and The Ethics Centre have developed a decision-making guide for directors.

Ethics in the Boardroom provides directors with a simple decision-making framework which they can use to navigate the ethical dimensions of any decision. Through the insights of directors, academics and subject matter experts, the guide also provides four lenses to frame board conversations. These lenses give directors the best chance of viewing decisions from different perspectives. Rather than talking past each other, they will help directors pinpoint and resolve disagreement.

  • Lens 1: General influences – Organisations are participants in society through the products and services they offer and their statuses as employers and influencers. The guide invites directors to seek out the broadest possible range of perspectives to enhance their choices and decisions. It also suggests that organisations should strive for leadership. What do you think about companies that take a stance on matters like climate change and same sex marriage?
  • Lens 2: The board’s collective culture and character – In ethical decision making, directors are bound to apply the values and principles of their organisation. As custodians, they must ensure that culture and values are aligned. The guide invites directors to be aware that ethical decision-making in the boardroom must be tempered. Decision making shouldn’t be driven by: form over substance, passion over reason, collegiality over concurrence, the need to be right, or legacy. Just because a particular course of action is legal, does that make it right? Just because a company has always done it that way, should they continue?
  • Lens 3: Interpersonal relationships and reasoning – Boards are collections of individuals who bring their own individual decision-making ‘style’ to the board table. Power dynamics exist in any group, with each person influencing and being influenced by others. Making room for diversity and constructive disagreement is vital. How can chairs and other directors empower every director to stand up for what is right? How do boards ensure that the person sitting quietly, with deep insights into ethical risk, has the courage to speak?
  • Lens 4: The individual director – Directors bring their own wisdom and values to decision making. But they might also bring their own motivations and biases. The guide invites directors to self-reflect and bring the best of themselves to the board table. How can we all be more reflective in our own decision making?

This guide is a must-read for anyone who has an interest in the conduct of any board-led organisation. That includes schools, sports clubs, charities and family businesses as well as large corporations.

Behind each brand and each company, there are people making decisions that affect you as a consumer, employee and citizen. Wouldn’t you rather that those at the top had ethics at the front of their mind in the decisions that they make?

Click here to view or download a copy of the guide.



Ethics Explainer: Lying


Lying is something we’ve all done at some point and we tend to take its meaning for granted, but what are we really doing when we lie, and is it ever okay?  

A person lies when they: 

  1. knowingly communicate something false 
  2. purposely communicate it as if it was true 
  3. do so with an intention to deceive. 

The intention to deceive is an essential component of lying. Take a comedian, for example – they might intentionally present a made-up story as true when telling a joke, engaging in satire, etc. However, the comedian’s purpose is not to deceive but to entertain.  

Lying should be distinguished from other deviations from the truth like: 

  • Falsehoods – false claims we make while believing what we say to be true 
  • Equivocations – the use of ambiguous language that allows a person to persist in holding a false belief. 

While these are different to lying, they can be equally problematic. Accidentally communicating false information can still result in disastrous consequences. People in positions of power (e.g., government ministers) have an obligation to inform themselves about matters under their control or influence and to minimise the spread of falsehoods. Having a disregard for accuracy, while it is not lying, should be considered wrong – especially when it is the result of negligence or indifference.

The same can be said of equivocation. The intention is still there, but the quality of exchange is different. Some might argue that purposeful equivocation is akin to “lying by omission”, where you don’t actively tell a lie, but instead simply choose not to correct someone else’s misunderstanding.  

Despite lying being fairly common, most of our lives are structured around the belief that people typically don’t do it.

We believe our friends when we ask them the time, we believe meteorologists when they tell us the weather, we believe what doctors say about our health. There are exceptions, of course, but for the most part we assume people aren’t lying. If we didn’t, we’d spend half our days trying to verify what everyone says! 

In some cases, our assumption of honesty is especially important. Democracies, for example, only function legitimately when the government has the consent of its citizens. This consent needs to be: 

  • free (not coerced) 
  • prior (given before the event needing consent) 
  • informed (based on true and accessible information) 

Crucially, informed consent can’t be given if politicians lie in any aspects of their governance. 

So, when is lying okay? Can it be justified?

Some philosophers, notably Immanuel Kant, argue that lying is always wrong – regardless of the consequences. Kant’s position rests on something called the “categorical imperative”, which views lying as immoral because:  

  1. it would be fundamentally contradictory (and therefore irrational) to make a general rule that allows lying because it would cause the concepts of lies and truths to lose their meaning 
  2. it treats people as a means rather than as autonomous beings with their own ends 

In contrast, consequentialists are less concerned with universal obligations. Instead, their foundation for moral judgement rests on consequences that flow from different acts or rules. If a lie will cause good outcomes overall, then (broadly speaking) a consequentialist would think it was justified. 

There are other things we might want to consider by themselves, outside the confines of a moral framework. For example, we might think that sometimes people aren’t entitled to the truth in principle. During a war, for instance, most people would intuit that the enemy isn’t entitled to the truth about plans and deployment details, etc. This leads to a more general question: in what circumstances do people forfeit their right to the truth?

What about “white lies”? These lies usually benefit others (sometimes at the liar’s expense!) or are about trivial things. They’re usually socially acceptable or at least tolerated because they have harmless or even positive consequences. For example, telling someone their food is delicious (even though it’s not) because you know they’ve had a long day and wouldn’t want to hurt their feelings. 

Here are some things to ask yourself if you’re about to tell a white lie: 

  • Is there a better response that is truthful?  
  • Does the person have a legitimate right to receive an honest answer? 
  • What is at stake if you give a false or misleading answer? Will the person assume you’re telling the truth and potentially harm themselves as a result of your lie? Will you be at fault? 
  • Is trust at the foundation of the relationship – and will it be damaged or broken if the white lie is found out? 
  • Is there a way to communicate the truth while minimising the hurt that might be caused? For example, does the best response to a question about an embarrassing haircut begin with a smile and a hug before the potentially hurtful response? 

Lying is a more complex phenomenon than most people consider. Essentially, our general moral aversion to it comes down to its ability to inhibit or destroy communication and cooperation – requirements for human flourishing. Whether you care about duties, consequences or something else, it’s always worth questioning your intentions to check if you are following your moral compass.  



Big Thinker: Plato


Plato (~428 BCE—348 BCE) is commonly considered to be one of the most influential writers in the history of philosophy.

Along with his teacher, Socrates, and student, Aristotle, Plato is among the most famous names in Western philosophy – and for good reason. He is one of the only ancient philosophers whose entire body of work has survived intact over the last 2,400 years, and it has influenced an incredibly wide array of fields including ethics, epistemology, politics and mathematics.

Plato was a citizen of Athens with high status, born to an influential, aristocratic family. This led him to be well-educated in several fields – though he was also a wrestler! 

Influences and writing

Plato was hugely influenced by his teacher, Socrates. Luckily, too, because a large portion of what we know about Socrates comes from Plato’s writings. In fact, Plato dedicated an entire text, The Apology of Socrates, to a defence of Socrates at his trial.

The vast majority of Plato’s work is written in the form of a dialogue – a running exchange between a few (often just two) people.  

Socrates is frequently the main speaker in these dialogues, where he uses consistent questioning to tease out thoughts, reasons and lessons from his “interlocutors”. You might have heard this referred to as the “Socratic method”.  

This method of dialogue where one person develops a conversation with another through questioning is also referred to as dialectical. This sort of dialogue is supposed to be a way to criticise someone’s reasoning by forcing them to reflect on their assumptions or implicit arguments. It’s also argued to be a method of intuition and sometimes simply to cause puzzlement in the reader because it’s unclear whether some questions are asked with a sense of irony. 

Plato’s revolutionary ideas span many fields. In epistemology, he contrasts knowledge (episteme) with opinion (doxa). Interestingly, he says that knowledge is a matter of recollection rather than discovery. He is also said to be the first person to suggest a definition of knowledge as “justified true belief”.  

Plato was also very vocal about politics, though many of his thoughts are difficult to attribute to him given the third person dialogue form of his writings. Regardless, he seems to have had very impactful perspectives on the importance of philosophy in politics: 

“Until philosophers rule as kings or those who are now called kings and leading men genuinely and adequately philosophize, that is, until political power and philosophy entirely coincide, while the many natures who at present pursue either one exclusively are forcibly prevented from doing so, cities will have no rest from evils, … nor, I think, will the human race.” 

Allegories

You might have also heard of The Allegory of the Cave. Plato reflected on the idea that most people aren’t interested in lengthy philosophical discourse and are more drawn to storytelling. The Allegory of the Cave is one of several stories that Plato created with the intent to impart moral or political questions or lessons to the reader.  

The Ring of Gyges is another story of Plato’s that revolves around a ring with the ability to make the wearer invisible. A character in the Republic proposes this idea and uses it to discuss the ethical consequences of the item – namely, whether the wearer would be happy to commit injustices with the anonymity of the ring.  

This kind of ethical dilemma mirrors contemporary debates about superpowers or anonymity on the internet. If we aren’t able to be held accountable, and we know it, how is that likely to change our feelings about right and wrong? 

The Academy

The Academy was the first institution of higher learning in the Western world. It was founded by Plato some time after he turned 30, after inheriting the property. It was free and open to the public, at least during Plato’s time, and study there consisted of conversations and problems posed by Plato and other senior members, as well as the occasional lecture. The Academy is famously where Aristotle was educated.  

After Plato’s death, the Academy continued to be led by various philosophers until it was destroyed in 86 BCE during the First Mithridatic War. However, Platonism (the philosophy of Plato) continued to be taught and revived in various ways and has had a lasting impact on many areas of life continuing today.



Ethics Explainer: Epistemology


Mostly, we take “knowledge” or “knowing” for granted, but the philosophical study of knowledge has had a long and detailed history that continues today.

We constantly claim to ‘know things’. We know the sun will rise tomorrow. We know when we drop something, it will fall. We know a factoid we read in a magazine. We know our friend’s cousin’s girlfriend’s friend saw a UFO that one time. 

You might think that some of these claims aren’t very good examples of knowledge, and that they’d be better characterised as “beliefs” – or more specifically, unjustified beliefs. Well, it turns out that’s a pretty important distinction. 

“Epistemology” comes from the Greek words “episteme” and “logos”. Translations vary slightly, but the general meaning is “account of knowledge”, meaning that epistemology is interested in figuring out things like what knowledge is, what counts as knowledge, how we come to understand things and how we justify our beliefs. In turn, this links to questions about the nature of ‘truth’. 

So, what is knowledge? 

A well-known, though still widely contentious, view of knowledge is that it is justified true belief.

This idea dates all the way back to Plato, who wrote that merely having a true belief isn’t sufficient for knowledge. Imagine that you are sick. You have no medical expertise and have not asked for any professional advice and yet you believe that you will get better because you’re a generally optimistic person. Even if you do get better, it doesn’t follow that you knew you were going to get better – only that your belief coincidentally happened to be true.  

So, Plato suggested, what if we added the need for a rational justification for our belief on top of it being true? In order for us to know something, it doesn’t just need to be true, it also needs to be something we can justify with good reason.  

Justification comes with its own unique problems, though. What counts as a good reason? What counts as a solid foundation for knowledge building?  

The two classical views in epistemology are that we should rely on the perceptual experiences we gain through our senses (empiricism) or that we should rely first and foremost on pure reason because our senses can deceive us (rationalism). Well-known empiricists include John Locke and David Hume; well-known rationalists include René Descartes and Baruch Spinoza. 

Though Plato didn’t stand by the justified true belief view of knowledge, it became quite popular up until the 20th century, when Edmund Gettier blew the problem wide open again with his paper “Is Justified True Belief Knowledge?”.  

Since then, there has been very little consensus on the definition, with many philosophers claiming that it’s impossible to create a definition of knowledge without exceptions.

Some more modern subfields within epistemology are concerned with the mechanics of knowledge between people. Feminist epistemology, and social epistemology more broadly, deals with a lot of issues that raise ethical questions about how we communicate and perceive knowledge from others.  

Prominent philosophers in this field include Miranda Fricker and José Medina. Fricker developed the concept of “epistemic injustice”, referring to injustices that involve the production, communication and understanding of knowledge.  

One type of knowledge-based injustice that Fricker focuses on, and that has large ethical considerations, is testimonial injustice. These are injustices of a kind that involve issues in the way that testimonies – the act of telling people things – are communicated, understood and believed. It largely involves the interrogation of prejudices that unfairly shape the credibility of speakers.

Sometimes we give people too much credibility because they are attractive, charismatic or hold a position of power. Sometimes we don’t give people enough credibility because of race, gender, class or other forms of bias related to identity.

These types of distinctions are at the core of ethical communication and decision-making.

When we interrogate our own views and the views of others, we want to be asking ourselves questions such as: Have I made any unfair assumptions about the person speaking? Are my thoughts about this person and their views justified? Is this person qualified? Did I get my information from a reliable source?  

In short, a healthy degree of scepticism (and self-examination) should be used to filter through information that we receive from others and to question our initial attitudes towards information that we sometimes take for granted or ignore. In doing this, we can minimise misinformation and make sure that we’re appropriately treating those who have historically been and continue to be silenced and ignored. 

Ethics draws attention to the quality and character of the decisions we make. We typically hold that decisions are better if well-informed … which is another way of saying that when it comes to ethics, knowledge matters!