Ethics Explainer: Hope
We hope for fine weather on weekends and the best for our buddies – an obvious statement that hardly screams ethics.
But within our everyday desires for good things lies a duty to each other and ourselves to act only on reasonably held hopes.
The ethics of hope
One of Immanuel Kant’s simple but resonant maxims is ‘ought implies can’. In other words, if you believe someone has an ethical responsibility to do something, it must be possible. No person is under any obligation to do what is impossible. You might call me a bad person for failing to fly through the sky and save someone falling from a great height. But your condemnation will be rejected as ill-founded for the simple reason only fictional characters can perform that feat.
Many other things – including extremely difficult things – are reasonably expected of others. A person might promise to climb Mount Everest (or at least make a serious attempt) prior to their 50th birthday. This might present the greatest challenge imaginable. Yet we know scaling the heights of Everest is possible. As such, the person who made this promise is bound to honour their commitment.
Of course, at the time of making such a promise, no person can know with absolute certainty they will be able to meet the obligation they have taken on. There are just too many variables outside of their control that can frustrate their best laid plans. Weather conditions might lead to the closure of the mountain. The need to provide personal care to a loved one could extend well beyond any anticipated period. Given this, our ethical commitments are almost always tinged with a measure of hope.
What is hope?
Hope is an expectation that some desirable circumstance will arise. Hope sometimes blends into something closer to ‘faith’ – where belief about a state of affairs cannot be proven. However, for most people, most of the time, ‘hope’ is a reasonable expectation.
For example, if a person makes commitments that critically depend on other people keeping their promises, that person cannot know for certain they can honour their word. Yet, if these people are known and trusted, perhaps based on past experience, then a hopeful dependence on their performance would be reasonable.
The same can be said of other commitments, such as promising to meet for a picnic on a particular day. You might make the plan in the hopeful expectation of fine weather and do so with good grounds based on a checked forecast predicting clear skies.
There are two things to be noted here. First, some aspects of hope depend (for their reasonableness) on the ethical commitments of other people (for example, to keep promises). It follows there will often be a reciprocal ethical aspect to the practice of ‘reasonable’ hoping.
Second, it’s not enough to be naively hopeful. Instead, one needs to take reasonable efforts to ensure there is some basis for relying on a hoped-for circumstance. This is especially so if the hoped-for circumstance is of critical importance to matters of grave ethical significance – such as making a promise to someone.
Given this, there may be good grounds to calibrate commitments in line with the degree to which you might reasonably hope for a particular circumstance to prevail. For example, rather than making an open commitment to meet for a particular picnic on a particular day it might be better to qualify the point by saying, “I promise to meet you if the weather is fine”.
‘It’s not enough to be naively hopeful.’
We often see the absence of this kind of forethought when it comes to the promises made by politicians during elections. They will make promises – probably based on hopeful projections about the future – only to find themselves accused of lying or having acted in bad faith when the promise is not honoured.
It’s insufficient for the politician to say they merely ‘hoped’ to be able to keep their word and that they now find their situation to be unexpectedly different. It would have been far better and far more responsible to qualify the promise in line with what might explicitly and reasonably be hoped for.
Two final comments. First, it should be understood a person often has some control over whether or not their hopes can be realised. As such, each person is responsible for those of their actions that impinge on the way they meet their obligations – we are not simply ‘bystanders’ who can idly hope for certain outcomes without lifting a finger to make them manifest.
Second, given our inability to know what the future holds, hope always plays a role in the process of making ethical commitments. The key thing is to be reasonable in what we hope for and to calibrate our commitments accordingly.
Ethics Explainer: Anarchy
Anarchy isn’t just street protests and railing against the establishment.
It’s a serious political philosophy that believes people will flourish in a leaderless society without a centralised government. It may sound like a crazy ideal, but it’s almost worked before.
A hastily circled letter A etched into bus windows and spray painted on walls. The vigilante anarchist character known only as V from V for Vendetta. The Sex Pistols’ Johnny Rotten singing, “I wanna be anarchy”.
Think of anarchy and you might just imagine an 80s punk throwing a Molotov cocktail in a street protest. But while anarchy easily conjures rich imagery of railing-against-the-orthodoxy rebelliousness, there’s more to it than cool cachet. At the heart of this ideology is decentralisation.
Disorder versus political philosophy
The word anarchy is often used as an adjective to describe a state of public chaos. You’ll hear it dropped in news reports of civil unrest and riots with flavours of vandalism and violence. But anarchists aren’t traditionally looters throwing bricks through shop windows.
Anarchy is a political philosophy. Philographics – a series that defines complex philosophical concepts with a short sentence and simple graphic – describes anarchy as:
“A range of views that oppose the idea of the state as a means of governance, instead advocating a society based on non-hierarchical relationships.”
Instead of structured governments enforcing laws, anarchists believe people should be free to voluntarily organise and cooperate as they please. And because governments around the world are already established states with legal systems, many anarchists see their work as abolishing them.
The word anarchy derives from the ancient Greek term anarchia, which basically means “without leader” or “without authority”. Some literal translations put it as “not hierarchy”.
That may conjure notions of disorder, but the founder of anarchy imagined it to be a peaceful, cooperative thing.
“Anarchy is order without power” – Pierre-Joseph Proudhon, the ‘father of anarchy’
The “father of anarchy”
The first known anarchist was French philosopher and politician Pierre-Joseph Proudhon. It’s perhaps notable he took office after the French Revolution of 1848 overthrew the monarchy. Eight years prior, Proudhon published the defining theoretical text that influenced later anarchist movements.
“Property is theft!”, Proudhon declared in his book, What is Property? Or, an Inquiry into the Principle of Right and Government. His starting point for this argument was the Christian point of view that God gave Earth to all people.
This idea that natural resources are for equal share and use is also referred to as the universal commons. Proudhon felt it followed that private ownership meant land was stolen from everyone who had a right to benefit from it.
This premise is a crucial basis to Proudhon’s anarchist thesis because it meant people weren’t rightfully free to move in and use lands as they wished or required. Their means of production had been taken from them.
Anarchy’s heyday: the Spanish Civil War
Anarchy has usually been a European pursuit and it has waxed and waned in popularity. It had its most influence and reach in the years leading up to and during the Spanish Civil War (1936-1939), a time of great unrest and inequality between the working classes and ruling elite – which turned out to be a breeding ground for revolutionary thought.
Like the communist and socialist movements that grew alongside them, anarchists opposed the monarchy, land owning oligarchs and the military general Francisco Franco, who eventually took power.
Many different threads of the ideology gained popularity across Spain – some of it militant, some of it peaceful – and its sentiment was widely shared among everyday people.
Anarchist terrorists
While violence was never part of Proudhon’s ideal, it did become a key feature of some of the more well known examples of anarchy. First there was Spain which, perhaps by the nature of a civil war, saw many violent clashes between armed anarchists and the military.
Then there were the anarchist bomb attackers who operated around the world, perhaps most notably in late 19th and early 20th century America. They were basically yesteryear’s lone wolf terrorists.
Luigi Galleani was an Italian pro-violence anarchist based in the United States. He was eventually deported for taking part in and inspiring many bomb attacks. Reportedly, his followers, called Galleanists, were behind the 1920 Wall Street bombing that killed over 30 people and injured hundreds – the most severe terror attack in the US at the time.
No one ever claimed responsibility or was arrested for this bombing but fingers have long pointed at anti-capitalist anarchists inspired by post WWI conditions.
Could it come back?
While the law-breaking mayhem that can accompany a protest and the chaos of a collapsing society are labelled anarchy, there’s more to this sociopolitical philosophy. And if the conditions are right, we may just see another anarchist age.
Ethics Explainer: Scepticism
Scepticism is an attitude that treats every claim to truth as up for debate.
Religion, philosophy, science, history, psychology – generally, sceptics believe every source of knowledge has its limits, and it’s up to us to figure out what those are.
Sometimes confused with cynicism, a general suspicion of people and their motives, ethical scepticism is about questioning if something is right just because others say it is. If not, what will make it so?
Scepticism has played a crucial role in refining our basic understandings of ourselves and the world we live in. It is behind how we know everything is made of atoms, that time isn’t absolute, and that because Earth is a sphere, planes save time by flying curved great-circle routes rather than the straight lines we would draw on a flat map.
Ancient ideas
In Ancient Greece, some sceptics went so far as to argue that, since nothing can claim truth, it’s best to suspend judgement as long as possible. This view enjoyed a revival in 17th century Europe, prompting one of the Western canon’s most famous philosophers, René Descartes, to mount a forceful critique. But before doing so, he wanted to argue for scepticism in as holistic a fashion as possible.
Descartes wanted to prove certain truths were innate and could not be contested. To do so, he started to pick out every claim to truth he could think of – including how we see the world – and challenge it.
For Descartes, perception was unreliable. You might think the world around you is real because you can experience it through your senses, but how do you know you’re not dreaming? After all, dreams certainly feel real when you’re in them. For a little modern twist, who’s to say you’re not a brain in a vat connected up to a supercomputer, living in a virtual reality uploaded into your buzzing synapses?
This line of thinking led Descartes to question his own existence. In the midst of a deeply valuable intellectual freak out, he eventually arrived at an irrefutable claim – his doubting proved he was thinking. And from there, he deduced that if he thinks, then he exists.
“I think, therefore I am.”
It’s the quote you see plastered over t-shirts, mugs, and advertising for schools and universities. In Latin it reads, “Cogito ergo sum”.
Through a process of elimination, Descartes created a system of verifying truth claims through deduction and logic. He promoted this and quiet reflection as a way of living and came to be known as a rationalist.
The arrival of the empiricist
In the 18th century, a powerful case was made against rationalism by David Hume, an empiricist. Hume was sceptical of logical deduction’s ability to direct how people live and see the world. According to Hume, all claims to truth arise from experiences, custom and habit – not reason.
If we followed Descartes’ argument to its conclusion and assessed every single claim to truth logically, we wouldn’t be able to function. Navigating throughout the world requires a degree of trust based on past experiences. We don’t know for sure that the ground beneath us will stay solid. But considering it generally does, we trust (through inductive reasoning) that it will stay that way.
Hume argued memories and “passions” always, eventually, overrule reason. We are not what we think, but what we experience.
Perhaps you don’t question the nature of existence at the level of Descartes, but on some level, we are all sceptics. Scepticism is how we figure out who to confide in, what our triggers are, or if the next wellness fad is worth trying out. Acknowledging how powerful our habits and emotions are is key to recognising when we’re tempted to overlook the facts in favour of how something makes us feel.
But part of being a sceptic is knowing what argument will convince you. Otherwise, it can be tempting to reduce every claim to truth as a challenge to your personal autonomy.
Scepticism, in its best form, has opened up mind-boggling ways of thinking about ourselves and the world around us. Using it to be combative is a shortsighted and corrosive way to undermine the difficult task of living a well examined life.
Ethics Explainer: Stoicism
What do boxers, political figures, and that guy who’s addicted to Reddit all have in common?
They’ve probably employed the techniques of stoicism. It’s an ancient Greek philosophy that offers to answer that million dollar question, what is the best life we can live?
Hard work, altruism, prayer or relationships don’t take the top dog spot for the stoic. Instead, they zoom out and divide the world (if you’ll forgive the simplicity) into black and white: what you can control and what you can’t.
It’s the lemonade school of philosophy. The central tenet being:
“When life gives you lemons, make lemonade.”
Stoics believe that everything around us operates according to the law of cause and effect, creating a rational structure of the universe called logos. This structure means that something as awful as all your worldly possessions sinking into the ocean (as happened to Zeno of Citium), or as annoying as missing the last bus by a quarter of a minute, doesn’t make your life any worse. Your life remains as it is, nothing more, nothing less.
If you suffer, you suffer because of the judgements you’ve made about these events. The ideal life where they didn’t happen is a fantasy, and there’s no point focusing on it. Just expect that pain, grief, disappointment and injustice are going to happen. It’s what you do in response to them that counts.
This philosophy was founded by that same Zeno, who preached the virtues of tolerance and self control on a stone colonnade called the stoa poikile. It’s where stoicism found its name. It flourished in the Roman Empire, with one of its most famous students being the emperor himself, Marcus Aurelius. Fragments of his personal writing survive in Meditations, revealing counsel remarkably humble and self-chastising for a man of his power.
Stoicism on emotion
Emotion presents an opportunity. There’s a reactive, immediate response, like blaming others when you feel ashamed, or panicking when you feel anxious. But there is a better reaction the stoics aim for. It matches the degree of impact, is appropriate for the context, is rationally sound, and in line with a good character.
Being angry at your partner for forgetting to put away the dishes isn’t the same as being angry at an oppressive government for torturing its citizens. But if it’s the emotion you focus on, all your good intentions aren’t guaranteed to stop you from messing up. After all, the red haze is formidable.
By practising this “slow thinking” and making it a habit, you can cultivate the same self discipline to develop virtues like courage and justice. It’s these that will ultimately give your life meaning.
For others, emotions are more like the weather. It rains and it shines, and you just deal with it.
Stoicism assumes that focusing on control and analysing emotion is how virtues are forged. But some critics, including philosopher Martha Nussbaum, say that approach misses a fundamental part of being human. After all, control is transient too. Emotions – loving and caring for someone or something to such an extent that losing it devastates you – don’t make you less human. They’re part of being human in the first place.
Other critics say that it leads to apathy, something collective political action can’t afford. Sandy Grant, a philosopher at the University of Cambridge, says stoicism’s “control fantasy” is ridiculous in our interdependent, globalised world. “It is no longer a matter of ‘What can I control?’ but rather of ‘Given that I, as all others, am implicated, what should I do?’”
Controlling emotion to navigate through life cautiously may not be desirable to you. But it is easy to see how channelling stoicism in certain situations can help us manage life’s unfortunate moments – whether they be missing the bus or something more harrowing.
Ethics Explainer: Authenticity
Is the universe friendly? Is it fundamentally good? Peaceful? Created with a purpose in mind?
Or is it distant and impersonal? Indifferent to what you want? A never ending meaningless space? We all have ideas of how the world truly is. Maybe that’s been influenced by your religion, your school, your government, or even the video games you played as a kid.
Whatever the case, how we think about ourselves and what we consider a life well spent have a lot to do with the relationship we have with the world. And that brings us to this month’s Ethics Explainer.
Authenticity
To behave authentically means to behave in a way that responds to the world as it truly is, and not how we’d like it to be. What does this mean?
Well, this question takes us to two different schools of thought in philosophy, with two very different ideas of the nature of the world we live in. The first one is essentialism. Now, essentialism is a belief that finds its roots in Ancient Greece, and in the writings of Socrates and Plato.
They took it as a given that everything that exists has its own essence. That is, a certain set of core properties that are necessary, or essential, for it to be what it is. Take a knife. It doesn’t matter if it has a wooden handle or a metal one. But once you take the blade away, it becomes not-a-knife. The blade is its essential property because it gives the knife its defining function.
Plato and Aristotle believed that people had essences as well, and that these existed before they did. This essence, or telos, was only acquired and expressed properly through virtuous action, a process that formed the ideal human. According to the Greeks, to be authentic was to live according to your essence. And you did that by living ethically in the choices you make and the character you express.
By developing intellectual virtues like curiosity or critical thinking, and character virtues like courage, wisdom, and patience, it’d get easier to tell what you should or shouldn’t do. This was the standard view of the world until the early 19th century, and is still the case for many people today.
The rise of existentialism
But some thinkers began to wonder, what if that wasn’t true? What if the universe has no inherent purpose? What if we don’t have one either? What if we exist first, then create our own purpose?
This belief was called existentialism. Existentialists believe that neither we nor the universe has an actual, predetermined purpose. We need to create it for ourselves. Because of this, nothing we do or are is inherently meaningful. We are free to do whatever we want – a fate the French existentialist philosopher Jean-Paul Sartre found quite awful.
Being authentic meant facing the full weight of this shocking freedom, and staying strong. To simply follow what your religious leader, parent, school, or boss told you to do would be to act in “bad faith”. It’s like burying your head in the sand and pretending that something out there has meaning. Meaning that doesn’t exist.
By accepting that any meaning in life has to be given by you, and that ‘right’ and ‘wrong’ are just a matter of perspective, your choices become all you have. And ensuring that they are guided by values you choose to live by, rather than by predetermined ones etched in stone, is what makes them authentic.
This extends beyond the individual. If the world is going to have any of the things most of us value, like justice and order, we’re going to have to put it there ourselves.
Otherwise, they won’t exist.
Ethics Explainer: The Turing Test
Much was made of a recent video of Duplex – Google’s talking AI – calling up a hair salon to book an appointment. The AI’s way of speaking was uncannily human, even pausing at moments to say “um”.
Some suggested Duplex had managed to pass the Turing test, a standard for machine intelligence that was developed by Alan Turing in the middle of the 20th century. But what exactly is the story behind this test and why are people still using it to judge the success of cutting edge algorithms?
Mechanical brains and emotional humans
In the late 1940s, when the first digital computers had just been built, a debate took place about whether these new “universal machines” could think. While pioneering computer scientists like Alan Turing and John von Neumann believed that their machines were “mechanical brains”, others felt that there was an essential difference between human thought and computer calculation.
Sir Geoffrey Jefferson, a prominent brain surgeon of the time, argued that while a computer could simulate intelligence, it would always be lacking:
“No mechanism could feel … pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or miserable when it cannot get what it wants.”
In a radio interview a few weeks later, Turing responded to Jefferson’s claim by arguing that as computers become more intelligent, people like him would take a “grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine.”
The following year, Turing wrote a paper called ‘Computing Machinery and Intelligence’ in which he devised a simple method by which to test whether machines can think.
The test proposed a situation in which a human judge converses with both a computer and another human through a screen. The judge cannot see either of them and can only ask questions and read their typed answers. Based on those answers alone, the judge has to determine which is which. If the computer could fool 30 percent of judges into thinking it was human, it was said to have passed the test.
Turing claimed that he intended the test to be a conversation stopper, a way of preventing endless metaphysical speculation about the essence of our humanity by positing that intelligence is just a type of behaviour, not an internal quality. In other words, intelligence is as intelligence does, regardless of whether it is done by a machine or a human.
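To make the structure of the test concrete, here is a minimal sketch of the imitation game in Python. It is only an illustration, not anything Turing wrote and not a standard benchmark: the judge, human_reply and machine_reply callables are hypothetical stand-ins (a person at a keyboard, a chatbot behind an API), and the 30 percent pass mark follows the reading of the test described above.

import random

def run_trial(judge, human_reply, machine_reply, questions):
    """One judge questions two unseen respondents and guesses which one is the machine."""
    # Randomly assign anonymous labels so the judge cannot rely on position.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {
        labels[0]: ("human", human_reply),
        labels[1]: ("machine", machine_reply),
    }

    # The judge sees only labels and text answers, never the respondents themselves.
    transcript = {
        label: [(question, reply(question)) for question in questions]
        for label, (_, reply) in respondents.items()
    }
    guess = judge(transcript)  # the label the judge believes is the machine

    # The machine wins this trial if the judge picks the wrong respondent.
    return respondents[guess][0] != "machine"

def passes_turing_test(judges, human_reply, machine_reply, questions, threshold=0.30):
    """The machine passes if it fools at least the given fraction of judges."""
    fooled = sum(run_trial(judge, human_reply, machine_reply, questions) for judge in judges)
    return fooled / len(judges) >= threshold

In this framing the verdict is entirely behavioural – which is exactly Turing’s point that intelligence is as intelligence does.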
Does Google Duplex pass?
Well, yes and no. In Google’s video, it is obvious that the person taking the call believes they are talking to a human. So, it does satisfy this criterion. But an important thing about Turing’s original test was that to pass, the computer had to be able to speak about all topics convincingly, not just one.
In fact, in Turing’s paper, he plays out an imaginary conversation with an advanced future computer and human judge, with the judge asking questions and the computer providing answers:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
The point Turing is making here is that a truly smart machine has to have general intelligence in a number of different areas of human interest. As it stands, Google’s Duplex is good within the limited domain of making a reservation but would probably not be able to do much beyond this unless reprogrammed.
The boundaries around the human
While Turing intended for his test to be a conversation stopper for questions of machine intelligence, it has had the opposite effect, fuelling half a century of debate about what the test means, whether it is a good measure of intelligence, or if it should still be used as a standard.
Most experts have come to agree, over time, that the Turing test is not a good way to prove machine intelligence, as the constraints of the test can easily be gamed – as was the case with the bot Eugene Goostman, which allegedly passed the test a few years ago.
But the Turing test is nevertheless still considered a powerful philosophical tool to re-evaluate the boundaries around what we consider normal and human. In his time, Turing used his test as a way to demonstrate how people like Jefferson would never be willing to accept a machine as intelligent – not because it couldn’t act intelligently, but because it wasn’t “like us”.
Turing’s desire to test boundaries around what was considered “normal” in his time perhaps sprang from his own persecution as a gay man. Despite being a war hero, he was prosecuted for his homosexuality and convicted in 1952 for a relationship with another man. He was punished with chemical castration and eventually took his own life.
During these final years, machine intelligence and his own sexuality became interconnected in Turing’s mind. He was concerned the same bigotry and fear that hounded his life would ruin future relationships between humans and intelligent computers. A year before he took his life he wrote the following letter to a friend:
“I’m afraid that the following syllogism may be used by some in the future.
Turing believes machines think
Turing lies with men
Therefore machines do not think
– Yours in distress,
Alan”
Ethics Explainer: Perfection
We take perfection to mean flawlessness. But it seems we can’t agree on what the fundamental human flaw is. Is it our attachment to things like happiness, status, or security – things that are about as solid as a tissue? Our propensity for evil? Or is it our body and its insatiable appetite for satisfaction?
Four different philosophical traditions have answered this in their own ways and tell us how we can achieve perfection.
Platonism
Plato’s idea of perfection is articulated in his Theory of Forms. The Forms represent the abstract, ideal moulds of all things and concepts in existence, rather than actual things themselves. In short, the idea of something is more perfect than the tangible thing itself.
Take ‘red’ for example. Each of us will have a different understanding of what this means – red lipstick, a red brick house, a red cricket ball… But these are all different manifestations of red so which is the perfect one? For Platonists, the perfect, ideal, universal ‘red’ exists outside of space and time and is only discoverable through lots and lots of philosophical reflection.
Plato wrote:
He will do this [perceive the Forms] most perfectly who approaches the object with thought alone, without associating any sight with his thoughts, or dragging in any sense perception with his reasoning, but who, using pure thought alone, tries to track down each reality pure and by itself, freeing himself as far as possible from eyes, ears, and in a word, from the whole body, because the body confuses the soul and does not allow it to acquire truth and wisdom whenever it is associated with it.
In Platonist thought, the body is a distraction from the abstract thought necessary for philosophical speculation. Its fundamental flaw? Its carnal desires that shackle the soul.
Perfection for the individual, to sum up, is the soul’s return to a state of pure contemplation of the Forms. This out-of-body state of contemplation is far from the idea of the perfect face and physique we often think about today. Indeed, a perfect physical form is impossible for Plato. It is all in the imagination.
Hinduism
In Hinduism’s Advaita school of philosophy, perfection means the full comprehension and acceptance of Oneness.
It’s when you realise your soul (or your atman) is the same as everyone else’s and you are all part of the one, unchanging, metaphysical reality (the Brahman). In this state of realisation, the ever-changing material world of maya reveals itself to be an illusion, and anything attached to this world, including your actions, is an illusion as well. (There are parallels between this and Plato’s Cave, an idea narratively reimagined in The Matrix.)
To attain this state of perfection, an individual must surrender to their caste role and perform that duty to excellence. No matter what they did, they would understand their actions had no effect on the Brahman; to believe otherwise was a trick of the ego. They focused on renouncing all earthly desires and striving to become completely detached from the world through the specific rituals of their caste role.
Krishna said:
A man obtains perfection by being devoted to his own proper action. Hear then how one who is intent on his own action finds perfection. By worshipping him [Brahman], from whom all beings arise and by whom all this is pervaded, through his own proper action, a man attains perfection … He whose intelligence is unattached everywhere, whose self is conquered, who is free from desire, he obtains, through renunciation, the supreme perfection of actionlessness. Learn from me, briefly, O Arjuna, how he who has attained perfection, also attains to Brahman, the highest state of wisdom.
By “actionlessness”, Krishna means the supreme effort of surrendering everything, including your own actions, so they become “non-action”. If everything is an illusion in the face of Brahman, that really does mean everything.
Christianity
A sainted bishop named Gregory of Nyssa classified perfection as being and acting just like God’s human form, Christ – that is, completely free of evil. Nyssa said:
This, therefore, is perfection in the Christian life in my judgement, namely, the participation of one’s soul and speech and activities in all of the names by which Christ is signified, so that the perfect holiness, according to the eulogy of Paul, is taken upon oneself in “the whole body and soul and spirit”, continuously safeguarded against being mixed with evil.
Perfection, in other words, lies in the total transformation of the individual. He or she must live, act, and essentially be all that Christ was: as Christ was God manifest in human form, completely free from evil, so too must the Christian individual sever all evil from his or her being.
While the Socratic ideal of perfection requires pure “abstract” thought, and the Hindu ideal requires sublimating individualism into Oneness, the Christian ideal requires cultivating the characteristics of Christ and expelling all that is unlike him from yourself.
Sufism
The Sufi scholar Ibn ‘Arabi had a concept of perfection that echoes the three discussed above. For him, perfection is the individual’s complete knowledge of the abstract and the material, leading to a prophetic (or Christ like) character.
Let’s break that down. He says:
The image of perfection is complete only with knowledge of both the ephemeral and the eternal, the rank of knowledge being perfected only by both aspects. Similarly, the various other grades of existence are perfected, since being is divided into eternal and non-eternal or ephemeral. Eternal Being is God’s being for Himself, while non-eternal being is the being of God in the forms of the latent Cosmos.
The beginning of the passage states that perfection requires knowledge of the eternal and the material. The eternal is God in Himself, and the non-eternal is the Cosmos, including humanity, who in striving for the perfection of the eternal, expresses it.
But Ibn ‘Arabi stresses that neither of these knowledges negates the other. In fact, by learning only of the eternal, or only of the material, both would be incomplete, since they are simply different manifestations of the same Being. Perfection, then, is not about negation, but about continual striving toward transformation. Onwards and upwards!
Ethics Explainer: Universal Basic Income
The idea of a universal basic income (UBI) isn’t new. In fact, it has deep historical roots.
In Utopia, published in 1516, Thomas More writes that instead of punishing a poor person who steals bread, “it would be far more to the point to provide everyone with some means of livelihood, so that nobody’s under the frightful necessity of becoming, first a thief, and then a corpse”.
Over three hundred years later, John Stuart Mill also supported the concept in Principles of Political Economy, arguing that “a certain minimum [income] assigned for subsistence of every member of the community, whether capable of labour or not” would give the poor an opportunity to lift themselves out of poverty.
In the 20th century, the UBI gained support from a diverse array of thinkers for very different reasons. Martin Luther King, for instance, saw a guaranteed payment as a way to uphold human rights in the face of poverty, while Milton Friedman understood it as a viable economic alternative to state welfare.
Would a UBI encourage laziness?
Yet, there has always been strong opposition to implementing basic income schemes. The most common argument is that receiving money for nothing undermines work ethic and encourages laziness. There are also concerns that many will use their basic income to support drug and alcohol addiction.
However, the only successfully implemented basic income scheme suggests these fears might be unfounded. In the 1980s, Alaska introduced a guaranteed payment for long-term residents as a way to efficiently distribute dividends from a commodity boom. A recent study of the scheme found full-time employment has not changed at all since it was introduced, while the number of Alaskans working part-time has increased.
The success of this scheme has inspired other pilot projects in Kenya, Scotland, Uganda, the Netherlands, and the United States.
The rise of the robots
The growing fear that robots are going to take most of our jobs over the next few decades has added an extra urgency to the conversation around UBI. A number of leading technologists, including Elon Musk, Mark Zuckerberg, and Bill Gates, have suggested some form of basic income might be necessary to alleviate the effects of unemployment caused by automation.
In his bestselling book Rise of the Robots, Martin Ford argues that a basic income is the only way to stimulate the economy in an automated world. If we don’t distribute the abundant wealth generated by machines, he says, then there will be no one to buy the goods that are being manufactured, which will ultimately lead to a crisis in the capitalist economic model.
In their book Inventing the Future, Nick Srnicek and Alex Williams agree that full automation will bring about a crisis in capitalism but see this as a good thing. Instead of using UBI as a way to save this economic system, the unconditional payment can be seen as a step towards implementing a socialist method of wealth distribution.
The future of work
Srnicek and Williams also claim that UBI would not only be a political and economic transformation, but a revolution of the spirit. Guaranteed payment, they say, will give the majority of humans, for the first time in history, the capacity to choose what to do with their time, to think deeply about their values, and to experiment with how to live their lives.
Bertrand Russell made a similar argument in his famous treatise on work, In Praise of Idleness. He writes that in a world where no one is compelled to work all day for wages, all will be able to think deeply about what it is they want to do with their lives and then pursue it. For many, he says, this idea is scary because we have become dependent on paid jobs to give us a sense of value and purpose.
So, while many of the debates about UBI take place between economists, it is possible that the greatest obstacle to its implementation is existential.
A basic payment might provide us with the material conditions to live comfortably, but with this comes the confounding task of re-thinking what it is that gives our lives meaning.
Ethics Explainer: Moral Absolutism
Moral absolutism is the position that there are universal ethical standards that apply to actions regardless of context.
Where someone might deliberate over when, why, and to whom they’d lie, for example, a moral absolutist wouldn’t see any of those considerations as making a difference – lying is either right or wrong, and that’s that!
You’ve probably heard of moral relativism, the view that moral judgments can be seen as true or false according to a historical, cultural, or social context. According to moral relativism, two people with different experiences could disagree on whether an action is right or wrong, and they could both be right. What they consider right or wrong differs according to their contexts, and both should be accepted as valid.
Moral absolutism is the opposite. It argues that everything is inherently right or wrong, and no context or outcome can change this. These truths can be grounded in sources like law, rationality, human nature, or religion.
Deontology as moral absolutism
The texts a religion is based on are often taken as the absolute standard of morality. If someone takes scripture as a source of divine truth, it’s easy to draw morally absolutist ethics from it. Is it ok to lie? No, because the Bible or God says so.
It’s not just in religion, though. Ancient Greek philosophy held strains of morally absolutist thought, but possibly its best-known form is deontology, developed by Immanuel Kant, who sought to articulate a clearly rational theory of moral absolutism.
As an Enlightenment philosopher, Kant sought to find moral truth in rationality instead of divine authority. He believed that unlike religion, culture, or community, we couldn’t ‘opt out’ of rationality. It was what made us human. This was why he believed we owed it to ourselves to act as rationally as we could.
In order to do this, he came up with duties he called “categorical imperatives”. These are obligations we, as rational beings, are morally bound to follow, are applicable to all people at all times, and aren’t contradictory. Think of it as an extension of the Golden Rule.
One way of understanding the categorical imperative is through the “universalisability principle”. This mouthful of a phrase says you should act only if you’d be willing to make your act a universal law (something that everyone is morally bound to follow at all times, no matter what) and it wouldn’t cause contradiction.
What Kant meant was before choosing a course of action, you should determine the general rule that stands behind that action. If this general rule could willingly be applied by you to all people in all circumstances without contradiction, you are choosing the moral path.
An example Kant proposed was lying. He argued that if lying were a universal law, no one could ever trust anything anyone said; and with no expectation of truth telling left, the very act of lying would become meaningless. In other words, you cannot universalise lying as a general rule of action without falling into contradiction.
Through this process of logical justification, Kant arrived at principles he believed would form a moral life, without relying on scripture or culture.
Counterintuitive consequences
In essence, Kant was saying it’s never reasonable to make exceptions for yourself when faced with a moral question. This sounds fair, but it can lead to situations where a rational moral decision contradicts moral common sense.
For example, in his essay ‘On a Supposed Right to Lie from Altruistic Motives’, Kant argues it is wrong to lie even to save an innocent person from a murderer. He writes, “To be truthful in all deliberations … is a sacred and absolutely commanding decree of reason, limited by no expediency”.
While Kant felt that such absolutism was necessary for a rationally grounded morality, most of us allow a degree of relativism to enter into our everyday ethical considerations.
Ethics Explainer: Post-Humanism
Late last year, Saudi Arabia granted a humanoid robot called Sophia citizenship. The internet went crazy about it, and a number of sensationalised reports suggested that this was the beginning of “the rise of the robots”.
In reality, though, Sophia was not a “breakthrough” in AI. She was just an elaborate puppet that could answer some simple questions. But the debate Sophia provoked about what rights robots might have in the future is a topic that is being explored by an emerging philosophical movement known as post-humanism.
From humanism to post-humanism
In order to understand what post-humanism is, it’s important to start with a definition of what it’s departing from. Humanism is a term that captures a broad range of philosophical and ethical movements that are unified by their unshakable belief in the unique value, agency, and moral supremacy of human beings.
Emerging during the Renaissance, humanism was a reaction against the superstition and religious authoritarianism of Medieval Europe. It wrested control of human destiny from the whims of a transcendent divinity and placed it in the hands of rational individuals (which, at that time, meant white men). In so doing, the humanist worldview, which still holds sway over many of our most important political and social institutions, positions humans at the centre of the moral world.
Post-humanism, which is a set of ideas that have been emerging since around the 1990s, challenges the notion that humans are and always will be the only agents of the moral world. In fact, post-humanists argue that in our technologically mediated future, understanding the world as a moral hierarchy and placing humans at the top of it will no longer make sense.
Two types of post-humanism
The best-known post-humanists, who are also sometimes referred to as transhumanists, claim that in the coming century, human beings will be radically altered by implants, bio-hacking, cognitive enhancement and other bio-medical technology. These enhancements will lead us to “evolve” into a species that is completely unrecognisable compared to what we are now.
This vision of the future is championed most vocally by Ray Kurzweil, a director of engineering at Google, who believes that the exponential rate of technological development will bring an end to human history as we have known it, triggering completely new ways of being that mere mortals like us cannot yet comprehend.
While this vision of the post-human appeals to Kurzweil’s Silicon Valley imagination, other post-human thinkers offer a very different perspective. Philosopher Donna Haraway, for instance, argues that the fusing of humans and technology will not physically enhance humanity, but will help us see ourselves as being interconnected rather than separate from non-human beings.
She argues that becoming cyborgs – strange assemblages of human and machine – will help us understand that the oppositions we set up between the human and non-human, natural and artificial, self and other, organic and inorganic, are merely ideas that can be broken down and renegotiated. And more than this, she thinks if we are comfortable with seeing ourselves as being part human and part machine, perhaps we will also find it easier to break down other outdated oppositions of gender, of race, of species.
Post-human ethics
So while, for Kurzweil, post-humanism describes a technological future of enhanced humanity, for Haraway, post-humanism is an ethical position that extends moral concern to things that are different from us and in particular to other species and objects with which we cohabit the world.
Our post-human future, Haraway claims, will be a time “when species meet”, and when humans finally make room for non-human things within the scope of our moral concern. A post-human ethics, therefore, encourages us to think outside of the interests of our own species, be less narcissistic in our conception of the world, and to take the interests and rights of things that are different to us seriously.