Does ethical porn exist?

It’s hard to separate violence and sex in much of today’s internet pornography. Easily accessible content includes simulated rape and women being slapped, punched and subjected to slews of misogynistic insults.

It’s also harder than ever to deny that pornography use, given its addictive, misogynistic, and violent nature, has a range of negative impacts on consumers. First exposure to internet porn in Western countries takes place before puberty for a significant fraction of children today.  A disturbingly high proportion of teenage boys and young men today believe rape myths as a result of porn exposure. There is also evidence suggesting exposure to violent, X-rated material leads to a dramatic increase in the perpetration of sexual violence.

It is also difficult to deny that the practices of the porn industry are exploitative of performers themselves. Stories such as the Netflix documentary Hot Girls Wanted depict cases of female performers agreeing to shoot a scene involving a particular act, only to be coerced on the spot by the producers into a more hard-core scene not previously agreed to. Anecdotes suggest this isn’t uncommon.

While these facts about disturbing content and exploitative practices lead some people to believe consumption of internet porn is unethical or anti-feminist, they prompt others to ask whether there could be such a thing as ethical porn. Are the only objections to pornography circumstantial – based in violent content, exploitation or particular types of pornography? Or is there some deeper fact about porn – any porn – that renders it ethically objectionable?

Suppose the kind of porn commonly found online:

  • Depicted realistic, consensual, non-misogynistic and safe sex – condoms and all.
  • Was free of exploitation (a pipe-dream, but let’s imagine).
  • Featured performers who fully and properly consented to everything filmed.
  • Was regulated so that only people who were educated and had other employment options could perform.
  • Involved no performers with a history of sexual abuse or underage porn exposure.
  • Required pristine sexual health as a prerequisite for performing.
  • Came from an industry that had cut any ties it is alleged to have with sex trafficking and similarly exploitative activity.

If all this came true, would any plausible ethical objections to the production and consumption of pornography remain?

Before we can answer questions about the ethics of porn, we need to address fundamental questions about the ethics of sex.

One question is this: is sex simply another bodily pleasure, like getting a massage, or do sex acts have deeper significance? Philosopher Anne Barnhill describes sexual intercourse as a type of body language. She thinks that when you have sex with a person, you are not just going through physically pleasurable motions – you are expressing something to another person.

If you have sex with someone you care for deeply, this loving attitude is expressed through the body language of sex. But using the expressive act of sex for mere pleasure with a person you care little about can express a range of callous or hurtful attitudes. It can send the message that the other person is simply an object to be used.

Even if it doesn’t, the messages can be confusing. The body language of tender kissing, close bodily contact and caresses says one thing to a sexual partner, while the fact that one has few emotional strings attached – especially if this is stated beforehand – says another.

We know that such mixed messages are often painful. The human brain is flooded with oxytocin – the same bonding chemical responsible for attaching mothers to their children – when humans have sex.  There is a biological basis to the claim that ‘casual sex’ is a contradiction in terms. Sex bonds people to each other, whether we want this to happen or not. It is a profound and relationally significant act.

Let’s bring these ideas about the specialness of sex back to the discussion about porn. If the above ideas about sex are correct, then there is reason to doubt that sex is the sort of thing people in a casual or even non-existent relationship should be paid to perform. So long as there are ethical problems with casual sex itself, there will be ethical problems with consuming filmed casual sex.

So what should we say about porn made by adults in a loving relationship, as much ‘amateur’ (unpaid) pornography is?  Suppose we have a film made by a happily married couple who love each other deeply and simply want to film and show realistic, affectionate, loving sex.  Could consumption of such material pass as ethical?

Maybe it could, but many doubts remain. Porn consumption can become a refuge that prevents people otherwise capable of the daunting but character-building work of seeking a meaningful sexual relationship with a real person from doing so.  Porn (even of the relatively wholesome kind described above) carries no risk of rejection, requires no relational effort and doesn’t demand consideration of another person’s sexual wishes or preferences.

Because it promises high reward for little effort, porn has the potential to prolong adolescence – that phase of life dominated by lone sexual fantasies – and be a disincentive to grow into the complicated, sexual relationship building of adulthood.

Based on this line of thinking, there may still be something unvirtuous about the consumption of porn, even porn that was produced ethically. Perhaps the only truly ethical, sexually explicit film would be one made by people in a loving relationship and seen only by them.


Can we celebrate Anzac Day without glorifying war?

Michael Salter’s article on masculinity and the Anzac story asks important questions about the place of that story in the Australian consciousness. He paints a compelling picture of how militarism threatens to taint both the traditions of Anzac and modern Australian masculinity.

Salter points to William James’ calls for a “moral equivalent of war”. James sought a way to inspire young people to commit to their shared sense of duty without needing to rely on militarism and a general affection for war. It’s crucial, James believed, to provide a social narrative that drives citizens beyond the selfish “pleasure economy” wherein we forget tradition, civic duty and our responsibility to the community.

In today’s Australia, the Anzac story is a crucial aspect of that narrative.

James described the military character as a “pure piece of perfection”. It’s pretty easy to see echoes of this idea in the modern image of the Digger – sacrificial, loyal, hardy, well-humoured, and courageous. Everything a good Australian should be.

The problem is when this story cannot be separated from the military context – when the virtues of the warrior lead us to believe war itself is virtuous and intrinsically worthwhile. That’s what James (and Salter) mean by ‘militarism’ – the mentality that war is good because it teaches virtue.

It’s the threat of militarism that led James to call for a moral equivalent of war. Salter points to the difficult tension for a society that rallies around military stories and figures while also facing an epidemic of (male) violence against women. Is the threat of militarism too great for the Anzac ethos to overcome?

I’m not convinced. Salter’s argument makes a good case for expanding the kinds of stories we listen to in silent contemplation on Anzac Day, but it doesn’t follow that we need to do away with the story altogether.

What we might need to do is counterbalance our social interest in the virtue and nobility of the warrior character (including, dare we say, the warrior ethos that lived in Australia for 40,000 or so years before Anzac) with some honest reflections about the reality of war.

A study compiled by McCrindle Research in 2015 – which Salter also cited – suggests more than a third of Australian men would sign up to a war equivalent to WWI if it occurred today. That should be concerning – for most nations involved in the conflict the ethical justifications are hazy. Just war theorist Nicholas Fotion says, “of the major powers to enter the war in 1914, Britain is the only one that does not constitute a difficult case”.

Given this, there is good reason to believe many Australians are confusing the always tragic, sometimes necessary ethics of just war theory with the idea that participating in war is always virtuous. Do they need to stop being taught about war or do they need to be informed of its tragic reality?

According to some reports, former US President Theodore Roosevelt hoped his sons might lose a limb in war as a mark of valour. This militarism, though, fades somewhat in his later letters, written after his sons were injured. Although he still believed his sons had fought with valour, he was more circumspect than he had been earlier.

And this presents us with a possible solution – to broaden our war stories to include the horror and suffering war inflicts alongside our praise for the valour and sacrifice of the Anzacs.

Perhaps we’ve remembered the valour of soldiers from decades ago while overlooking the testimony of many of Australia’s modern Anzacs. Those whose character has been profoundly affected by war in ways William James never imagined – PTSD, traumatic brain injury, homelessness, substance abuse, unemployment, family breakdown – also need to be thought of and reflected upon.

If the readiness to use force when necessary is indeed a virtue, it needs to incorporate a full understanding of the personal and moral costs of violence.

This serves the intended purpose of Anzac Day – to bear silent witness to the sacrifices of Australian military men and women. As an added bonus, it might counteract the rampant militarism James feared – and which might be infecting us more than we know.


Anzac Day: militarism and masculinity don’t mix well in modern Australia

As prime minister, Tony Abbott penned a passionate column on the relevance of Anzac Day to modern Australia. For Abbott, the Anzacs serve as the moral role models that Australians should seek to emulate. He wrote, “We hope that in striving to emulate their values, we might rise to the challenges of our time as they did to theirs”.

The notion that Anzacs embody a quintessentially Australian spirit is a century old. The official World War I journalist C.E.W. Bean wrote Gallipoli was the crucible in which the rugged resilience and camaraderie of (white) Australian masculinity, forged in the bush, was decisively tested and proven on the world stage.

At the time, this was a potent way of making sense out of the staggering loss of 8000 Australian lives in a single military campaign. Since then, it has been common for politicians and journalists to claim that Australia was ‘baptised’ in the ‘blood and fire’ of Gallipoli.

However, public interest in Anzac Day has fluctuated over the course of the 20th century. Ambivalence over Australia’s role in the Vietnam War had a major role in dampening enthusiasm from the 1970s.

The election of John Howard in 1996 signalled a new era for the Anzac myth. The ‘digger’ was, for Prime Minister Howard, the embodiment of Australian mateship, loyalty and toughness. Since then, government funding has flowed to Anzac-related school curricula as well as related books, films and research projects. Old war memorials have been refurbished and new ones built. Attendance at Anzac events in Australia and overseas has swelled.

On Anzac Day, we are reminded how crucial it is for individuals to be willing to forgo self-interest in exchange for the common national good. Theoretically, Anzac Day teaches us not to be selfish and reminds us of our duties to others. But it does so at a cost, because military role models bring with them militarism – which sees the horror and tragedy of war as not only a justifiable but a desirable way to solve problems.

The dark side to the Anzac myth is a view of violence as powerful and creative. Violence is glorified as the forge of masculinity, nationhood and history. In this process, the acceptance and normalisation of violence culminates in celebration.

The renewed focus on the Anzac legend in Australian consciousness has brought with it a pronounced militarisation of Australian history, in which our collective past is reframed around idealised incidents of conflict and sacrifice. This effectively takes the politics out of war, justifying ongoing military deployment in conflict overseas, and stultifying debate about the violence of invasion and colonisation at home.

In the drama of militarism, the white, male and presumptively heterosexual soldier is the hero. The Anzac myth makes him the archetypical Australian, consigning the alternative histories of women, Aboriginal and Torres Strait Islanders, and sexual and ethnic minorities to the margins. I’d argue that for right-wing nationalist groups, the Anzacs have come to represent their nostalgia for a racially purer past. They have aggressively protested against attempts to critically analyse Anzac history.

Militarism took on a new visibility during Abbott’s time as Prime Minister. Current and former military personnel have been appointed to major civilian policy and governance roles. Police, immigration, customs, and border security staff have adopted military-style uniforms and arms. The number of former military personnel entering state and federal politics has risen significantly in the last 15 years.

The notion that war and conflict is the ultimate test of Australian masculinity and nationhood has become the dominant understanding not only of Anzac Day but, arguably, of Australian identity. Is it any wonder that a study compiled by McCrindle Research reveals that 34% of males, and 42% of Gen Y males, would enlist in a war that mirrored WWI if it occurred today?

This exaltation of violence sits uncomfortably alongside efforts to reduce and ultimately eradicate the use of violence in civil and intimate life. Across the country we are grappling with an epidemic of violence against women and between men. But when war is positioned as the fulcrum of Australian history, when our leaders privilege force in policy making, and when military service is seen as the ultimate form of public service, is it any wonder that boys and men turn to violence to solve problems and create a sense of identity?

In his writings on the dangers of militarism, psychologist and philosopher William James called for a “moral equivalent of war” – a form of moral education less predisposed to militarism and its shortcomings.

Turning away from militarism does not mean devaluing the military or forgetting about Australia’s military history. It means turning away from conflict as the dominant lens through which we understand our heritage and shared community. It means abjuring force as a means of solving problems and seeking respect. However, it also requires us to articulate an alternative ethos weighty enough to act as a substitute for militarism.

At a recent domestic violence conference in Sydney, Professor Bob Pease called for the rejection of the “militarisation of masculinity”, arguing that men’s violence in war was linked to men’s violence against women. At the same time, however, he called on us to foster “a critical ethic of care in men”, recognising that men who value others and care for them are less prone to violence.

For as long as militarism and masculinity are fused in the Australian imagination, it’s hard to see how this ethos of care can take root. It seems that the glorification of violence in our past is at odds with our aspirations for a violence-free future. The question is whether we value this potential future more than an idealised past.


Academia’s wicked problem

What do you do when a crucial knowledge system is under-resourced, highly valued, and is having its integrity undermined? That’s the question facing those working in academic research and publishing. There’s a risk Australians might lose trust in one of the central systems on which we rely for knowledge and innovation.

It’s one of those problems that defies easy solutions, like obesity, terrorism or designing a tax system to suit an entire population. Academics call these “wicked problems” – meaning they resist resolution, not that they are ‘evil’.

Charles West Churchman, who first coined the term, described them as:

That class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values and where the ramifications in the whole system are thoroughly confusing.

The wicked problem I face day-to-day is that of research and publication ethics. Though most academics do their best within a highly pressured system, I see many issues, spanning a continuum that starts with cutting corners in the preparation of manuscripts and ends with outright fraud.

It’s helpful to know whether the problem we are facing is a wicked one or not. It can help us to rethink the problem, understand why conventional problem-solving approaches have failed and encourage novel approaches, even if solutions are not readily available.

Though publication ethics – which concerns academic work submitted for publication – has traditionally been treated as a problem solely for academic journal editors and publishers, it is necessarily entwined with research ethics – issues related to the actual conduct of the academic work. For example, unethical human experimentation may only come to light at the time of publication, though it clearly originates much earlier.

Consider the ethical issues surrounding peer review, the process by which academic experts (peers) assess the work of others.

Though imperfect, formalisation of peer review has become an important mark of quality for a journal. Peer review by experts, mediated by journal editors, usually determines whether a paper is published. Though seemingly simple, there are many points where the system can be gamed or even completely subverted – a major one being in the selection of reviewers.

As the number of academics and submissions to journals increases, editors face a logistical challenge in keeping track of an ever-increasing number of submissions needing review. And because the rise in submissions has not been matched by a rise in editors – many of whom are volunteers – these editors are overworked and often don’t have a big enough circle of reviewers to call on for the papers being submitted.

A simple approach to increase the pool of reviewers adopted by a number of journals is to allow authors to suggest reviewers for their paper via the online peer review system. These suggestions can be valuable if overseen by editors who can assess reviewers’ credentials. But they are already overworked and often handling work at the edge of their area of expertise, meaning time is at a premium.

Given the pressure editors are under, the system is vulnerable to subversion. It has always been a temptation for authors to submit the names of reviewers they believed would view their work favourably. Recently, a small number took it a step further, suggesting fake reviewer names for their papers.

These fake reviewers (usually organised via a third party) promptly submitted favourable reviews, which led to papers being inappropriately accepted for publication. The consequences were severe – papers had to be retracted, with consequent potential reputational damage to the journal, editors, authors and their institutions. Note how a ‘simple’ view of a ‘wicked’ problem – that under-resourced editors can be helped by authors suggesting their own reviewers – led to new and worse problems than before.

But why would some authors go to such extreme ends as submitting fake reviews? The answer takes us into a related problem – the way authors are rewarded for publications.

Manipulating peer review gives authors a higher chance of publication – and academic publications are crucial for being promoted at universities. Promotion often brings a higher salary, prestige, perhaps a lighter teaching allocation and other fringe benefits. So for those at the extreme, who lack the necessary skills to publish (or even a firm command of academic English), it’s logical to turn to those who understand how to manipulate the system.

We could easily fix the problem of fake reviews by removing an author’s ability to suggest reviewers, but this would be treating a symptom rather than the cause – namely, a perverse reward system for authors.

Removing author suggestions does nothing to help overworked editors deal more easily with the huge number of submissions they receive. Nor do editors have the power to address the underlying problem – an inappropriate system of academic incentives.

There are no easy solutions but accepting the complexity may at least help to understand what it is that needs to be solved. Could we change the incentive structure to reward authors for more than merely being published in a journal?

There are moves to understand these intertwined problems, but all solutions will fail unless we come back to the first requirement for approaching a wicked problem – agreement that it’s a problem shared by many. So while the issues are most notable in academic journals, we won’t find all the solutions there.


‘Hear no evil’ – how typical corporate communication leaves out the ethics

Evidence from the 2018 Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry was not the first revelation of unethical behaviour in business, and it won’t be the last. In fact, it’s been a busy few years for anyone interested in business ethics.

We have seen the Panama Papers and Unaoil scandals play out, the muddied relationship between Clive Palmer and Queensland Nickel (who was in charge of the company, really?), managers falsely inflating earnings at Target and an admission of fraud by a senior manager at Seven Network.

Ethical issues involving accusations of dishonesty, bribery, corruption, fraud and theft are, sadly, never too far away from the news. Sometimes that ethical failure has an easily identifiable cause – someone who negligently steered a course into moral hazard or selfishly set out to do something they knew was wrong. It’s also easy to identify a solution: we deal with those people through education, punishment or both.

But what about those more commonplace ethical slip-ups – the ones that don’t fall into the #epicfail bucket or make headlines, at least not immediately? Where it’s not so easy to find a guilty person in need of punishment? It’s useful to think of these as instances of ethical drift – where an organisation unconsciously drifts away from its ethical True North.

How does ethical drift happen?

A big factor could simply be the way people communicate within an organisation. Ethical context, insight and commentary are easily lost in day-to-day business communications, and this can happen in a number of ways:

The ethical framework is nowhere to be seen

Most organisations have a mission statement about their purpose, values and principles, which is expected to provide the overall direction for the company. But this ethical framework is rarely localised or given the same status as other performance indicators. That makes it hard for people to stand back and assess if, for instance, a change management project is on track to reinforce the organisation’s values as well as meeting other objectives.

Emphasis on short time periods

Internal reporting is time-driven. The emphasis on monthly, quarterly or yearly figures makes it seem irrelevant to include commentary about longer term ethical symptoms or effects. As a result, the ethics of an activity are not assessed with the same regularity and urgency.

Managing up

Managers do manage up. As reports go up the line they narrow the focus of the reader and set the agenda for what might need to be understood. Such reports tend to leave out any information that might go against the usual approach or beliefs, be unclear or prompt questions. On one hand – fair enough. Who wants to get a management report and be confused? But the downside is that the reader may end up well managed toward a certain conclusion rather than well informed.

The glut of communication

We are drowning in information, so wherever possible reporting is abbreviated and metricated. Qualitative assessments are expected to be backed by hard figures and compared against something – a benchmark, a previous period or a competitor’s results. Assessing whether an organisation is still heading in the right ethical direction isn’t something that lends itself to metrication. And if a report’s format doesn’t include a space for ethical insights, it sends a signal that such insights are not important or welcome.

Misplaced emphasis on annual staff surveys

Whether an organisation is on course for its True North is often determined by an annual staff survey. Frequently, such surveys ask people to put a numerical score (say, one to 10) on how well their team lives the ethics of the organisation. This can act as a quick point-in-time morale check, but it hardly lets people question an organisation’s accepted norms. It takes an extra level of sophistication for an organisation to change its routine reporting to capture ethical insights and measures, and to put them on an equal footing with routine performance measures.

For organisations to function at their ethical best, they need to have proactive, fearless but humble debate. But it’s hard to foster debate in an environment where reporting tools are very narrowly defined and don’t link back to the organisation’s ethical framework.

Instead, organisations need a culture where questioning is not treated as a ‘gotcha’ opportunity. Where leaders welcome information that indicates all might not be simple and rosy. Where ambiguity creates interest rather than fear. And where numerically insignificant data or exceptions are not confused with ethical insignificance.


Gender quotas for festival line-ups: equality or tokenism?

This article was originally published on THUMP for VICE.

Diversity matters. Slowly but surely, we’re becoming increasingly conscious of the ethics of representation. From #OscarsSoWhite to the recommendation that ABC’s Q&A increase the number of women present on the program, there’s a growing sense that if non-white, non-male professionals are to succeed, they need to see others who look like them succeeding. As Marian Wright Edelman wrote, “You can’t be what you can’t see”.

Nor can you be what you can’t hear.

Last year, The Guardian reported that, from a sample of 12 UK music festivals – including major ones such as Glastonbury and Creamfields – 86 percent of performers were male. Australia doesn’t do much better. The 2015 VIVID Live festival was criticised when, of more than 50 acts, only three featured women in any capacity. And the numbers aren’t very different in the US.

These are but a few instances of a growing conversation about gender diversity in the music industry. In one sense, we shouldn’t be surprised this conversation is going on. After all, gender equality in corporate workplaces has been the subject of widespread debate for more than a decade. Why should music be any different? And, if there isn’t any difference, should music festivals accept some social responsibility and impose gender quotas on their line-ups?

First things first: should music be any different when it comes to our expectations of gender diversity? The arguments in favour of gender diversity at festivals seem to be the same as they are elsewhere – they’ve been listed in detail in a report by the Centre for Ethical Leadership. In short, encouraging women’s presence in industries broadens market appeal, attracts more women to participate in the industry or event, and supports women’s rights to equal treatment, participation and representation. So why not pursue it?

Opponents might argue that actively forcing diversity is tokenism, that choosing in favour of women means potentially ignoring more qualified male acts who also deserve to be there. After all, it’s not their fault they were born men, is it? Men’s rights activists unite!

This argument is hard to make in music, though. For one thing, what does it mean to be ‘qualified’? And how might we decide which of two similarly popular acts is more entitled to perform? Furthermore, the whole ‘tokenism’ argument presumes diversity isn’t intrinsically valuable, but that claim needs to be argued for.

There’s every likelihood that three male acts might share a large chunk of audience. So, even if all three outperform a female act in terms of ticket sales, if that act has an entirely different audience it’d then be the better choice, wouldn’t it? Just as, if a board of directors is looking for a variety of insights, it would be foolish to hire a bunch of similarly qualified white guys. Even if each of them deserves to be there on merit, it doesn’t follow that all of them deserve to be there together.

Another concern is that festival producers aren’t convinced diversity leads to broader market appeal or, more crucially, greater profits. Festival organisers want guaranteed ticket sellers – and for reasons feminists have been talking about for decades – the top ticket sellers are usually men. Is it the responsibility of festival producers to change our tastes for us any more than it’s the job of Macca’s to get us craving kale chips rather than fries?

The argument that ethics comes second to profits isn’t a new one, and it can seem easily dismissed – but if it’s a genuine question of survival, you can see where the organisers are coming from. They’re taking on all the financial risk, so why should they take on any more? If people start buying more tickets to female acts, they’ll book them!

So the question becomes: who is responsible for bringing diversity to the industry? Organisers claim it’s the audience who buy the tickets. Many musicians believe they could sell more tickets if festivals had the courage to blood some diverse acts. And most listeners won’t concede to having any gender bias in their listening habits, even if, coincidentally, most of their favourite acts are men.

And here’s the rub – most of the barriers to diversity in representation, in any sphere,  aren’t deliberate acts of oppression. They’re the product of unconscious bias. When we picture a music artist, what do they look like? For many, I’d hazard they’re young and white. In some genres – hip hop, for instance – it might be different, but the dominance of men is likely to remain. This is despite the huge success of some female artists in a range of different genres.

The tricky thing about unconscious biases is that it’s harder to specify who’s responsible for countering them. Many will hold that it’s the people bearing the bias, but if they’re not aware they’re biased to begin with, it’s likely to be a slow burn.

And thus the argument for quotas – by enforcing a minimum standard for representation we force the issue. Festivals make their commitment to diversity public and transparent – and artists and listeners can hold them accountable. Plus, we don’t need to wait around for listeners to wake up to their own biases.

But quotas are no panacea. People who are seen to benefit from quota systems are often seen as less qualified than those appointed ‘on merit’ – even by other people who have benefited from quotas. This suggests the ‘tokenism’ narrative around quotas is hard to shake, and might even be creating negative self-appraisals in the very people quotas are designed to help.

So rather than having arbitrary thresholds for diversity, maybe it’s preferable for festivals to include diversity alongside other values – fun, integrity, artistry and so on – as one of the defining aspects of a festival. This means seeking diversity (and not just diversity of gender) as intrinsically valuable, rather than implementing quotas that make it seem like a necessary evil. 


Capitalism is global, but is it ethical?

Does the dominant economic system of the Western world withstand moral scrutiny? Trevor Treharne asks leading moral philosophers and experts.

While economics is seldom discussed in directly ethical terms, it is through the spirit of moral inquiry that today’s capitalist societies were originally imagined.

Adam Smith, the 18th century thinker known as the father of modern economics and capitalism, was first and foremost a moral philosopher.

Smith’s famous metaphor of ‘the invisible hand’ attempted to describe the wider social benefits that result from individual actions. Capitalism was designed to be ethical, but is it?

The achievement of capitalism

Assuming society has certain obligations – the reduction of poverty, the improvement of health and the extension of human happiness – capitalism plays an important role.

“The best things about capitalism are its mind-boggling productivity and its exquisite sensitivity to what people want and need”, says John Bishop, a moral philosopher at Trent University in Canada and editor of the book Ethics and Capitalism.

Bishop argues that historically and globally, capitalism has caused the life expectancy of people to rise from about 28 years to over 70 years.

“Much of this has been through reducing infant and child mortality – a most ethical goal – and lifting hundreds of millions of people out of abject poverty.”

“Capitalism creates net new wealth on a scale the world has never before seen”, he says.

Harvard cognitive scientist Steven Pinker says that it’s hard to have an intelligent discussion about capitalism because too many people confuse “capitalism” with “unregulated capitalism with no social welfare”. Their criticisms are not about capitalism itself but about whether it’s a good idea for governments to regulate economic activity to provide social benefits. This is completely compatible with capitalism, as the capitalist economies of Scandinavia, Canada, and New Zealand prove.

“Putting aside that red herring, there are several advantages to capitalist economies, apart from generating wealth that makes rich and poor alike better off”, Pinker says.

“Countries that trade with each other are less likely to start wars with each other, because with effective markets it’s cheaper to buy things than to steal them.”

“Also, in a market economy, other people are more valuable to you alive than dead. All of this reduces some of the exploitative incentives of war and conquest”, Pinker adds.

The issues with capitalism

Bishop warns that capitalism has a tendency to distribute its benefits in an extremely unequal fashion.

“It also has the inability to value important things that do not have market value such as human dignity, caregiving, the climate, the environment, and people who have nothing to offer the market, such as children, the severely disabled, and the elderly”, he says.

Bishop says capitalism also fails to account for the needs of future generations.

“Given this, our ethical duty is to mitigate the harms and omissions of capitalism without serious disruption of its immense productivity and wealth creation.”

Simon Tormey, a political theorist at The University of Sydney, says the problems of capitalism depend on the governing system it operates within.

“What has tended historically to dictate which end of the [ethical] spectrum capitalism appears on is the ability of ordinary people to rein back capitalism’s excesses through the actions of the state on the one hand, and of social movements such as trade unions on the other”, he says.

“Countries with strong states and strong social movements are able to develop forms of capitalism that are quite ethical in this respect and Scandinavia would perhaps offer the most complete examples.”

“However, countries where there is authoritarian governance, where trade unions and other social movements are weak, are often characterised by a highly unethical and obnoxious form of capitalism that preys on individual weakness to generate profits for a small minority.”

Tormey adds that unfortunately much of the evidence of the past 40 years suggests a progressively slippery slope to domination by “the 1%” and thus to “unethical capitalism”.

Not perfect, but superior

Society is ordered by picking a preference from a series of competing systems, all of which flourish and flounder in varying degrees.

It is not sensible to overthrow a system such as capitalism on the mere basis of a few potential pitfalls.

But noting the issues can start a conversation about its reform or adaptation.

“Is capitalism ethical? As compared to what?” asks moral philosopher Peter Singer.

“So far, none of the alternatives tried have done nearly as good a job as capitalism of keeping most of the population out of poverty and even providing them with a reasonable level of comfort.”

“Until we have evidence that there is another system that can do better, the sensible course seems to be to stick with capitalism and attempt to deal with its flaws rather than to abandon it”, Singer adds.


Philosophy must (and can) thrive outside universities

A recent article in ABC Religion by Steve Fuller described philosophy being “at the crossroads”. The article explores philosophy’s relationship to universities and what living a “philosophical life” really looks like.

Reading it, my mind whisked me back to some of my earliest days at The Ethics Centre. Before returning to Sydney, I enjoyed the good fortune to complete my doctorate at Cambridge – one of the great universities of the world. While there, I mastered the disciplines of academic philosophy. However, I also learned the one lesson that my supervisor offered me at our first meeting – I should always “go for the jugular”. As it happens, I was quite good at drawing blood.

Perhaps this was a young philosopher’s sport because, as I grew older and read more deeply, I came to realise what I’d learned to do was not really consistent with the purpose and traditions of philosophy at all. Rather, I had become something of an intellectual bully – more concerned with wounding my opponents than with finding the ‘truth’ in the matter being discussed.

This realisation was linked to my re-reading of Plato – and his account of the figure of Socrates who, to this day, remains my personal exemplar of a great philosopher.

The key to my new understanding of Socrates lay in my realisation that, contrary to what I had once believed, he was not a philosophical gymnast deliberately trying to tie his interlocutors in knots (going for the jugular). Rather, he was a man sincerely wrestling, alongside others, with some of the toughest questions faced by humanity in order to better understand them. What is justice? What is a good life? How are we to live?

The route to any kind of answer worth holding is incredibly difficult – and I finally understood (I was a slow learner) that Socrates subjected his own ideas to the same critical scrutiny he required of others.

In short, he was totally sincere when he said that he really did not know anything. All of his questioning was a genuine exploration involving others who, in fact, did claim to ‘know’. That is why he would bail up people in the agora (the town square) who were heading off to administer ‘justice’ in the Athenian courts.

Surely, Socrates would say, if you are to administer justice – then you must know what it is. As it turned out, they did not.

The significance of Socrates’ work in the agora was not lost on me. Here was a philosopher working in the public space. The more I looked, the more it seemed that this had been so for most of the great thinkers.

So that is what I set out to do.

One of my earliest initiatives was to head down to Martin Place, in the centre of Sydney, where I would set up a circle of 10 plastic chairs and two cardboard signs that said something like, “If you want to talk to a philosopher about ideas, then take a seat”. And there I would sit – waiting for others.

Without fail they would come – young, old, rich, poor – wanting to talk about large, looming matters in their lives. I remember cyclists discussing their place on our roads, school children discussing their willingness to cheat in exams (because they thought the message of society is ‘do whatever it takes’).

Occasionally, people would come from overseas – having heard of this odd phenomenon. A memorable occasion involved a discussion with a very senior and learned rabbi from Amsterdam – the then global head (I think) of Progressive Judaism. On another occasion, a woman brought her mother (visiting from England) to discuss her guilt at no longer believing in God. I remember we discussed what it might mean to feel guilt in relation to a being you claimed did not exist. There were few answers – but some useful insights.

Anyway, I came to imagine a whole series of philosophers’ circles dotted around Martin Place and other parts of Sydney (and perhaps Australia). After all, why should I be the only philosopher pursuing this aspect of the philosophical life? So I reached out to the philosophy faculty at Sydney University – thinking (naively, as it turned out) I would have a rush of colleagues wishing to join me.

Alas – not one was interested. The essence of their message was that they doubted the public would be able to engage with ‘real philosophy’ – that the techniques and language needed for philosophy would be bewildering to non-philosophers. I suspect there was also an undeclared fear of being exposed to their fellow citizens in such a vulnerable position.

Actually, I still don’t really know what led to such a wholesale rejection of the idea.

However, I think it was a great pity that other philosophers felt more comfortable within the walls of their universities than out in the wider world.

I doubt that anything I write or say will be quoted in the centuries to come. However, I would not, for a moment, change the choice I made to step outside of the university and work within the agora. Life then becomes messy and marvellous in equal measure. Everything needs to be translated into language anyone can understand (and I have found that this is possible without sacrificing an iota of philosophical nuance).

You constantly need to challenge unthinking custom and practice most people simply take for granted. This does not make you popular. You are constantly accused of being ‘unethical’ because you entertain ideas one group or another opposes. You please almost nobody. You cannot aim to be liked. And you have to deal with the rawness of people’s lives – discovering just how much the issues philosophers consider (especially in the field of ethics) really matter.

This is not to say that ‘academic’ philosophy should be abandoned. However, I can see no good reason why philosophers should think this is the only (or best) way to be a philosopher. Surely, there is room (and a need) for philosophers to live larger, more public lives.

I have scant academic publications to my name. However, at the height of the controversy surrounding the introduction of ethics classes for children not attending scripture in NSW, I enjoyed the privilege of being accused of “impiety” and “corrupting the youth” by the Anglican and Catholic Archbishops of Sydney. Why a ‘privilege’? Because these were precisely the same charges alleged against Socrates. So far, I have avoided the hemlock. For a philosopher, what could be better than that?


“Animal rights should trump human interests” – what’s the debate?

Are the ways humans subject animals to our own needs and wants justified?

Humans regularly impose our own demands on the animal world, whether it’s eating meat, scientific testing, keeping pets, sport, entertainment or protecting ourselves. But is it reasonable and ethical to do so?

Humans and animals

We often talk about humans and animals as though they are two separate categories of being. But aren’t humans just another kind of animal?

Many would say “no”, claiming humans have greater moral value than other animals. Humans possess the ability to use reason while animals act only on instinct, they say. This capacity for reason is held up as the key factor that makes humans uniquely worthy of protection and of greater moral value than animals.

“Animals are not self-conscious and are there merely as means to an end. That end is man.” – Immanuel Kant

Others argue that this is “speciesism” because it shows an unjustifiable bias for human beings. To prove this, they might point to cases where a particular animal shows more reason than a particular human being – for example, a chimpanzee might show more rational thought than a person in a coma. If we don’t grant greater moral value to the animal in these cases, it shows that our beliefs are prejudicial.

Some will go further and suggest that reason is not relevant to questions of moral value, because it measures the value of animals against human standards. In determining how a creature should be treated, philosopher Jeremy Bentham wrote, “… the question is not ‘Can they reason?’, nor ‘Can they talk?’, but ‘Can they suffer?’”

So in determining whether animal rights should trump human interests, we first need to figure out how we measure the value of animals and humans.

Rights and interests

What are rights and how do they correspond to interests? Generally speaking, you have a right when you are entitled to do something or prevent someone else from doing something to you. If humans have the right to free speech, this is because they are entitled to speak freely without anyone stopping them. The right protects an activity or status you are entitled to.

Rights come in a range of forms – natural, moral, legal and so on – but violating someone’s right is always a serious ethical matter.

“Animals are my friends. I don’t eat my friends.” – George Bernard Shaw

Interests are broader than rights and less serious from an ethical perspective. We have an interest in something when we have something to gain or lose by its success or failure. Humans have interests in a range of different projects because our lives are diverse. We have interests in art, medical research, education, leisure, health…

When we ask whether animal rights should trump human interests, we are asking a few questions. Do animals have rights? What are they? And if animals do have rights, are they more or less important than the interests of humans? We know human rights will always trump human interests, but what about animal rights?

Animal rights vs animal welfare

A crucial point in this debate is understanding the difference between animal rights and animal welfare. Animal rights advocates believe animals deserve rights to prevent them from being treated in certain ways. The exploitation of animals who have rights is, they say, always morally wrong – just like it would be for a human.

Animal welfare advocates, on the other hand, believe using animals can be either ethical or, in practice, unavoidable. These people aim to reduce any suffering inflicted on animals, but don’t seek to end altogether what others regard as exploitative practices.

As one widely used quote puts it, “Animal rights advocates are campaigning for no cages, while animal welfarists are campaigning for bigger cages”.

Are they mutually exclusive? What does taking a welfarist approach say about the moral value of animals?

The debate, ‘Animal rights should trump human interests’, took place on 3 May 2016 at the City Recital Hall in Sydney.


‘Eye in the Sky’ and drone warfare

Warning – general plot spoilers to follow.

Collateral damage

Eye in the Sky begins as a joint British and US surveillance operation against known terrorists in Nairobi. During the operation, it becomes clear a terrorist attack is imminent, so the goals shift from surveillance to seek and destroy.

Moments before firing on the compound, drone pilots Steve Watts (Aaron Paul) and Carrie Gershon (Phoebe Fox) see a young girl setting up a bread stand near the target. Is her life acceptable collateral damage if her death saves many more people?

In military ethics, the question of collateral damage is a central point of discussion. The principle of ‘non-combatant immunity’ requires no civilian be intentionally targeted, but it doesn’t follow from this that all civilian casualties are unethical.

Most scholars and some Eye in the Sky characters, such as Colonel Katherine Powell (Helen Mirren), accept even foreseeable casualties can be justified under certain conditions – for instance, if the attack is necessary, the military benefits outweigh the negative side effects and all reasonable measures have been taken to avoid civilian casualties.

Risk-free warfare

The military and ethical advantages of drone strikes are obvious. By operating remotely, we remove the risk of our military men and women being physically harmed. Drone strikes are also becoming increasingly precise, and surveillance resources mean collateral damage can be minimised.

However, the damage radius of a missile strike drastically exceeds most infantry weapons – meaning the tools used by drones are often less discriminate than soldiers on the ground carrying rifles. If collateral damage is only justified when reasonable measures have been taken to reduce the risk to civilians, is drone warfare morally justified, or does it simply shift the risk away from our war fighters to the civilian population? The key question here is what counts as a reasonable measure – how much are we permitted to reduce the risk to our own troops?

Eye in the Sky forces us to confront the ethical complexity of war.

Reducing risk can also have consequences for the morale of soldiers. Christian Enemark, for example, suggests that drone warfare marks “the end of courage”. He wonders in what sense we can call drone pilots ‘warriors’ at all.

The risk-free nature of a drone strike means that he or she requires none of the courage that for millennia has distinguished the warrior from all other kinds of killers.

How then should drone operators be regarded? Are these grounded aviators merely technicians of death, at best deserving only admiration for their competent application of technical skills? If not, by what measure can they be reasonably compared to warriors?

Moral costs of killing

Throughout the film, military commanders Katherine Powell and Frank Benson (Alan Rickman) make a compelling consequentialist argument for killing the terrorists despite the fact it will kill the innocent girl. The suicide bombers, if allowed to escape, are likely to kill dozens of innocent people. If the cost of stopping them is one life, the ‘moral maths’ seems to check out.

Ultimately it is the pilot, Steve Watts, who has to take the shot. If he fires, it is by his hand a girl will die. This knowledge carries a serious ethical and psychological toll, even if he thinks it was the right thing to do.

There is evidence suggesting drone pilots suffer from Post Traumatic Stress Disorder (PTSD) and other forms of trauma at the same rates as pilots of manned aircraft. This can arise even if they haven’t killed any civilians. Drone pilots not only kill their targets, they observe them for weeks beforehand, coming to know their targets’ habits, families and communities. This means they humanise their targets in a way many manned pilots do not – and this too has psychological implications.

Who is responsible?

Modern military ethics insist all warriors have a moral obligation to refuse illegal or unethical orders. This sits in contrast to older approaches, by which soldiers had an absolute duty to obey. St Augustine, an early writer on the ethics of war, called soldiers “swords in the hand” of their commanders.

In a sense, drone pilots are treated in the same way. In Eye in the Sky, a huge number of senior decision-makers debate whether or not to take the shot. However, as Powell laments, “no one wants to take responsibility for pulling the trigger”. Who is responsible? The pilot who has to press the button? The highest authority in the ‘kill chain’? Or the terrorists for putting everyone in this position to begin with?