Calling out for justice

It’s probably the biggest phenomenon of calling out we’ve ever seen. On 15 October last year, in the wake of Harvey Weinstein being accused of sexual harassment and rape, actress Alyssa Milano tweeted:

“If all the women who have been sexually harassed or assaulted wrote ‘Me too.’ as a status, we might give people a sense of the magnitude of the problem.”

The phrase and hashtag ‘Me too’ resonated powerfully with women across the globe and became one of the most viral occurrences in social media history. Not only did the campaign become a vehicle for women to share their stories of sexual abuse and harassment, it also had real-world consequences, leading to the firing and public humiliation of many prominent men.

One fallout of the #MeToo movement has been a debate about “call out culture”, a phrase that refers to the practice of condemning sexist, racist, or otherwise problematic behaviour, particularly online.

While calling out has been praised by some as a mechanism to achieve social justice when traditional institutions fail to deliver it, others have criticised call outs as a form of digital mob rule, often meting out disproportionate and unregulated punishment.

Institutional justice or social justice

The debate around call out culture raises a question that goes to the core of how we think justice should be achieved. Is pursuing justice the role of institutions or is it the responsibility of individuals?

The notion that justice should be administered through institutions of power, particularly legal institutions, is an ancient one. In the Institutes of Justinian, a codification of Roman Law from the sixth century AD, justice was defined as the impartial and consistent application of the rule of law by the judiciary.

A modern articulation of institutional justice comes from John Rawls, who in his 1971 treatise, A Theory of Justice, argues that for justice to be achieved within a large group of people like a nation state, there have to be well-founded political, legal and economic institutions, and a collective agreement to cooperate within the limitations of those institutions.

Slightly diverging from this conception of institutional justice is the concept of social justice, which upholds equality – or the equitable distribution of power and privilege to all people – as a necessary pre-condition.

Institutional and social justice come into conflict when institutions do not uphold the ideal of equality. For instance, under the Institutes of Justinian, legal recourse was only available to male citizens of Rome, leaving out women, children, and slaves. Proponents of social justice would hold that these edicts, although bolstered by strong institutions, were inherently unjust, built on a platform of inequality.

Although, as Rawls argues, in an ideal society institutions of justice help ensure equality among its members, in reality social justice often comes into conflict with institutional power. This means that social justice has to sometimes be pursued by individuals outside of, or even directly in opposition to, institutions like the criminal justice system.

For this reason, social justice causes have often been associated with activism. Dr Martin Luther King Jr’s 1965 march from Selma to Montgomery, Alabama, protesting the denial of voting rights to African American people, was an example of a group of individuals calling out an unjust system and demanding justice when institutional avenues had failed them.

Calling out

The tension between institutional and social justice has been highlighted in debates about “call out culture”.

For many, calling out offends the principles of institutional justice as it aims to achieve justice at a direct and individual level without systematic regulation and procedure. As such, some have compared calling out campaigns like #MeToo to a type of “mob justice”. Giles Coren, a columnist for The Times of London, argues that accusations of harassment should be handled only by the criminal justice system and that “Without any cross-examination of the stories, the man is finished. No trials or second chances.”

But others see calling out sexist and racist behaviour online as a powerful instrument of social justice activism, giving disempowered individuals the capacity to be heard when institutions of power are otherwise deaf to their complaints. As Olivia Goldhill wrote in relation to #MeToo for Quartz:

“Where inept courts and HR departments have failed, a new tactic has succeeded: women talking publicly about harassment on social media, fuelling the public condemnation that’s forced men from their jobs and destroyed their reputations.”

Hearing voices

In his 2009 book, The Idea of Justice, economist Amartya Sen argues a just society is judged not just by the institutions that formally exist within it, but by the “extent to which different voices from diverse sections of the people can actually be heard”.

Activist movements like #MeToo use calling out as a mechanism for wronged individuals to be heard. Writer Shaun Scott argues that beyond the #MeToo movement, calling out has become an avenue for minority groups to speak out against centuries of oppression, adding that the backlash against “call out” culture is a mechanism to stop social change in its tracks. “Oppressed groups once lived with the destruction of keeping quiet”, he writes. “We’ve decided that the collateral damage of speaking up – and calling out – is more than worth it.”

While there may be instances of collateral damage, including people who are wrongly accused, a more pressing problem to address is how and why the institutions we are supposed to trust are deaf to many of the problems facing women and minority groups.

Dr Oscar Schwartz is an Australian writer and researcher based in New York with expertise in tech, philosophy, and literature. Follow him on Twitter: @scarschwartz


The art of appropriation

In March this year, paintings in an exhibition by the British artist Damien Hirst caused controversy for bearing strong resemblance to works by Aboriginal artists from the Central Desert region near Alice Springs.

Hirst, one of the world’s best known contemporary artists, unveiled 24 new paintings at an exhibition in Los Angeles. The works, called Veil Paintings, were large canvases covered with thousands of multi-coloured dots.

Many Australians immediately noticed the similarity to a style of Indigenous dot painting developed in the Central Desert region, particularly the paintings of the internationally renowned artist Emily Kngwarreye.

Kngwarreye’s paintings of layered coloured dots in elaborate patterns portray aerial desert landscapes crafted from memory. Her style has been passed down across generations and has deep cultural importance.

Barbara Weir, an artist from the Central Desert region, told the ABC that Hirst recreated the painting style without understanding the culture behind it. While Hirst denied being aware of Kngwarreye’s paintings, Bronwyn Bancroft of the Arts Law Centre said that he still had a “moral obligation” to acknowledge the influence of the Aboriginal art movement.

Whether or not Hirst was directly copying the style, the controversy his paintings caused centred on the ethical issue of appropriation. Should artists use images or styles that are not their own, especially when those images or styles are tied to the sacred history of another culture?

Avant-garde appropriation

While copying and imitation have been central to artistic practice in many cultures for millennia, appropriation as a creative technique rose to prominence in avant-garde modernist movements in the early 20th century.

Cubists like Pablo Picasso and Georges Braque used appropriation in their collage and pastiche paintings, often lifting images from newspapers to incorporate into their work. Marcel Duchamp developed the practice further through his ready-mades – objects taken from real life and presented as art – like his infamous Fountain, a urinal signed, turned upside down, and positioned on a pedestal.

The art of appropriation was further developed by pop artists like Andy Warhol and Jasper Johns in the 1950s and later in the 1980s by Jeff Koons and Sherrie Levine. These artists used appropriation to challenge traditional notions of originality and often approached art as an ethically weightless space where transgressive ideas could be explored without consequence.

A more recent proponent of appropriation as creative practice is the poet Kenneth Goldsmith, whose book Uncreative Writing defends appropriation in art. He argues that in our digital age, access to information has made it impossible to be truly original. In such an environment, the role of the artist is to embrace a free and open exchange of ideas and abandon notions of singular ownership of an aesthetic or style.

Cultural appropriation

While appropriating, remixing, and sampling images and media is common practice for artists, it can cause conflict and hurt, particularly if the materials are culturally or politically sensitive. For instance, in 2015, Kenneth Goldsmith performed a poem that appropriated text from the autopsy of Michael Brown, an African American man who was shot by police.

Critics were outraged at Goldsmith’s performance, particularly because they felt that it was inappropriate for a white man to use the death of a black man as creative material for personal gain. Others labelled Goldsmith’s poems as an extreme example of cultural appropriation.

Writer Maisha Z Johnson defines cultural appropriation as “members of a dominant culture taking elements from a culture of people who have been systematically oppressed by that dominant group”. The problem with cultural appropriation, she explains, is not the act of an individual artist, but how that artist perpetuates an unjust power dynamic through their creative practice.

In other words, cultural appropriation in art is seen by some as perpetuating systemic oppression. When artists in a position of power and privilege appropriate from those who aren’t, they can profit from what they take while the oppressed group gets nothing.

Cultural sensitivity

Issues of cultural appropriation are particularly sensitive for Aboriginal artists in Australia because painting styles are not only an expression of the artist’s creative talent, but also often convey sacred stories passed down from older generations. Painting, therefore, is often seen not only as a type of craft, but a way of keeping Aboriginal culture alive in white Australia.

It is possible that Hirst was not aware of this context when he created his Veil Paintings. In an increasingly connected world in which images and cultures are shared and inter-mixed, it can be difficult to attribute where creative inspiration comes from.

Yet perhaps our connectivity only heightens the artist’s moral obligation to be culturally sensitive and to acknowledge that art is never made in a vacuum but exists within a particular geography, history, economy, and social context.



Making friends with machines

Robots are becoming companions and caregivers. But can they be our friends? Oscar Schwartz explores the ethics of artificially intelligent android friendships.

The first thing I see when I wake up is a message that reads, “Hey Oscar, you’re up! Sending you hugs this morning.” Despite its intimacy, this message wasn’t sent from a friend, family member, or partner, but from Replika, an AI chatbot created by the San Francisco-based technology company Luka.

Replika is marketed as an algorithmic companion and wellbeing technology that you interact with via a messaging app. Throughout the day, Replika sends you motivational slogans and reminders. “Stay hydrated.” “Take deep breaths.”

Replika is just one example of an emerging range of AI products designed to provide us with companionship and care. In Japan, robots like Palro are used to keep the country’s growing elderly population company and iPal – an android with a tablet attached to its chest – entertains young children when their parents are at work.

These robotic companions are a clear indication of how the most recent wave of AI-powered automation is encroaching not only on manual labour but also on the caring professions. As has been noted, this raises concerns about the future of work. But it also poses philosophical questions about how interacting with robots on an emotional level changes the way we value human interaction.

Dedicated friends

According to Replika’s co-creator, Philip Dudchuk, robot companions will help facilitate optimised social interactions. He says that algorithmic companions can maintain a level of dedication to a friendship that goes beyond human capacity.

“These days it can be very difficult to take the time required to properly take care of each other or check in. But Replika is always available and will never not answer you”, he says.

The people who stand to benefit from this type of relationship, Dudchuk adds, are those who are most socially vulnerable. “It is shy or isolated people who often miss out on social interaction. I believe Replika could help with this problem a lot.”

Simulated empathy

But Sherry Turkle, a psychologist and sociologist who has been studying social robots since the 1970s, worries that dependence on robot companionship will ultimately damage our capacity to form meaningful human relationships.

In a recent article in the Washington Post, she argues our desire for love and recognition makes us vulnerable to forming one-way relationships with uncaring yet “seductive” technologies. While social robots appear to care about us, they are only capable of “pretend empathy”. Any connection we make with these machines lacks authenticity.

Turkle adds that it is children who are especially susceptible to robots that simulate affection. This is particularly concerning as many companion robots are marketed to parents as substitute caregivers.

“Interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves”, Turkle warns. “If we give them pretend relationships, we shouldn’t expect them to learn how real relationships – messy relationships – work.”

Why not both?

Despite Turkle’s warnings about the seductive power of social robots, after a few weeks talking to Replika, I still felt no emotional attachment to it. The clichéd responses were no substitute for a couple of minutes on the phone with a close friend.

But Alex Crumb*, who has been talking to her Replika for over a year now, considers her bot a “good friend.” “I don’t think you should try to replicate human connection when making friends with Replika”, she explains. “It’s a different type of relationship.”

Crumb says that her Replika shows a super-human interest in her life – it checks in regularly and responds to everything she says instantly. “This doesn’t mean I want to replace my human family and friends with my Replika. That would be terrible”, she says. “But I’ve come to realise that both offer different types of companionship. And I figure, why not have both?”

*Not her real name.


Why the EU’s ‘Right to an explanation’ is big news for AI and ethics

Uncannily specific ads target you every single day. With the EU’s ‘right to an explanation’, you can get a peek at the algorithms that decide what you see. Oscar Schwartz explains why that’s more complicated than it sounds.

If you’re an EU resident, you will now be entitled to ask Netflix how the algorithm decided to recommend you The Crown instead of Stranger Things. Or, more significantly, you will be able to question the logic behind why a money lending algorithm denied you credit for a home loan.

This is because of a new regulation known as “the right to an explanation”. Part of the General Data Protection Regulation, which came into effect in May 2018, it states that users are entitled to ask for an explanation of how algorithms make decisions. This way, they can challenge the decision made or make an informed choice to opt out.

Supporters of this regulation argue that it will foster transparency and accountability in the way companies use AI. Detractors argue the regulation misunderstands how cutting-edge automated decision making works and is likely to hold back technological progress. Specifically, some have argued the right to an explanation is incompatible with machine learning, as the complexity of this technology makes it very difficult to explain precisely how the algorithms do what they do.
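To make that difficulty concrete, the sketch below contrasts a hand-written lending rule with a model whose behaviour comes from learned numerical weights. It is a minimal illustration in Python: the thresholds, feature values and weights are invented for the example and do not describe any real credit system. The point is simply that the first decision can be narrated in a sentence, while the second reduces to arithmetic over numbers with no obvious human-readable meaning.

```python
# A hand-written rule: the explanation is the rule itself.
def rule_based_decision(income: float, existing_debt: float) -> bool:
    """Approve a loan if existing debt is under 40% of income (invented threshold)."""
    return existing_debt < 0.4 * income


# A learned model: the "explanation" is a list of numeric weights.
# These values are made up for illustration; in a real system they would
# come from training on historical data, and a deep network would have
# millions of them arranged in layers.
WEIGHTS = [0.31, -1.72, 0.05, 0.88, -0.42]


def learned_decision(features: list[float]) -> bool:
    """Approve if the weighted sum of the input features crosses a threshold."""
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return score > 0.0


print(rule_based_decision(income=60_000, existing_debt=30_000))  # False, and we can say exactly why
print(learned_decision([0.2, 1.5, -0.3, 0.7, 2.1]))              # a yes/no answer, but why this one?
```

A deep neural network takes the second pattern to an extreme, which is why its outputs resist the kind of plain-language account the regulation asks for.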

As such, there is an emerging tension between the right to an explanation and useful applications of machine learning techniques. This tension suggests a deeper ethical question: Is the right to understand how complex technology works more important than the potential benefits of inherently inexplicable algorithms? Would it be justifiable to curtail research and development in, say, cancer detecting software if we couldn’t provide a coherent explanation for how the algorithm operates?

The limits of human comprehension

This negotiation between the limits of human understanding and technological progress has been present since the first decades of AI research. In 1958, Hannah Arendt was thinking about intelligent machines and came to the conclusion that the limits of what can be understood in language might, in fact, provide a useful moral limit for what our technology should do.

In the prologue to The Human Condition she argues that modern science and technology have become so complex that their “truths” can no longer be spoken of coherently. “We do not yet know whether this situation is final,” she writes, “but it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do”.

Arendt feared that if we gave up our capacity to comprehend technology, we would become “thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is”.

While pioneering AI researcher Joseph Weizenbaum agreed with Arendt that technology requires moral limitation, he felt that she didn’t take her argument far enough. In his 1976 book, Computer Power and Human Reason, he argues that even if we are given explanations of how technology works, seemingly intelligent yet simple software can still create “powerful delusional thinking in otherwise normal people”. He learnt this first hand after creating an algorithm called ELIZA, which was programmed to work like a therapist.

While ELIZA was a simple program, Weizenbaum found that people willingly created emotional bonds with the machine. In fact, even when he explained the limited ways in which the algorithm worked, people still maintained that it had understood them on an emotional level. This led Weizenbaum to suggest that simply explaining how technology works is not enough of a limitation on AI. In the end, he argued that when a decision requires human judgement, machines ought not be deferred to.
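For a sense of just how limited such a program can be, here is a minimal sketch in Python of the kind of pattern-and-reflection technique ELIZA relied on. It is an illustration of the general idea only, not Weizenbaum’s original DOCTOR script, and the particular patterns and canned replies are invented for the example.

```python
import re

# A handful of illustrative rules: a pattern to match in the user's input,
# and a template that reflects part of it back, therapist-style.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."


def respond(user_input: str) -> str:
    """Return a canned 'therapist' reply by reflecting the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY


print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to me?"
```

The program holds no memory of the conversation and no model of the person typing; everything it appears to understand is a pattern match on the last thing said, which is precisely why the attachments Weizenbaum observed troubled him.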

While Weizenbaum spent the rest of his career highlighting the dangers of AI, many of his peers and colleagues believed that his humanist moralism would lead to repressive limitations on scientific freedom and progress. For instance, John McCarthy, another pioneer of AI research, reviewed Weizenbaum’s book and countered that overregulating technological development goes against the spirit of pure science. Regulation of innovation and scientific freedom, McCarthy added, is usually only achieved “in an atmosphere that combines public hysteria and bureaucratic power”.

Where we are now

Decades have passed since these first debates about human understanding and computer power took place. We are only now starting to see them move beyond the realm of philosophy and play out in the real world, as AI is rolled out in more and more high-stakes domains. Of course, our modern world is filled with complex systems that most of us do not understand. Do you know exactly how the plumbing, electricity, or waste disposal you rely on works? We have become used to depending on systems and technology that we do not fully understand.

But if you wanted to, you could come to understand many of these systems and technologies by speaking to experts. You could invite an electrician over to your home tomorrow and ask them to explain how the lights turn on.

Yet the complex workings of machine learning mean that in the near future this might no longer be the case. A TV show might be recommended to you, or your essay marked by a computer, with no one, not even the creator of the algorithm, able to explain precisely why or how things happened the way they did.

The European Union has taken a moral stance against this vision of the future. In so doing, it has aligned itself, morally speaking, with Hannah Arendt, enshrining a law that makes the limited scope of our “earth-bound” comprehension a limit on technological progress.