The world’s biggest social media platform’s slide into a cesspit of fake news, clickbait and shouty trolling was no accident.  

“Facebook gives the most reach to the most extreme ideas. They didn’t set out to do it, but they made a whole bunch of individual choices for business reasons,” Facebook whistleblower Frances Haugen said. 

In her Festival of Dangerous Ideas talk Unmasking Facebook, data engineer Haugen explained that back in Facebook’s halcyon days of 2008, when it actually was about your family and friends, your personal circle wasn’t making enough content to keep you regularly engaged on the platform. To encourage more screentime, Facebook introduced Pages and Groups and started pushing them on its users, even adding people to groups automatically if they interacted with their content. Naturally, the most out-there groups became the most popular ones: in 2016, 65% of people who joined neo-Nazi groups in Germany did so because Facebook suggested them.

By 2019 (if not earlier), 60% of all content that people saw on Facebook came from their Groups, pushing out legitimate news sources, political parties across the spectrum, non-profits, small businesses and other pages that didn’t pay to promote their posts. Haugen estimates content from Groups now makes up 85% of what people see on Facebook.

I was working for an online publisher between 2013 and 2016, and our traffic was entirely at the mercy of the Facebook algorithm. Some weeks we’d be prominent in people’s feeds and get great traffic; other weeks the algorithm would change without warning and our traffic and revenue would drop to nothing. By 2016 the situation had become so bad that I was made redundant, and in 2018 the website folded entirely and disappeared from the internet.

Personal grievances aside, Facebook has also had sinister implications for democracy, and even for genocide, as Haugen reminds us. The 2016 Trump election exposed serious privacy deficits at Facebook when 87 million users had their data harvested by Cambridge Analytica for targeted pro-Trump political advertising. Enterprising Macedonian fake-news writers exploited the recommended-link carousel to make US$40 million pumping out insane, and highly clickable, alt-right conspiracy theories that undoubtedly played a part in helping Trump into the White House, along with the trolls spreading anti-Clinton hate from the Glavset in St Petersburg.

Worse, the Myanmar government sent military officials to Russia to learn online propaganda techniques for its genocide of the Muslim Rohingya from 2016 onwards, flooding Facebook with vitriolic anti-Rohingya misinformation and incitements to violence. As The Guardian reported, around that time Facebook had only two Burmese-speaking content moderators. Facebook has also been blamed for “supercharging hate speech and inciting ethnic violence” (Vice) in Ethiopia over the past two years, with engagement-based ranking pushing the most extreme content to the top and English-first content moderation systems proving no match for linguistically diverse environments where Facebook is the internet.

There are design tools that can drive down the spread of misinformation, like forcing people to click on an article before they can blindly share it, and adding friction deep in a reshare chain, so that the fourth sharer onwards must copy and paste content before they can share or react to it. Haugen said these measures are as effective at preventing the spread of misinformation as third-party fact-checkers, they work in any language, and we can mobilise as nations and customers to put pressure on companies to implement them.
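To make the reshare-friction idea concrete, here is a minimal sketch in Python. It is purely illustrative, not Facebook’s actual implementation: the depth threshold, the post structure and the function names are all assumptions made for the example.

```python
# Hypothetical sketch of reshare-depth friction, as described by Haugen.
# The threshold and data model are assumptions made purely for illustration.

MAX_FRICTIONLESS_DEPTH = 3  # beyond the third sharer in a chain, add friction


def can_one_click_share(reshare_depth: int) -> bool:
    """True if this post can still be reshared with a single click."""
    return reshare_depth < MAX_FRICTIONLESS_DEPTH


def share(post: dict) -> dict:
    """Reshare a post, or demand copy-and-paste once the chain is too deep."""
    depth = post.get("reshare_depth", 0)
    if can_one_click_share(depth):
        # Normal one-click reshare: copy the post and record the longer chain.
        return {**post, "reshare_depth": depth + 1}
    # Deep in the chain: no share or react buttons. The user must copy and
    # paste, which slows viral spread without any fact-checking or
    # language-specific moderation.
    raise PermissionError("One-click sharing disabled; copy and paste instead.")
```

The appeal of the design is that a small amount of manual effort at the deep end of a chain works the same way in every language, which is exactly where automated moderation falls down.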

But the best thing we can do is insist on having humans involved in the decision-making process about where to focus our attention, because AI and computers will always automatically opt for the most extreme content that gets the most clicks and eyeballs. 

For technology writer Kevin Roose, though, in his talk Caught in a Web, we are already surrounded by artificial intelligence and algorithms, and they’re only going to get smarter, more sophisticated, and more deeply entrenched. 

Seventy per cent of our time on YouTube is now spent watching videos suggested by its recommendation engine, and 30% of Amazon page views come from recommendations. We let Netflix choose shows for us, Spotify curate radio for us and Google Maps tell us which way to drive or walk, and with the Internet of Things, smart fridges even order milk and eggs for us before we know we need them.

One AI researcher told Roose about a commercialised tool called pedestrian re-identification, which can pick you out across multiple CCTV feeds, combine that with your phone’s location data and bank transactions, and work out when to serve you an ad for banana bread as you’re getting off the train and walking towards your favourite café.

And in news that will horrify but not surprise journalists, Roose said we’re entering a new age of ubiquitous synthetic media, in which articles written by machines will be hyper-personalised at the point of click for each reader by crawling their social media profiles.

After 125 years of the reign of ‘all the news that’s fit to print’, we’re now entering the era of “all the news that’s dynamically generated and personalised by machines to achieve business objectives.” 

How can we fight back and resist this seemingly inevitable drift towards automation, surveillance and passivity? Roose highlights three things to do:  

Quoting Socrates, Know Thyself. Know your own preferences and whether you’re choosing something because you like it or because the algorithm suggested it to you.

Resist Machine Drift. This is where we unconsciously hand over more and more of our decisions to machines, and “it’s the first step in losing our autonomy.” He recommends “preference mapping”: writing down a list of all your choices in a day, from what you ate or listened to, to what route you took to work. Did you make the decisions, or did an app help you?

Invest in Humanity. By this he means investing time in improving our deeply human skills that computers aren’t good at, like moral courage, empathy and divergent, creative thinking.

Roose is optimistic in this regard: as AI gets better at understanding us and insinuating its way into our lives, he thinks we’re going to see a renewed reverence for humanism and the things machines can’t do. That means more appreciation for the ‘soft’ skills of health care workers, teachers and therapists, and perhaps even an artisanal movement of journalism written by humans.

I can’t be quite as optimistic as Roose. These soft skills have never been highly valued under capitalism and I can’t see that changing (I really hope I’m wrong). But I do agree with him that each new generation of social media app (TikTok and BeReal, for example), in the global West at least, will be less toxic than the one before it, driven by the demands of Millennials, Generation Z and the generations to come.

Eventually, this generational drift away from the legacy social media platforms, now infected with toxicity, will force them either to collapse or to completely reshape their business models along the lines of the newer apps, particularly if they want to keep operating in fragile countries and emerging economies.

And that’s one reason to not let the machines win. 

 

Visit FODI on demand for more provocative ideas, articles, podcasts and videos.