[Image: a media literacy podcast recording session. A woman with headphones, a laptop, and a microphone discusses the politics of platforming.]

In an age where emotionally charged information travels faster than ever, the ability to critically evaluate what we read, watch, and share is the cornerstone of responsible citizenship. Strengthening our media literacy skills is essential if we are to detect bias, maintain trust, and safeguard the integrity of public discourse.

Since the rise of generative AI, we have become inundated with content. It’s easier to generate and disseminate information than ever before. When content is cheap, curation and discernment become ever more critical.

And this matters: how information travels influences policy, the bounds of acceptable discourse, and ultimately how our society functions. This means that each of us, whether a social media user, a producer of a major national news program, or just someone chatting with a friend, has an obligation to ensure that what we share is true and contributes to human flourishing.

What is verifiable? 

The reality is that a huge amount of the information we have access to is, well, fake. Media literacy equips people to recognise bias, detect misinformation, and understand the motives behind content creation. Without these skills, we and those around us become vulnerable to manipulation, whether through sensational headlines, deepfakes, or algorithm-driven echo chambers.  

Verifying information before sharing is a critical part of media literacy. Every click and share amplifies a message, whether true or false. For example, deepfakes of Scott Morrison and other politicians have been used to perpetuate investment scams. When inaccurate information spreads unchecked, it can fuel polarisation, erode trust in institutions, and even endanger lives. Taking a moment to analyse the source, seek corroboration from other trusted outlets, and question the credibility of a claim is a small act with enormous impact. It transforms passive consumers into active participants in a healthier information ecosystem.

Whose truth? 

What we see as objective is intrinsically shaped by the voices we are exposed to, and how often. This is as true for our social media feeds as it is for the nightly news. The messages that reach us are all in some way biased, meaning that they carry some kind of embedded agenda. But what we most often mean when we talk about bias is a systematic and repeated filtering or skewing of information to conform to a particular or narrow agenda or worldview.

Because of this, we should be wary of potential bias when issues are covered by individuals or organisations with financial or political interests at stake. For example, jurisdictions around the world are currently wrestling with how the training of large language models (LLMs) relates to fair use of copyrighted work. In Australia, the government recently ruled out a special exception that would have allowed AI models to be trained on Australian works without explicit permission or payment. While public consultation was conducted, prominent voices didn’t all agree that protecting the labour and output of creators should be a priority.

Before the federal government made its determination, powerful members of the technology industry were consulted by journalists for their views on how the interests of AI labs and creators should be balanced. These included people whose financial holdings include investments in the type of AI companies (and training methods) under discussion. Platforming voices with these interests risks framing the terms of the debate as more pro-technology, rather than encouraging a balance of perspectives.

While this specific issue is critical because it affects all of us, it also illustrates how we can practise media literacy responsibly:

  • When we share information, we should consider the interests and ideological alignment of those who originally produced or promoted it.
  • Where possible, we should seek to provide a balanced set of perspectives, ideally one in which any conflicts of interest are clear and disclosed.

Who gets platformed?

There is no free marketplace of ideas. The question of which voices and perspectives are platformed and held up as truth, whether in the media or on our feeds, greatly impacts our own narratives of events. For example, since October 2023, more than 67,000 Gazans have been killed by Israeli forces. While the genocide has received significant media coverage, the perspectives of the people impacted haven’t been equitably represented in mainstream media sources. Recently, a study found that only one Palestinian guest had been booked to share their views on the major US Sunday news shows (which set the national agenda) in the last two years. In the same period, 48 Israeli guests had been given airtime.

If we assume that Israeli guests do not have a monopoly on the truth, this pattern looks alarming. While the study didn’t speak to the views of individual guests, a reasonable person would conclude that such a skew in the identities and affiliations represented presents a rather one-sided view of events on the ground.

In this case, the platforming also speaks to the relative power of the perspectives. Despite the Palestinian community having the greatest lived experience of harm, their voices are effectively silenced. As we decide whose information to share and amplify, we should always consider the following questions:

  • Which individuals or groups have greater access to institutions – media or otherwise – to share their experience? 
  • In the case of conflict, are the opposing forces equally powerful (e.g. in terms of financial resources, alliances, etc.)?
  • Who is marginalised, and what is the impact of not platforming that voice? 

In today’s media environment, we are flooded with information. This means that the responsibility each of us must take within our sphere of influence has increased proportionally. To act as responsible members of our community, we must question which voices we’re highlighting.
