The Facebook post begins with an all-caps claim: "THEY WILL BE REMOVING YOUR KIDS & YOU CAN'T DO OR SAY ANYTHING ABOUT IT" and ends with the directive, "Copy and paste to spread awareness." In August 2020, this post, and many versions like it, which claimed that children with symptoms of COVID-19 could be quarantined for 14 days without their parents' consent, went viral in the United States. A few clues suggested that the message did not originate in the US. In fact, the original source of the claim was a letter written by the Children's Commissioner for England back in March, expressing her concern that children might be detained under that country's new Coronavirus Act. Regardless of her intentions, the claim was distorted, exaggerated and amplified as it spread among Facebook groups as far as Canada. Why did this particular piece of misinformation gain so much steam? Perhaps because it played on many of our worst fears in our current environment, from distrust in the government to fear for our children's health and safety.
The internet has become the perfect breeding ground and circulatory system for all kinds of untrue or inaccurate claims, thanks in part to the sheer speed and volume of information that it accommodates. In today's digital ecosystem, disinformation distributed by bots circulates alongside dangerous claims about the coronavirus, and memes mingle with conspiracy theories, in what has alternately been labelled an "infocalypse," "information pollution," "information soup," "information warfare" and "the infodemic". Whatever you call it, much like our global environment, our information environment is suffering a crisis that is outpacing human or technological solutions.
As civil society and governments focus on increasing media literacy and pressuring technology companies to change their policies, what can citizens and technology users do in the face of misinformation? To begin to tackle the problem of misinformation we first have to understand the underlying conditions that feed it in our societies, in our information ecosystem, and in ourselves. This means examining the psychological, social, and political reasons why misinformation starts, spreads and takes hold, as well as the technological structures of our information systems that allow it to propagate so readily.
Doubt and misinformation during the crisis
In some ways, we are psychologically predisposed to be vulnerable to misinformation: we inherently tend to seek information that aligns with our worldview and avoid or discredit information that does not. But we are especially susceptible to false or misleading claims in times of crisis and uncertainty. "For [disinformation] to actually work ... you have to have people who are susceptible to doubt, who don't trust what they're being told, or who just want other answers," scholar Emily Bell explains, "because the real answer isn't very helpful to their own lives." The pandemic has created these conditions worldwide. The alarming Facebook post about quarantining children reveals another key psychological aspect behind our vulnerability to misinformation, namely that information – true or not – tends to spread or be shared more when it triggers emotions such as fear or anger – emotions that purveyors of misinformation can easily tap into. Furthermore, our emotional responses, not to mention the glut of information we are faced with, can disincentivize us from fact-checking or looking deeper into posts or messages we read. Instead, we may use "cognitive shortcuts" to verify information, such as whether a post has been endorsed by people we know, rather than trying to fact-check each and every message, video and headline.
Beyond our individual motivations for engaging with misinformation, our current social and political crises create a ripe environment for false information to propagate. Studies show that globally our trust in the media is on the decline. More recent research has shown that, paradoxically, the spread of online misinformation exacerbates this distrust in media. In parallel, politicians and their supporters are creating and promoting misinformation and disinformation as a strategy. A 2019 report found instances of organized social media manipulation in 70 countries linked to governments or political parties. Political actors will also use a tactic that has been described as “the firehose of falsehood” (or what Steve Bannon calls “flooding the zone with shit”) – namely, bombarding the public with so much conflicting, contradictory, inaccurate information across multiple channels that we cannot keep up with what is true and what is not. Though the strategy is most often attributed to Vladimir Putin, it can be seen in the campaigns of politicians worldwide, from Jair Bolsonaro and Rodrigo Duterte to Enrique Peña Nieto. This strategy is employed not just to maintain power but to sow doubt and uncertainty in any kind of information, eventually leading to exhaustion or a belief that the "real truth" can never be found.
The increasing willingness of those in power to weaponize disinformation, and their unwillingness to disavow it, combines with the structure of our current digital information ecosystem to create a perfect storm of misinformation. While the internet and social media increase the speed and volume at which all information can be shared and accessed, social media platforms also tend to promote untruths over truthful messages. One study showed that on Twitter, falsehoods are 70% more likely to be retweeted than accurate news. Our information ecosystem also delivers information in all kinds of packages and from multiple directions: misleading news does not always appear under a headline that can be fact-checked; it might instead arrive as a WhatsApp message forwarded by a friend or family member, in a private Facebook group, or in an ephemeral Instagram story.
There is an increasing belief that the tendency for misinformation to circulate on these platforms is baked into the way the technologies are designed, though the lack of transparency about the algorithms makes it difficult to prove. Research on how these proprietary algorithms determine our news feeds and recommend content to us suggests that many social media algorithms amplify and promote sensationalist and provocative content, as Mark Zuckerberg himself admitted. One former engineer at YouTube also revealed that the platform's recommendation algorithm, which he designed, favours misinformation: “YouTube is something that looks like reality, but it is distorted to make you spend more time online," Guillaume Chaslot told The Guardian. "The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”
Why misinformation can easily be shared on the internet
How, for instance, does Facebook's design facilitate the delivery of a false message about children potentially being quarantined to thousands of parents who might be worried about precisely that issue, and inclined to share it? Thanks to social media, groups of like-minded people are already connected, meaning misinformation quickly finds a home. And similarly, groups of people with specific beliefs or interests can be easily identified and targeted. A 2020 study shows that rumours and false news about COVID-19 are viewed more and go viral on Facebook faster than accurate information. It also revealed that viral misinformation on Facebook originates from a relatively small group of accounts: health misinformation on just 42 Facebook pages managed to generate an estimated 800 million views. The ability for one piece of information to be spread by a single source and yet reach millions – whether it is the source's intention or not – is not unique to Facebook. YouTube, Instagram and Twitter operate the same way, in that a piece of misinformation can be instantaneously amplified by politicians, media and influencers with millions of followers.
For anyone who seeks to spread disinformation, social media platforms allow their messages to be easily targeted to those most likely to believe them, or even those who are undecided, at a relatively low cost. Furthermore, Google and Facebook allow propagators of disinformation to create and disseminate thousands of different messages to test which ones gain the most traction. In the limited cases when a social media platform takes down a false post, the post has usually already reached a wide audience, and can move easily to other platforms – from Reddit to Twitter to Instagram and beyond. To date, these companies' efforts to monitor content and fact-check misinformation on their platforms have been mixed. There are even signs that the very act of fact-checking posts can lead to readers' views becoming more entrenched.
So how can citizens contain the spread of misinformation? If you have ever tried to convince a friend or family member that something they saw online is not true, you will know how complex the problem is and how hard it is to combat. Any solution or remedy will have to address all three underlying aspects of the problem – social, political and technical. That said, if we understand how information – true or false – spreads online, we can address the problem from a less polarizing perspective. Imagine explaining how Facebook's algorithm might have caused your uncle to see that ad or that headline, rather than trying to tell him why it is false. Debunking each coronavirus hoax or political meme might be too hard, but educating or informing people about the technology at work behind the stories – how they came to receive them and what control we have over that process – can be a useful tool.
To explore more about how misinformation spreads online, visit Tactical Tech's online exhibition The Glass Room: Misinformation Edition. For simple tips and tricks on how to recognize and combat misinformation, check out our Data Detox Kit.
Nicholas David Bowman and Elizabeth Cohen, "Mental Shortcuts, Emotion, and Social Rewards: The Challenges of Detecting and Resisting Fake News," in "Fake News: Understanding Media and Misinformation in the Digital Age," eds. Melissa Zimdars and Kembrew McLeod, Cambridge: MIT Press, 2020, pp. 223-233.