Combatting COVID-19 Disinformation on Online Platforms

"Unless measures are implemented to remove, de-emphasise or otherwise limit the circulation of false and misleading information, it can spread quickly."
In light of the coronavirus (COVID-19) pandemic, the volume of related news and information has increased, leading to what has been called an "infodemic". Disinformation (false or misleading information deliberately circulated to cause harm) and misinformation (false information spread regardless of whether there is an intent to deceive) about COVID-19 are quickly and widely disseminated across the internet, reaching and potentially influencing many people. This policy brief from the Organisation for Economic Co-operation and Development (OECD) examines the problem and identifies 4 key actions that governments and online platforms can take to counter COVID-19 disinformation on the internet.
OECD begins by outlining the harms that online disinformation about COVID-19 can cause. By calling official sources and data into question and convincing people to try bogus treatments, people spreading disinformation have motivated others to ingest fatal at-home cures, ignore social distancing and lockdown rules, and forgo protective masks, thereby undermining the effectiveness of containment strategies, hampering the recovery, and even endangering lives. As effective new treatments and vaccines become available, disinformation could hinder uptake and further jeopardise countries' efforts to overcome COVID-19. The risks extend beyond the realm of health. For example, in Australia, the European Union (EU), and the United States (US), the spread of disinformation framing minorities as the cause of the pandemic has fuelled animosity against ethnic groups, leading to a rise in discrimination and incidents of violence.
Per OECD, understanding how COVID-19 disinformation spreads is essential for crafting effective responses. Thus, the policy brief looks at how online platforms serve as a key channel for the spread of disinformation. For instance, some platforms present news content alongside non-news, ads, and user-generated content, which can make it difficult for users to distinguish reliable news. OECD also notes that COVID-19 disinformation moves both top-down, from politicians, celebrities, and other prominent figures, and bottom-up, from ordinary people. Notably, empirical research shows that top-down disinformation constitutes only 20% of all misleading claims about COVID-19 but generates 69% of total social media engagement. Thus, "influential public figures bear more responsibility in efforts to address disinformation claims and rumours about COVID-19."
In this context, OECD argues that online platforms can be a key channel for curbing the spread of false and misleading information and distributing accurate information about COVID-19, but they should not be expected to act alone. Online platforms, governments, and national and international health organisations need to work together. There are 3 main types of collaborative efforts between platforms and public health authorities:
- Highlighting, surfacing, and prioritising content from authoritative sources - For instance, Twitter features a COVID-19 event page with the latest information from trusted sources on top of users' timelines.
- Cooperating with fact-checkers and health authorities to flag and remove disinformation - Facebook cooperates with third-party fact checkers to debunk false rumours about COVID-19, label that content as false, and notify people trying to share such content that it has been verified as false.
- Offering free advertising to authorities - Facebook, Twitter, and Google have granted free advertising credits to the World Health Organization (WHO) and national health authorities to help them disseminate critical information regarding COVID-19.
On the final point, above, the policy brief explains that there are lingering challenges associated with the platforms' banning of certain ads. On the one hand, Google and YouTube prohibit any content, including ads, that seeks to capitalise on the pandemic, and on this basis they have banned ads for personal protective equipment (PPE). These measures deter scammers but may also make it more difficult for people to find and buy hygiene products online. On the other hand, evidence shared in the policy brief shows that some online platforms and content producers are still profiting from the ads that are displayed on the misleading and false content that escapes the platforms' moderation efforts.
Another challenge concerns ensuring that users' rights to privacy and freedom of expression are preserved. This requires an eye for contextual nuance that is highly challenging for algorithms. Therefore, more human moderators are needed to complement automated approaches. However, as a result of the pandemic and lockdown measures, online platforms have faced a shortage of human moderators and consequently increased their reliance on automated monitoring technologies to flag and remove inappropriate content.
In light of these and other challenges outlined in the brief, key policy recommendations are:
- Support a multiplicity of independent fact-checking organisations - Platforms might consider labelling content that has successfully passed fact-checks by two or more independent fact-checking organisations with a "trust mark", which could be a tool for increasing trust in online information about COVID-19.
- Ensure human moderators are in place to complement technological solutions - Admittedly, this is complicated during the pandemic, because while sending content moderators back to work too soon would be an unacceptable public health risk, asking them to work from home gives rise to privacy and confidentiality concerns.
- Voluntarily issue transparency reports about COVID-19 disinformation - Regular updates from online platforms about the nature and prevalence of COVID-19 disinformation, and the actions they are taking to counter it, could enable better, more evidence-based approaches.
- Improve users' media, digital, and health literacy skills - This would help them navigate and make sense of the COVID-19 content they see online, know how to verify its accuracy and reliability, and be able to distinguish facts from opinions, rumours, and falsehoods. Ways forward are suggested by a partnership between the EU, United Nations Educational, Scientific and Cultural Organization (UNESCO), and Twitter to promote media and information literacy amid the COVID-19 disinformation crisis.
A first step towards a more effective, long-term solution could be to gather evidence systematically. "Regular transparency updates from online platforms about the COVID-19 disinformation that is showing up and being viewed, and how the platforms are detecting and moderating it, would help researchers, policymakers, and the platforms themselves to identify ways to make improvements." OECD proposes that it could provide a forum for developing a common approach across companies and countries, which would facilitate global, cross-platform comparisons.
OECD concludes that platforms should be encouraged to work collaboratively to continue and enhance their practices to combat disinformation in support of successful "re-openings" (post-quarantine or lockdown periods) and as effective new treatments and vaccines become available.
OECD website, July 28 2020.