Insights & Commentary
Updates on analysis and commentary about platforms’ efforts to counter misinformation about the COVID-19 pandemic are reported in reverse chronological order.
Forbes reported on studies showing that even early in the pandemic, 1 in 4 videos on YouTube contained false or misleading health information. Since then, the proliferation of videos with false, misleading or harmful information about the COVID-19 pandemic has only intensified, despite social media companies’ attempts to rein them in. Unfortunately, very few of the videos containing accurate, verified information came from recognized health authorities like the WHO and CDC. The studies’ main recommendation was that health authorities, academic medical centers and hospitals should recognize YouTube as an important platform for conveying accurate medical information during health crises, using the same urgency and emotion employed by conspiracy theorists and bad actors. However, they also recommended that individual social media users think of YouTube, TV news, and partisan sites as the “sugar and fat” of a news diet: “Here are the protein and vegetables of news: city and national newspapers, public radio and established online publications.”
An article in Slate described Wikipedia as potentially the best future source of the historical record of evolving knowledge about the pandemic, since one of its signature design elements is the citation and retention of every editorial change. This feature inherently addresses the request in the letter sent by 75 signatory organizations in early April asking social media companies and content-sharing platforms to preserve all data they have blocked or removed during the COVID-19 pandemic and make it available for future research. It also means future researchers would be using Wikipedia as a primary source - ironic, given that it is actually a tertiary source and rarely cited for that reason.
A previous report noted that Adam Schiff (D-Pasadena), the Chairman of the House Intelligence Committee, had written letters to several media outlets asking them to describe the actions they were taking to address coronavirus misinformation on their respective platforms. He had asked them to proactively inform users who engage with harmful coronavirus-related misinformation before it can be removed and to direct them to authoritative, medically accurate resources. This week he released their written responses. He said, “I appreciate the steps each platform is taking to reduce Coronavirus misinformation and connect users with authoritative health resources. While it is more effective to limit engagement with harmful content and provide context in real time and before users interact with it, that is not always possible given the scale of these platforms. When unwitting users do engage with false content that could harm them or their families, they should be informed. As we look ahead to this year’s election and beyond, the platforms’ investment and responsiveness to misinformation about Coronavirus will be gravely tested, and the health of our society and democracy along with it.”
MIT Technology Review and NPR - and then many others - reported on a study “associated with” Carnegie Mellon that concluded nearly half of the Twitter accounts pushing to reopen America may be bots. The original “statement” about the research described a huge upswell of Twitter bot activity since the start of the coronavirus pandemic, amplifying medical disinformation and the push to reopen America. Across US and foreign elections, natural disasters, and other politicized events, the researchers noted, the level of bot involvement is normally between 10 and 20%. But in this study of 200 million tweets, the researchers found that bots may account for between 45 and 60% of Twitter accounts discussing COVID-19. The researchers have begun to analyze Facebook, Reddit, and YouTube to understand how disinformation spreads between platforms. They described the work as being “in the early stages” but already revealing some unexpected patterns.
Yes, but… Several researchers and analysts specializing in misinformation tracking and analysis quickly contended that the “statement” was actually a press release, with neither a peer-reviewed study nor a preprint available for review (more on preprints below). The release offered no description of data sets or methodology, and came from two researchers whose prior method for classifying accounts as bots had been criticized as loose. Twitter also contested the conclusions in the report.
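The critics’ point about “loose” bot classification can be made concrete. A common shortcut is to score accounts on raw activity volume and account age; the sketch below is entirely hypothetical (the thresholds and weights are ours, not the researchers’) and illustrates why such a heuristic sweeps up highly active humans along with automation.

```python
# Hypothetical activity-threshold bot heuristic of the kind critics
# call "loose": it cannot distinguish automation from an enthusiastic
# human who simply posts a lot. Thresholds below are illustrative only.
def naive_bot_score(tweets_per_day: float, account_age_days: int) -> float:
    """Score in [0, 1]; higher = more 'bot-like' under this crude rule."""
    volume_signal = min(tweets_per_day / 50.0, 1.0)   # 50+/day maxes out
    newness_signal = 1.0 if account_age_days < 30 else 0.0
    return 0.7 * volume_signal + 0.3 * newness_signal

# A prolific human news junkie crosses a typical 0.5 "bot" threshold,
# just as obvious automation does:
journalist = naive_bot_score(tweets_per_day=45, account_age_days=2000)
spam_bot = naive_bot_score(tweets_per_day=200, account_age_days=10)
```

Under this rule the long-established but prolific account still scores above 0.5, which is exactly the kind of false positive a methodology section would need to address.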
In a report released on May 20, the strategic communications division of the European diplomatic corps, the European External Action Service (EEAS), noted a drop in volume of COVID-19 misinformation on digital platforms but said "it is clear that much more needs to be done." "The work of independent media and fact-checkers is crucial to deliver reliable and authoritative information about the pandemic," said the report, which covers observations and assessments from the period of April 23 to May 18. It also noted that threats and harassment against fact-checkers and fact-checking organizations are being observed. The publication was a follow-up to three previous reports in March and April in which EU monitors identified a "trilateral convergence of disinformation narratives" being promoted by China, Iran, and Russia on the pandemic to undermine public trust. (For anyone with a particular interest in misinformation themes from foreign state and non-state actors, this is a good source.)
Executives from Twitter, Facebook and Google have been recalled by the British Parliament for the first week of June to provide more information about the steps they are taking to stamp out coronavirus misinformation. The chair of the U.K.’s Digital, Culture, Media and Sport Committee accused the technology companies of failing to show “clarity and openness” in an earlier evidence session last month. The committee “remains concerned” about the role social media influencers, including celebrities and politicians, have played in promoting conspiracy theories about the pandemic online.
Steak-umm, the decades-old maker of thin-sliced frozen beef used in cheesesteak sandwiches, has garnered considerable attention on social media during the coronavirus crisis with a Twitter thread warning people to question their news sources amid a torrent of misinformation about the virus. The company implored people to “be careful in our media consumption” and reminded the public that it is crucial to “follow a range of credentialed sources for both breaking news and data collection.” The meat purveyor’s Twitter thread has generated about 13,000 retweets, over 48,000 likes, and hundreds of comments - and a supposedly “dead brand” (by its own admission) has gotten a lot of positive attention.
A research study conducted at the Institute for Health System Innovation & Policy at Boston University leveraged volunteer fact-checking to identify misinformation about COVID-19 on social media. Identifying emerging health misinformation about COVID-19 is a challenge because its form and content are often unknown in advance. However, many social media users correct misinformation when they encounter it. These researchers implemented a strategy that detected emerging health misinformation by tracking replies that appear to provide accurate corrective information. The strategy was more efficient than keyword-based search in identifying COVID-19 misinformation about antibiotics and a cure. It is one of several studies showing the potential value of crowd-sourcing and community engagement to counter misinformation.
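The reply-tracking strategy can be illustrated with a minimal sketch. This is our own simplification, not the BU team’s implementation: the correction cues and data shapes below are hypothetical, and a real system would use learned classifiers rather than a phrase list.

```python
# Minimal sketch of reply-based misinformation detection, assuming a
# stream of (reply_text, parent_post_id) pairs. The correction cues
# are illustrative, not the signals used in the BU study.
from collections import Counter

CORRECTION_CUES = (
    "this is false",
    "fact check",
    "that's not true",
    "misinformation",
    "debunked",
)

def is_corrective(reply_text: str) -> bool:
    """Heuristic: does this reply appear to correct its parent post?"""
    text = reply_text.lower()
    return any(cue in text for cue in CORRECTION_CUES)

def flag_candidates(replies, min_corrections=2):
    """Return parent posts that drew several corrective replies,
    ordered by how many corrections they attracted."""
    counts = Counter(
        parent_id for text, parent_id in replies if is_corrective(text)
    )
    return [pid for pid, n in counts.most_common() if n >= min_corrections]
```

Posts that attract multiple corrective replies become candidates for human review. The signal is noisier than keyword search, but, as the study notes, it does not require knowing the misinformation’s wording in advance.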
Several outlets have accompanied articles about the platforms’ role in countering misinformation with advice for digital users. One of the most recent and simplest offered these five measures to prevent the spread of misinformation (or to prompt the platforms to do so):
Be critical when you look at social media.
Don’t leave false information in your online networks. You can politely ask the person who shared it to remove it.
Report the false information to the platform administrators.
When in doubt, take the time to verify the shared information.
Make more noise than people who share false information.
The Washington Post reported on the myriad new workarounds being used to thwart companies taking a tougher line on misinformation during the pandemic. “More and different actors” are participating in tactical innovation, including using work productivity tools like Google Drive and exploiting the Internet Archive, a critical resource for researchers, as noted in last month’s report. These actors embed links to Drive or the Archive in social media posts, repost excerpts of removed videos with the most objectionable parts edited out, or use CDC links to try to get content upranked by algorithms.
Mozilla published a thoughtful piece highlighting key challenges and opportunities to improve platform regulation in regard to content moderation; it included the limitations of automation and filtering, the gaps in transparency and consistency of rules, and some of the people and organizations surfacing social, technical and legal alternatives. Mozilla used a storytelling approach to help illustrate the need for balance among competing interests, including public pressure, the desire for regulation, and the impact of content moderation on the physical and mental health of human moderators.
An article in Brookings Institution’s TechStream raised the question of whether the near-blanket liability protections granted to social media companies for content posted on their platforms should apply to questions of public health. Their conclusion, based on past regulation of deceptive drug ads: “Chipping away at liability protections has emerged as the favorite tool of Washington to hold big platforms to account, but it is a blunt instrument that legislators should be wary of deploying...it could prove hard to enforce and a disaster to implement.” Their rationale: 1) it is unclear that FOSTA-SESTA, the last attempt to temper Section 230, worked as intended; there is a good argument that the bill has actually endangered sex workers by driving their business even further underground, and it has not stopped the ads; 2) the boundaries of public health information are blurry - who will decide where wellness begins and health ends, particularly when top administration officials are themselves spreading health misinformation?; and 3) the worst offenders are sometimes older forms of media, including partisan media, and any new regulation would have to encompass them too in order to really protect public health - something the current administration and Republicans in Congress would be loath to do.
Preprints - non-peer-reviewed reports on scientific studies - are being weaponized during the pandemic. Preprints are meant to help scientists find and discuss new findings in real time, which is especially important during a pandemic. They generally carry a warning label: “This research has yet to be peer reviewed.” To a scientist, this means it’s provisional knowledge — maybe true, maybe not, and not yet passed through the primary means of academic quality control. But for partisan news media, anything carrying the mark of a respected institution counts as knowledge, particularly when it reinforces the day’s talking points.
Mainstream media coverage has added to the problem of digital content moderation and compounded the challenges of conveying accurate health information, according to the Harvard Global Health Institute. At many major news outlets, reporters and editors with no medical or public health training were quickly reassigned to cover the unfolding pandemic and are scrambling to get up to speed with complex scientific terminology, methodologies, and research, and then identify, as well as vet, a roster of credible sources. Because many are not yet knowledgeable enough to report critically and authoritatively on the science, they can sometimes lean too heavily on traditional journalism values like balance, novelty, and conflict. In doing so, they lift up outlier and inaccurate counterarguments and hypotheses, unnecessarily muddying the water. Then there is the problem of political bias. This has been especially true at right-leaning media outlets, which have largely repeated news angles and viewpoints promoted by the White House and the president on the progress of the pandemic and the efficacy of the administration’s response, boosting unproven COVID-19 treatments and exaggerating the availability of testing and safety equipment and prospects for speedy vaccine development.
One in four of the most popular English-language YouTube videos about the coronavirus contains misinformation, according to a study in the journal BMJ Global Health. For the study, researchers from the University of Ottawa analyzed 69 of the most widely viewed English-language videos from a single day in March and found that 19 contained non-factual information, garnering more than 62 million views. Misinformation, according to the researchers, included any false information on the transmission, symptoms, prevention strategies, treatments or epidemiology of the coronavirus. Internet news sources were most likely to misinform, though entertainment, network and internet news outlets were all sources of misinformation, according to the study. None of the most popular professional and government videos contained misinformation.
Researchers at the George Washington University developed a first-of-its-kind map to track the vaccine conversation among 100 million Facebook users during the height of the 2019 measles outbreak. The new study and its "battleground" map reveal how distrust in establishment health guidance could jeopardize public health efforts to protect populations from COVID-19 and future pandemics through vaccinations. They discovered that, while there are fewer individuals with anti-vaccination sentiments on Facebook than with pro-vaccination sentiments, there are nearly three times the number of anti-vaccination communities on Facebook than pro-vaccination communities. This allows anti-vaccination communities to become highly entangled with undecided communities, while pro-vaccination communities remain mostly peripheral. In addition, pro-vaccination communities which focused on countering larger anti-vaccination communities may be missing medium-sized ones growing under the radar. The researchers also found anti-vaccination communities offer more diverse narratives around vaccines and other established health treatments -- promoting safety concerns, conspiracy theories or individual choice, thus increasing the chances of influencing individuals in undecided communities. Pro-vaccination communities, on the other hand, mostly offered monothematic messaging typically focused on the established public health benefits of vaccinations. In their study, the GW researchers proposed several different strategies to fight against online disinformation, including influencing the heterogeneity of individual communities to delay onset and decrease their growth and manipulating the links between communities in order to prevent the spread of negative views. 
"Instead of playing whack-a-mole with a global network of communities that consume and produce (mis)information, public health agencies, social media platforms and governments can use a map like ours and an entirely new set of strategies to identify where the largest theaters of online activity are and engage and neutralize those communities peddling in misinformation so harmful to the public," Dr. Johnson said.
A survey conducted by Flixed, a site that helps viewers manage their media consumption, reported that Facebook was the most-used platform for news related to the coronavirus. In the survey, 35.8% of respondents used Facebook as their primary social media platform for news about the pandemic, followed by Twitter (17.0%), YouTube (16.3%), Reddit (12.4%), and other platforms (12.7%); 5.8% reported that they do not use social media. However, a majority (57.6%) of those who used Reddit as their primary pandemic news source reported worsening mental health since the start of 2020. Among Facebook users, 41.6% reported a decline in mental health, as did 43.0% of Twitter users, 32.2% of YouTube users, and 32.5% of those on other platforms. The lowest rate of worsening mental health during the pandemic (26.2%) was reported by those who did not use social media.
An article in Axios highlighted the challenges the digital platforms face in distinguishing coordinated information warfare (disinformation) from false claims spread by people who sincerely believe them (misinformation). Although Facebook, YouTube and Twitter have gotten better at spotting and stopping disinformation, two factors make it challenging to “stem the coronavirus misinformation tide”: 1) it's a new disease and there's a lot we don't actually know for sure, making it hard for content moderators to draw clear distinctions between what's true and what's not; and 2) enough business and political leaders have lined up in opposition to the scientific consensus that fringe positions have moved into the mainstream.
An article in Brookings Institution’s TechStream maintains that the greatest information problem introduced by the pandemic is less one of moderation (of identifying and removing content that is demonstrably false and/or harmful), and more one of mediation (identifying what information is credible, when, and how to communicate these changes). By identifying good sources of information and highlighting them, platforms can reduce the need to address bad information that is quickly gaining visibility and engagement over algorithmically determined spaces. However, we must ask whether we trust tech companies to play this role of reconciling the user-generated internet with hierarchies of knowledge production.
Researchers in the Technology and Social Change Research Project at Harvard Kennedy’s Shorenstein Center found that pandemic conspiracy theorists are using the Internet Archive’s Wayback Machine to promote “zombie content” that evades moderators and fact-checkers on the digital platforms. Even after content is initially removed from platforms, versions of it saved on the Internet Archive’s Wayback Machine can flourish on Facebook with high engagement with its links. Some people use the Internet Archive to evade blocking of banned domains in their home country, but it is not simply about censorship. Others are seeking to get around fact-checking and algorithmic demotion of content. The research shows one way harmful conspiracies permeate private pages and groups on Facebook, and that health misinformation circulates in spaces where journalists, independent researchers, and public health advocates cannot assess it or counterbalance these false claims with facts.
An army of bot accounts linked to an alleged Chinese government-backed propaganda campaign is spreading disinformation on social media about coronavirus and other topics, according to a London-based researcher. The accounts have been used to promote content attacking critics of the Chinese government and to spread conspiracy theories blaming the U.S. for the origins of the virus, according to Benjamin Strick, who specializes in analyzing information operations on social media websites. Based on the number and velocity of the campaign on Facebook and Twitter he believes it is a state-backed Chinese campaign.
International nonprofit research center First Draft noted that a unique challenge facing journalists and fact-checkers around COVID-19 is that “what was true last month was true then, but now we know something different. Each week there is something new, and this is really challenging to communicate to the public.” Another challenge is that content once focused in online communities has now gone into closed messaging apps like WhatsApp, Facebook Messenger, Signal and Telegram. They report “better behavior and citizenship from all of the [major] platforms over the past six months” including efforts from the platforms to raise up quality information and root out problematic posts. First Draft asks for more collaboration across platforms and transparency about their rules and their enforcement of the rules.
The Washington Post reported on new research that indicates “observational correction” - correction that occurs on social media where people can observe other people being corrected - can be a highly effective strategy for countering misinformation on digital platforms. The research also found that people are engaging in the practice, and that attitudes about this kind of correction were highly positive. The latter two points were true for people across the political spectrum. The research suggests the most effective response when witnessing misinformation is to provide credible information that the misinformation is incorrect — and offering facts in response. A fact check from an independent journalistic organization or information from a credible organization like the American Medical Association is particularly effective. Making these types of corrections even if someone else has already done so reinforces the true message to those who see it. It’s also important to note that people generally agree that correction is appropriate, and that it’s a shared responsibility. This may make people on social media more comfortable with correcting others, and more likely to engage in it more often. The approach likely works because 1) the correction occurs in proximity (temporally and spatially) to the original misinformation, increasing the likelihood that people hadn’t had a chance to absorb the misinformation at all and 2) witnessing someone else being corrected may be less threatening than being corrected directly, but with all the same benefits. When highly trusted groups like the CDC directly respond to users sharing misinformation on social media, people are likely to believe the correction. Correction can also come from social media platforms themselves. For example, Facebook uses its “related articles” function to display debunking information from third-party fact-checkers.
A peer-reviewed study about how search engines disseminate information about COVID-19 found that different search engines (Baidu, Bing, DuckDuckGo, Google, Yandex, and Yahoo) prioritize specific categories of information sources, such as government-related websites or alternative media - even in non-personalized search results. It also found that source ranking within the same search engine is subject to randomization, which can result in unequal access to information among users. The degree of randomization varied and could mean that different users are exposed to different information - detrimental when society urgently needs access to consistent and accurate information. If we assume that a major driver of randomization is the maximization of user engagement by testing different ways of ranking search results, it would mean companies’ private interests directly interfere with people’s right to access accurate and verifiable information. The study raises important questions: what is “good” information, who should decide on its quality, and can these decisions be applied univocally?
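One way to quantify the ranking randomization the study describes is to compare the result lists returned for the same query across sessions. The metric choice below (top-k Jaccard overlap) is our own illustrative simplification, not the study’s methodology, and the domain names are hypothetical.

```python
# Minimal sketch: measure how much two result rankings for the same
# query differ, using top-k set overlap (Jaccard). 1.0 means identical
# top-k sources; lower scores indicate more randomization.
def topk_overlap(results_a, results_b, k=10):
    """Jaccard overlap of the top-k results of two rankings."""
    a, b = set(results_a[:k]), set(results_b[:k])
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def mean_pairwise_overlap(rankings, k=10):
    """Average top-k overlap across all pairs of observed rankings
    (e.g., the same query issued from different user sessions)."""
    pairs = [
        topk_overlap(rankings[i], rankings[j], k)
        for i in range(len(rankings))
        for j in range(i + 1, len(rankings))
    ]
    return sum(pairs) / len(pairs) if pairs else 1.0
```

A mean pairwise overlap well below 1.0 for a fixed, non-personalized query would indicate the kind of unequal exposure to sources the study warns about; a fuller analysis would also account for rank order, not just set membership.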
Beginning May 4, all the major social media networks struggled to remove or stop the spread of a particularly viral video featuring a well-known vaccine conspiracist, which contains false, misleading or unproven claims about COVID-19. The roughly 26-minute “Plandemic Movie” video claims to be an excerpt of a larger documentary to be released this summer and contains claims about the origins of the virus and how it spreads. In a matter of hours, the video became one of the most widespread pieces of coronavirus misinformation, drawing millions of views across major technology platforms. While it received an enormous amount of media attention, several sources noted that the “plandemic” conspiracy theory is small in its online spread in comparison to the Bill Gates conspiracy theory and the disinfectant conspiracy theory.
A Washington Post research team working with a student at Stanford University mapped the spread of a particular story that originated with an article on Medium; the article said the health risks of COVID-19 were overstated and that social distancing would hurt the economy. The spread of the story demonstrated some of the challenges associated with countering misinformation:
A small group of key social media influencers can amplify the spread of misleading information and boost the long-term profile of previously obscure authors.
Social media platforms like Twitter interact quickly with other media like cable news; Fox News personalities played a key role in spreading the story.
Most important, science is being politicized. The article was originally published by a Medium channel associated with the Lincoln Network, a conservative nonprofit organization, and spread by media personalities with the same political orientation. While the downstream spreaders and amplifiers probably weren’t intentionally sharing misleading information, their choices presumably reflected their political priorities.
On April 30, Rep. Adam Schiff (D-CA) sent a letter to Google, YouTube, and Twitter urging the platforms to explicitly notify users when they’ve engaged with misinformation about the coronavirus. Schiff wrote to Google CEO Sundar Pichai, YouTube CEO Susan Wojcicki, and Twitter CEO Jack Dorsey, saying it’s not enough to remove or downgrade harmful or misleading content about the pandemic, but that it’s critical to ensure that users who saw the content have access to correct information as well. Facebook recently announced plans to display messages to any users who have engaged with harmful coronavirus-related misinformation.
On April 30, the New York Times reported that Facebook, Twitter and YouTube have declined to remove President Trump’s statements in a White House briefing that disinfectants and ultraviolet light were possible treatments for the virus, saying he did not specifically direct people to pursue the unproven treatments. His remarks immediately found their way onto Facebook, Instagram and other social media sites, and people rushed to defend the president’s statements as well as mock them. That led to a mushrooming of other posts, videos and comments about false virus cures with UV lights and disinfectants that the companies have largely left up.
Wikipedia can be a guide to the big commercial platforms on how to moderate misinformation, according to an article in Wired. While all the major platforms are trying to cleanse their sites of dangerous disinformation, they are doing so by relying in part on familiar, passive tools like acting when others flag dangerous content. Wikipedia shows that extreme circumstances, especially when related to public health, require different, more stringent rules, not better application of existing rules. For example, “you have to cite everything you write”, meaning legitimate sourcing filters out mistakes and lies.
An investigation co-led by BBC Click and the UK counter-extremism think-tank Institute of Strategic Dialogue indicated how both extremist political and fringe medical communities have tried to exploit the pandemic online. A review of 150K public Facebook posts sent by 38 far-right groups and pages since January identified five distinct communities, united by the topic of discussion: immigration, Islam, Judaism, LGBT, and Elites. For the first four of these, the scale of activity hadn't increased in volume overall since the lockdown. However, the fifth and largest community - the one concerning the "elites" like Jeff Bezos and Bill Gates - had shown a significant spike in activity during the lockdown. Discussions included the relationship of these "elites" to the "deep state", and their alleged role in causing the pandemic or using a lockdown as a tool of social control.
A US national survey conducted during the early days of the COVID-19 spread showed that, above and beyond respondents’ political party, mainstream broadcast and print media use (e.g., NBC News, the New York Times) correlated with more accurate information about the disease’s lethality and/or more accurate beliefs about protection from infection. Conservative media use (e.g., Fox News) correlated with conspiracy theories, including believing that some in the CDC were exaggerating the seriousness of the virus to undermine the presidency of Donald Trump. Exposure to online outlets such as Google News and Yahoo News correlated with lower belief in the efficacy of regular hand washing and avoiding contact with symptomatic individuals. Exposure to sources such as Facebook, Twitter or YouTube was positively correlated with belief in the efficacy of vitamin C, the belief that the CDC was exaggerating the threat to harm President Trump, and the belief that the virus was created by the US government. The report put forward five recommendations:
Proactively put forward communication about disease prevention before a crisis.
Focus on debunking beliefs that are considered salient in the population (10% or more) in order to avoid inadvertently increasing awareness of the problematic claim.
Establish a baseline for monitoring social media interventions.
Place public service announcements, encourage hyperlinks to the CDC information pages, and seek interviews on social (and conservative) outlets whose audiences are less knowledgeable, more misinformed, or more accepting of conspiracy theories.
Encourage or fund newspapers to take down paywalls on coverage of medical crises.
According to new research from Carnegie Mellon University, nearly half the “people” talking about the coronavirus pandemic on Twitter are not actually people, but bots. They are feeding Twitter with harmful, false story lines about the pandemic, including some inspiring real-world activity, such as the theory that 5G towers cause COVID-19, or state-sponsored propaganda from Russia and China that falsely claims the U.S. developed the coronavirus as a bioweapon or that American politicians are issuing “mandatory” lockdowns. In many ways, the bots are acting in ways that are consistent with the story lines that are coming out of Russia or China, according to researchers. The Carnegie Mellon team identified more than 100 false narratives relating to coronavirus worldwide, which they divided into six different categories: cures or preventative measures, weaponization of the virus, emergency responses, the nature of the virus (like children being immune to it), self-diagnosis methods, and feel-good stories, like dolphins returning to Venice’s canals.
China, Iran, and Russia are each using the COVID-19 pandemic as an opportunity to spread disinformation related to the United States, according to a State Department report viewed by Business Insider. The messaging from each government aligns with the others, the report says. The narratives include the baseless claims that the coronavirus is an American bioweapon and is being spread by US troops, that the US is scoring political points from the crisis, and that all three governments - unlike the US - are managing the crisis well, according to the document. The report, which is produced by the department's Global Engagement Center, is not public. It makes the case that propaganda from the three governments has converged as the coronavirus has spread. Some of the information is produced by state-run media, and some has been put out by the governments directly, the report says.
On the April 28 edition of his show on Fox News, Tucker Carlson complained about the "ludicrous" measures taken by leading technology companies to combat misinformation about the coronavirus pandemic. Referring primarily to YouTube’s removal of a highly circulated video featuring Dr. Daniel Erickson, in which Erickson alleged that doctors were encouraged to link deaths to COVID-19 to amplify concerns about the pandemic, Carlson accused the service of “cracking down on free expression” and “banning dissent” from medical orthodoxy. Carlson maintained that “the big technology companies are using this tragedy to increase their power over the population”.
One of the most widely circulated stories of the week, from the Associated Press, took a positive view of platforms’ efforts to counter COVID-19 misinformation. Under the headline “Tech companies step up fight against bad coronavirus info”, the article noted, “Facebook, Google and others have begun using algorithms, new rules and factual warnings to knock down harmful coronavirus conspiracy theories, questionable ads and unproven remedies that regularly crop up on their services — and which could jeopardize lives. Health officials, critics and others who have long implored the tech companies to step up their response to viral falsehoods have welcomed the new effort, saying the platforms are now working faster than ever to scrub their sites of coronavirus misinformation”.
As noted in previous editions, media and opinion writers have begun to speculate about whether the platforms’ efforts to counter misinformation about COVID-19 could - or should - be expanded beyond the epidemic. For example, this week in Politico, Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, noted, “While this system might be imperfect, it shows that fact-checking is possible, and it works...what I’m urging is an expansion of fact-checking, not to serve a partisan agenda, but to limit the amount of mis- and disinformation polluting American public life. That’s just a public service.”
In The Atlantic, Evelyn Douek, an Affiliate at Harvard’s Berkman Klein Center for Internet and Society, notes that the platforms have rewritten “a silent constitution that bound users and the platforms themselves”, exposing just how much power they can exercise when they decide to do so. First, many platforms have adopted new rules specifically addressing coronavirus-related content. Second, enforcement during the state of emergency is swift and blunt, largely the work of automated tools. Third, even with these sweeping new rules and blunter enforcement, platforms have been suspending their usual due-process protections. She notes that while the tech companies’ actions in the current crisis may deserve praise, they also raise important questions about checks or constraints, whether these emergency powers are temporary, and the role of oversight.
U.N. Secretary-General Antonio Guterres said the world is fighting a “dangerous epidemic of misinformation” about the coronavirus and announced a UN initiative to disseminate the facts and science about the pandemic. The United Nations Communications Response initiative will “flood the internet with facts and science while countering the growing scourge of misinformation — a poison that is putting even more lives at risk.”
Ad tech executives - representing the marketers whose advertising fuels the platforms’ business model - diverge on the responsibility and capability of Google and Twitter to monitor the veracity of the content they host during the crisis. Some execs said brands should seek safer environments than the social giants, others expressed a measured amount of trust in Google and Twitter’s capacity to root out fake news, and still others said preventing false content is impossible and up to readers to sort through. One VP of insight said, “As an advertiser, the safer bet is to partner with trusted and reliable sites that have teams of journalists vetting every piece of content”. A media VP claimed Google and Twitter are aware of the harm misinformation can cause on this issue, meaning advertisers can trust that coronavirus-related content will be “closely monitored,” though another noted that the content vetting process will be imperfect. But an agency COO referred to the “fundamental insurmountability” of eliminating all false content on platforms teeming with information when the truth itself changes day to day.
One of the most-reported updates of the week was Facebook’s blog post (noted in the chart below) that it would start showing messages in News Feed to people who have liked, reacted to or commented on harmful misinformation about COVID-19 that was subsequently removed. It contradicted past reports in which the company said it was “challenging” to reliably identify and notify everyone who had been incidentally exposed to (in that case) foreign propaganda. The design of the message came under considerable criticism for being too soft. It says: “Help friends and family avoid false information about COVID-19.” It then invites users to share a link to the WHO’s myth-busting site and includes a button that takes them to the site directly. Users are never directly told they have engaged with misinformation, or why they got the message, and the WHO site currently debunks 19 different hoaxes. However, Facebook maintains the “nudging” approach is based on past research showing that people were more likely to share posts that had been labeled “disputed”, and says the design and language may evolve. The debate shows the challenges of designing methods of countering misinformation without research or evidence of their effectiveness.
Several outlets reported on a study by activist organization Avaaz specifically focused on Facebook’s response to the infodemic. Avaaz examined over 100 pieces of misinformation content in six different languages about the virus that were rated false and misleading by reputable, independent fact-checkers and could cause public harm. After noting “the commendable efforts Facebook’s anti-misinformation team has applied to fight this infodemic”, and that “the company's efforts to combat the problem had steadily improved”, Avaaz claimed the platform’s current policies were insufficient and did not protect its users. Specifically, Avaaz found that Facebook’s approaches are 1) incomplete; 2) delayed; 3) English language-focused; and 4) unable to address how individual stories (“mother stories”) mutate and spread (“babies”). They shared their findings directly with Facebook along with recommendations for more overt notification to users who have interacted with misinformation. Avaaz’s report was considered instrumental in Facebook’s decision to add messages in News Feed (see above), but Avaaz noted that “the step doesn't reflect the full gamut of what we would like to see them do."
Avaaz also published an academic study that they say proves that providing corrections from fact-checkers to social media users who have seen false or misleading information can significantly decrease belief in disinformation. The research, commissioned by Avaaz, was conducted by Dr. Ethan Porter of George Washington University and Dr. Tom Wood of Ohio State University, authors of “False Alarm: The Truth About Political Mistruths in the Trump Era”, which also showed corrections can reduce the share of inaccurate beliefs. In the test, which used a hyper-real visual model of Facebook, “correcting the record” reduced belief in disinformation by half, on average, and worked across party affiliation and political ideology.
Avaaz maintains that showing fact-checked corrections to every single user exposed to viral disinformation is currently one of the strongest defenses we have against coordinated disinformation campaigns.
A group of scholars and nonprofit organizations have asked web platforms to keep track of the content they’re removing during the coronavirus pandemic so they can make it available to researchers studying how online information affects public health. The signatories — including Access Now, the Committee to Protect Journalists, and EU DisinfoLab — sent an open letter to social media and content sharing services, urging them to preserve data even as they remove misinformation. The letter urges companies to preserve content that is removed from the service, including accounts, posts, and videos. It also encourages them to keep records of the removal process itself, like whether a takedown was automated or received human oversight, whether users tried to appeal the takedown, and whether content was reported but left online. Some of that information could be included in public transparency reports, and other pieces could be released specifically to researchers.
Research results in a working paper from the Becker Friedman Institute of Economics at the University of Chicago “indicate that provision of misinformation in the early stages of a pandemic can have important consequences for how a disease ultimately affects the population”. The study, focused on news coverage of the novel coronavirus by Hannity and Tucker Carlson Tonight, both on Fox News, showed that greater exposure to Hannity, who originally dismissed the risks associated with the virus relative to Tucker Carlson Tonight, is associated with a greater number of county-level cases and deaths.
In an effort to combat misinformation, doctors and health professionals have taken to social media. Using popular platforms such as TikTok, Snapchat, Instagram, and Facebook, health professionals are increasingly creating engaging content that also helps spread accurate health information.
The U.S. State Department has assessed that Russia, China and Iran are mounting increasingly intense and coordinated disinformation campaigns against the U.S. relating to the outbreak of the new coronavirus. All three countries are using state-controlled media, social media and government agencies and officials to disseminate information to domestic audiences and global audiences alike that denigrates the U.S. and spreads false accounts, the State Department report says.
All updates this week were listed under individual platforms.
Reuters reported on April 7 that India has told Facebook and TikTok to remove users who spread misinformation about the coronavirus. India is concerned about videos intended to mislead Muslims. The request came after Voyager Infosec, a Delhi-based digital analytics firm, identified a pattern of misinformation videos using religious beliefs to justify defying health advisories.
On April 7, The Washington Post reported on a study of 225 pieces of English-language misinformation (88% of which appeared on social media platforms) rated false or misleading by fact-checkers between January and the end of March 2020, conducted by the Oxford Internet Institute, the Reuters Institute for the Study of Journalism and the Oxford Martin School. The researchers found that misinformation about COVID-19 comes in many different forms, from many different sources, and makes many different claims. It most frequently reconfigures existing or true content rather than fabricating it wholesale, and where it is manipulated, it is edited with simple tools (not deep fakes or other AI-based tools). High-level politicians and celebrities produced or spread only 20% of the misinformation, but that content attracted a large majority of all social media engagements. The most common claims of misinformation concern the actions or policies that public authorities are taking to address COVID-19, and the spread of the virus through individual communities (e.g., date of first case or claims of where it came from). There was very significant variation from company to company: while 59% of false posts remained active on Twitter with no direct warning label, the number was 27% for YouTube and 24% for Facebook. That was the primary focus of the Post’s coverage.
In the April 6 episode of GZero Media, Ben Smith, media columnist at The New York Times and former head of Buzzfeed News, described the platforms’ efforts to counter mis- and disinformation as “aggressive” and “confident”, in part due to the “clarity” of direction provided by world health organizations. But he also said he believes they will return to a position of “neutrality” once the obvious risk of harm has passed. He agreed with host Ian Bremmer that the platforms would emerge from the crisis as “trusted institutions”, particularly in contrast to the “dysfunction” of government and “polarization” of traditional media. In the same episode, Danny Rogers of The Global Disinformation Index described the two major themes of current misinformation as “bunk cures” and “weaponized conspiracy theories”, including some specifically designed to inflame racial tensions. His reference to coronavirus as “the Super Bowl of disinformation” has also been widely quoted.
An April 3 article in TechCrunch also made the case that the tech companies “may face a respite from focused criticism, particularly with the industry leveraging its extraordinary resources to pitch in with COVID-19 relief efforts”. The article noted that platforms have been “uncharacteristically transparent” about the shifts the pandemic is creating within their own workflows, including greater use of artificial intelligence, and speculated that social media companies will “have a fresh appreciation for the value of human efforts”. Unfortunately, we may ultimately need to “take companies’ word” for the effectiveness of their efforts.
On April 2, Free Press circulated a petition asking recipients to “immediately tell Facebook, Twitter and YouTube to save lives by doing everything they can to shut down disinformation now” (this was separate from a letter asking the FCC to provide guidelines for broadcast coverage of the pandemic). Free Press asked the platforms to “invest more in people — not algorithms — to identify and remove harmful content and to equip moderators to do this work safely from home”.
Regulators, privacy advocates and others in the U.S. (and Europe) are wrestling with the tension between consumer privacy and the degree of surveillance required to address the pandemic, including for critical contact tracing and assessment of the effectiveness of policies designed to fight it. Some advocates appear willing to make trade-offs (assuming the data is essential and effective, as well as anonymized and aggregated) but worry about data retention and use beyond the pandemic, as well as issues of equity and discrimination.
In general there is strong support for the platforms’ reliance on health organizations such as the WHO and CDC to guide their content moderation practices. However, an opinion from the Cato Institute highlighted the potential pitfalls of “outsourcing truth” to external authorities as a way of legitimizing moderation practices. It used two examples from Twitter: when some users advocated wearing masks contrary to those authorities’ guidance, Twitter did not remove their posts. Twitter also tolerated a falsehood from a Chinese government official because it was unlikely to cause immediate harm. Both decisions to tolerate posts that contradicted expert beliefs came from Twitter rather than external authorities. Cato called for more transparency and justification for how Twitter’s own values inform its use of expert knowledge.
New research from Pew Research Center shows that Americans who get news mostly from social media are least likely to follow COVID-19 coverage; most likely to say they have seen at least some misinformation about the pandemic; among the poorest at answering a question about when a vaccine might be available; more likely than most to say that news sources have exaggerated the threat posed by the virus; and slightly more likely than those who turn to other pathways for their news to say that the virus was created in a lab, either intentionally or unintentionally.
While some of the platforms are gaining praise for their efforts to manage misinformation, there are also calls for them to do more. Bhaskar Chakravorti, Dean of Global Business at The Fletcher School at Tufts University and an economist who tracks digital technology’s use worldwide, identified three ways to evaluate the companies’ responses to the pandemic:
Are they applying the techniques they have meticulously designed to anticipate the user’s experience, hold their attention and influence their actions and behaviors related to the pandemic?
Are they enforcing responsible advertising policies, including closing loopholes, setting clear industry-wide principles and enforcing firm policies to avoid fraud and misdirection?
Are they providing data to public health authorities (including geographic information, data about people’s movements, high-resolution population density maps, search and location data, trends analyses, depending on the platform) and independent researchers - without compromising privacy?
Efforts by the platforms to remove misinformation are making the news when they involve notable political and media figures. On Friday, March 27, Google confirmed the removal of the Infowars Android app from its Play Store, after the app posted a video in which Alex Jones disputed the need for social distancing, shelter in place, and quarantine efforts meant to slow the spread of the novel coronavirus. Over the past few days Twitter has removed tweets from Rudy Giuliani, Charlie Kirk, actress and activist Alyssa Milano, conservative magazine The Federalist (which also had its account restricted), and Laura Ingraham for violating its terms of service when they posted false information about cures for the coronavirus. Facebook removed a post from Brazilian President Jair Bolsonaro in which he claimed that hydroxychloroquine is working to cure the virus. Conversely, Twitter came under fire for NOT removing a post from Elon Musk, founder of Tesla and SpaceX, with a false assertion about the coronavirus, and for removing one from John McAfee, founder of the eponymous security solutions company, only after it had been widely shared. And as of March 28 Facebook maintained that The Federalist’s post did not violate its policies.
There are mixed views in the media about whether the platforms’ aggressive practices for managing misinformation about the coronavirus should extend beyond the pandemic, including into political content and the electoral process. An article in Foreign Affairs noted, “The platforms’ approach to pandemic information has been aggressive, effective, and necessary—but it cannot and should not be applied to politics.” The article notes that false stories about health information are easier to detect; enable more effective moderation by artificial intelligence; lend themselves to clearer evidentiary standards; are more conducive to consensus; and are far less subjective or likely to provoke controversy. Further, it notes, “False speech about politics is a necessary byproduct of living in a free society (unless it runs afoul of carefully circumscribed laws against libel and slander). Identifying false claims about politics is a laborious affair that requires difficult judgments about the nature of truth. As a result, the social consensus in favor of reducing political misinformation on social media is more limited.”
Some reports, including a New York Times article, have expressed concern that the platforms’ all-consuming efforts to combat misinformation related to the virus will detract from their ability to address new practices being used by both domestic and foreign players to influence the 2020 election. Strains on their technical infrastructure and the challenges of coordinating “a vast election effort spanning multiple teams and government agencies” from employees’ homes may increase the difficulty.
This week there was a flurry of articles encouraging digital literacy; that is, encouraging consumers of media to improve their own ability to detect misinformation and avoid sharing it. Sources ranged from The Daily Tar Heel, the student newspaper at the University of North Carolina at Chapel Hill, which asked its community to “do our part to flatten the misinformation curve and fight fake news whenever we see it”, to The Atlantic, which noted: “Here’s How to Fight Coronavirus Misinformation: Send this to the person in your life who needs to read it”. The Advisory Board, a research organization for leaders in the healthcare industry, offered a guide to why misinformation spreads so readily, and how to spot it and deter its spread.
The Global Disinformation Index (GDI), a U.K. research organization that provides the advertising community with non-partisan and independent ratings of a site’s disinformation risks, released a new report this week specifically focused on coronavirus disinformation sites. It showed that ad tech players continue to serve up ads and provide ad revenue streams to known disinformation sites peddling coronavirus conspiracies. That means they are placing unknowing brands’ advertisements (in the survey these included Amazon Prime Video, Hyundai, Jeep, Samsung, Wayfair, Spotify and others) on websites with false claims about the pandemic. In a sample survey of nearly 50 sites carrying coronavirus conspiracies in the U.S., U.K. and Germany, GDI found Google provided ad services to 86% of these sites. (Google responded, “Similar to past reports, this report is flawed. GDI doesn’t detail how it defines disinformation, nor does it provide the full list of domains examined.”)  For more information about GDI’s methodology, see the footnote below.
On March 25, a group of attorneys general led by Pennsylvania Attorney General Josh Shapiro sent letters to Amazon, Walmart, Craigslist, Facebook and eBay asking them to create and enforce policies aimed at preventing price-gouging of products related to COVID-19. “We believe you have an ethical obligation and duty to help your fellow citizens in this time of need by doing everything in your power to stop price gouging in real-time”. The letters cite several examples of price-gouging uncovered by reporters, including a 2-liter bottle of hand sanitizer being sold on Craigslist for $250, and an 8-ounce bottle being sold on Facebook for $40. The attorneys general are asking for the platforms' voluntary cooperation.
On March 26, companies from across the tech industry joined with health organizations in the #BuildForCOVID19 global hackathon. WHO, scientists from the Chan Zuckerberg Biohub and experts from other industries will be joined by teams from Facebook, Slack, Pinterest, TikTok, Twitter, WeChat, Giphy, Slow Ventures and more to build tools to help tackle some of the health, economic and community challenges coming from the outbreak.
The White House will join forces with major tech companies to pool supercomputing resources in the battle against the novel coronavirus. The initiative is meant to help researchers gain access to computing power to help “discover new treatments and vaccines." IBM helped launch the “COVID-19 High Performance Computing Consortium,” and Amazon, Microsoft and Google also confirmed they would participate. A related article noted, “The coronavirus crisis is testing whether Big Tech can restore its reputation in Washington after years of backlash and scrutiny. Companies are presenting themselves as advocates of the U.S. government in its efforts to stop the spread of the virus, keep unemployment down, push out critical public health information and support workers financially. But if the industry doesn't tread carefully – especially to preserve users' privacy – companies risk coming out of the crisis looking like villains rather than heroes.”
In an article in Lawfare, Evelyn Douek, an S.J.D candidate at Harvard Law School studying online regulation of speech, called out how different the platforms’ approaches have been for the pandemic, including greater transparency, greater use of interest balancing and greater use of artificial intelligence. She noted, “There are three lessons to be learned from this. First, platforms should be commended for being upfront about the likely effects of this change in approach. But they should also be collecting data about this natural experiment and preparing themselves to be equally as transparent about the actual effects of the change. Second, platforms and lawmakers should remember these announcements in the future. The candid announcements from platforms in recent weeks about the costs of relying on AI tools should be nailed to the door of every legislative body considering such an approach. Finally, regulators and academics need to recognize that these announcements are really just an extreme version of the choices that platforms are making every day. Content moderation at scale is impossible to perform perfectly—platforms have to make millions of decisions a day and cannot get it right in every instance. Because error is inevitable, content moderation system design requires choosing which kinds of errors the system will err on the side of making.” She further noted that “the pandemic is shaping up to be a formative moment for tech companies...we’re asking them to step up, but we also need to keep thinking about how to rein them in”.
Some advocates are asking the platforms to go further: use their massive capabilities in segmentation, microtargeting and personalization to deliver “21st-century PSAs”: personalized information that motivates the most effective actions by each consumer group to flatten the curve. In an op-ed in Wired, Tristan Harris, the president and cofounder of the Center for Humane Technology, writes, “This emergency, this moment, calls for a fundamentally new approach to technology—to abandon the myth of neutral metrics and engagement, and restructure technology to prioritize this corrective lens that can help save millions of lives.” He recommends the platforms: change internal performance measures to emphasize the highest-priority actions; use their ad targeting capabilities (including the ranking of posts within feeds, notification delivery, and group suggestions) to personalize behavioral cues; use their persuasive powers of “social proof”, social norms and signaling (as they have tested with voting); and use their capabilities in localization to assist in relief coordination and coalition-building.
In a related editorial on Medium, Harris and CHT co-founder Aza Raskin ask the platforms to move from “informing” to “persuading”. They define five persuasion principles - many of which are already the basis of persuasive ads on the platforms - that could “accelerate life-saving choices”:
👯 Social Proof (we do what others do)
🔭 Make the future feelable (through compelling graphics)
🙋🏻♀️ Make it personal (show our actions’ impact on our Friends)
✅ Make it concrete (quizzes, surveys, checklists)
👀 Social comparison (we compare ourselves to others)
Joan Donovan, the newly-appointed Research Director of Harvard Kennedy School’s Shorenstein Center and one of the world’s leading analysts of how internet misinformation is seeded and spread, notes that the platforms may in fact be getting better at maintaining a “rolling index” of “tripwire” terms that redirect users to reputable sources of information. However, they are perpetually behind the culture. For example, younger consumers colloquially refer to the virus as ‘rona or “the ‘rona” in social media posts and these are not detected; searches and shares related to the “Chinese virus” exploded after the term was used by President Trump; and text messages and posts about “martial law” and the possibility of a national quarantine were widely shared after several states called upon the National Guard to distribute food or medical supplies.
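The “tripwire” approach Donovan describes - and its blind spot for slang - can be illustrated with a minimal sketch. This is a hypothetical illustration, not any platform’s actual system: the term index, the `REDIRECT_TARGET` value and the `check_post` function are invented for the example. The point is that an exact-match index catches posts using official terminology but misses colloquialisms like “the ’rona” until the index is updated.

```python
import re

# Illustrative "tripwire" index: terms that, when matched, trigger a
# redirect to a reputable source. Hypothetical, hand-maintained list.
TRIPWIRE_TERMS = {"coronavirus", "covid", "covid-19"}
REDIRECT_TARGET = "reputable-health-source"  # placeholder for, e.g., a WHO page

def check_post(text):
    """Return the redirect target if the post contains a tripwire term,
    else None. Tokenizes on word characters, apostrophes and hyphens."""
    words = set(re.findall(r"[\w'-]+", text.lower()))
    if words & TRIPWIRE_TERMS:
        return REDIRECT_TARGET
    return None

# A post using the indexed term trips the wire...
assert check_post("New COVID-19 cure revealed!") == REDIRECT_TARGET
# ...but slang slips past until someone adds "'rona" to the rolling index,
# which is why Donovan notes the platforms are "perpetually behind the culture".
assert check_post("everyone I know has the 'rona") is None
```

Keeping such an index current is the hard part: each new coinage (“the ’rona”, “Chinese virus”, “martial law” rumors) has to be spotted and added before the associated misinformation peaks.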
Some reports also note that misinformation about the virus is significantly more challenging to detect and remove because it is being created and shared so widely by regular social media users: “it is sporadic, not networked”. Most time and investment in countering misinformation has gone into systems to detect, monitor and combat sophisticated digital misinformation campaigns coming from coordinated, state-backed campaigns, often in attempts to influence elections or sow discord. However, there are also reports - most disputed by the relevant governments - that Russian and Chinese media have deployed disinformation campaigns to worsen the impact of the coronavirus, generate panic and sow distrust.
Independent researchers have struggled to track misinformation traveling from person to person, or through closed groups of people, through email or texts that are not seen by the general public. Text messages are particularly difficult for independent researchers to trace, especially when messages ― like some recent texts about a national quarantine in the U.S. - are delivered as graphical images as opposed to words that computers can more easily analyze. Those pushing misinformation may be changing tactics away from social media to thwart the major platforms’ efforts to catch and block falsehoods. The sophistication of the campaign about the U.S. quarantine resulted in an interagency effort — involving the NSC, FBI, intelligence agencies, the Department of Homeland Security and the State Department — to determine who is behind the apparent disinformation campaign. Encrypted messaging services such as WhatsApp and iMessage and private groups on Facebook are probably also among the greatest sources of misinformation, but impossible or challenging for researchers to monitor in real time.
A preliminary analysis of the online conversation surrounding the coronavirus pandemic prepared by social network analysis firm Graphika suggests that conservative and right-wing voices played an outsized role in spreading mis- and disinformation online about the coronavirus pandemic worldwide. Graphika produced a set of three global network maps that capture the mainstream global conversation around coronavirus at monthly intervals as part of an effort to map and analyze what the World Health Organization called an “infodemic”. Their key findings were:
The online conversation about the pandemic has become more complex over time; a large “mega cluster” of US right-wing accounts diminished as the coronavirus conversation moved into the online mainstream.
Particularly in the US, Italy, and France, more right-wing accounts are involved in the conversation, and these accounts are more active in their engagement than their left-wing counterparts.
A number of groups are leveraging the conversation around coronavirus to propagate racism and anti-immigration sentiment, or to draw attention to immigration policy in their respective countries.
At first, conspiracy theories appeared to revolve around the causes of the outbreak, but as the pandemic continued to spread, conspiratorial content has become more closely focused on governmental responses to the outbreak.
Habitual sharers of health misinformation increased their share of voice in the coronavirus conversation in February; the data highlights a serious distrust of established and official sources of health information.
Narratives designed to stoke geopolitical tensions, including some seeming to originate with the Kremlin, seem focused on undermining trust in global institutions and drawing attention to the failures of other governments, predominantly the Chinese response.
The core narratives of the misinformation efforts were: racism and xenophobia, conspiracy communities, health misinformation, and geopolitical tensions. Conversely, the spread of #FlattenTheCurve is a small case study in the efficacy of science communication that attests to the positive, and likely life-saving, impact of credible information online.
- Lecture in “Politics of the Press” class at Harvard Kennedy School, March 22, 2020.