After three months of tracking platform efforts to counter misinformation about the novel coronavirus, we are putting forward a specific policy solution -- a “superfund for the internet” -- to counter the misinformation flowing over digital platforms.

The Politicization of the Pandemic

As background: even early in the pandemic, some hypothesized that the platforms would be more aggressive about addressing misinformation -- in part to counter a growing “techlash” narrative and the rising risk of regulation, but also because the pandemic presented a fundamentally different kind of information challenge. For example, Claire Wardle, co-founder and leader of First Draft, the world’s foremost nonprofit focused on research and practice to address mis- and disinformation, argued: “There are no two sides with coronavirus, so they don’t have people on the other side saying: ‘We want this,’ the way you do with anti-vaxxers or political misinformation. [The platforms] are freer to act.”[1]

Facebook CEO Mark Zuckerberg, who has been notoriously resistant to moderation of political content on the platform, also noted, “The difference between good and bad information is clearer in a medical crisis than in the world of, say, politics. When you’re dealing with a pandemic, a lot of the stuff we’re seeing just crossed the threshold. So it’s easier to set policies that are a little more black and white and take a much harder line.”[2]

But that’s not how it played out. Very quickly, information about the novel coronavirus pandemic became every bit as politicized as what we normally consider highly partisan topics. This information -- about the severity of the virus, its origins, treatments, and whether or how to “open America,” to name a few examples -- has been subject to the same patterns of creation and distribution as political content designed to sow division and undermine democratic institutions. That includes content created or shared by trolls and bots, foreign interference, and amplification by partisan players. The only good news about this nightmare scenario is that it makes the pandemic a very appropriate model for how platforms should manage other types of misinformation, including overtly political misinformation. In fact, they may be one and the same, since there is now strong evidence of the danger to democracy posed by pandemic-related disinformation from foreign parties, which is being used to weaken democratic checks on power or interfere with elections.[3]

How Digital Platforms Have Countered Pandemic Misinformation

Although we tracked all the major platforms through three months of the pandemic, concerns about information quality and the media generally center on companies engaged largely in open (non-encrypted) information distribution, such as Google, YouTube, Facebook, and Twitter. For this reason, this perspective focuses primarily on those platforms.

Simply put, based on our tracking, the efforts of these platforms to counter misinformation about the pandemic far exceed anything they’ve done -- or said they could do -- in the past. In the process, they have undermined their own past arguments for an arguably lax approach, ranging from “It’s too hard!” to “Free speech!” to “It’s not our job.”

But besides their willingness to change their posture and user experience design in favor of content moderation, one thing in particular has enabled their approach: more extensive partnering with other organizations, whose authoritative content and information analysis have enabled the platforms to check sources, up- and down-rank content, direct people who’ve encountered misinformation to debunking sites, and understand which kinds of misinformation may create the greatest harm. Those relationships have allowed the platforms to deploy some of the most proven strategies for countering misinformation[4], regardless of content or context: countering with accurate information, evaluating the source, avoiding binary solutions, setting priorities for remediation, and increasing the salience[5] of accuracy. And to some extent, it’s been the platforms’ partnerships with trusted sources of authoritative information -- including the WHO, the CDC, and fact-checking organizations -- that have allowed them to act so aggressively without appearing politically biased.

A Policy Solution for Misinformation: “A Superfund for the Internet”

In general, we give these platforms credit for their efforts during the coronavirus crisis thus far. But we got a preview of what may happen to all that good work after the crisis when Mark Zuckerberg said, in that same interview[6], that it was “hard to predict” how things would play out after the pandemic, and reiterated that the kind of threats posed by misinformation about the virus were “in a different class” (though many weeks later, he may not still feel that way).

Given how much of the misinformation problem is generated through the pervasive reach, speed, and power of digital platforms, we believe it is critical that the effective strategies described above become fully embedded within the major information distribution platforms. We would like to see the platforms themselves, accountable to independent expert bodies established through legal mandate, master the process of identifying, minimizing, and helping the public navigate disinformation -- without interfering with constitutionally protected speech rights. This is particularly necessary in contexts where the stakes of information quality are high, where the spread of mis- and disinformation is virulent and destructive, and where salience or engagement is high. Given the risk of harm, the platforms’ efforts shouldn’t rely on their continued goodwill and philanthropy, or be conducted in the absence of oversight.

Given the enormous and now proven value of information analysis in supporting public health and public institutions, we can imagine, and are now developing, a solution in which platforms are compelled to invest much more in the tools and approaches that work. We’re thinking of this as a trust fund, or “superfund,” modeled on the 1980 Superfund for cleaning up toxic waste sites. Unlike other, similar concepts (like here[7], here[8], here[9], here[10], here[11], here[12], here[13], and here[14]), though, we don’t believe a punitive “tax” on advertising revenue -- which isn’t really the direct source of the problem -- is the preferable approach. We favor an approach of value creation, since the pandemic has given us such a powerful model for its benefits. It has essentially created a market in which the platforms have more demand for -- and journalistic organizations have more supply of -- information cleansing services. The platforms should pay for those services, at a fair price, to help clear the toxic junk from their networks. It’s an exchange of real value that would preclude any assumption, expectation, or threat of editorial influence. In doing so, we can provide an essential new revenue stream to the local journalistic organizations and information analysts who also help protect our public and democratic institutions.

Not “Back to Normal” for Misinformation

Just like so many other aspects of life during the pandemic, we shouldn’t expect -- or allow -- the platforms to go “back to normal” when the crisis is over. As the whole world has gone online for working, learning, telehealth, and entertainment, the platforms’ power has only grown, and with it, their responsibility and accountability to the public. We need both a policy framework[15] and a specialized regulatory authority[16] to limit their anti-competitive behaviors, protect Americans’ privacy, and stop or slow the spread of disinformation online. A superfund for the internet, which also fosters reputable journalism, is the next step.

Public Knowledge

1818 N Street NW
Suite 410
Washington, DC 20036