• On May 21, was accused of ‘censorship’ after removing a video of a former World Health Organization chief saying coronavirus could ‘burn out’ before a vaccine was found. YouTube has since said the deletion was an error and that the video had been reinstated. Following the incident, YouTube explained that it was sifting through massive numbers of videos on its site looking for harmful fake news and that its algorithms occasionally made mistakes.[1]

  • On May 20, updated its Community Guidelines to include a page specifically on COVID-19 misinformation. Its COVID-19 Medical Misinformation Policy doesn't allow content about COVID-19 that poses a serious risk of egregious harm, or that spreads medical misinformation that contradicts the World Health Organization (WHO) or local health authorities’ medical information about COVID-19. This is limited to content that contradicts WHO or local health authorities’ guidance on:

    • Treatment

    • Prevention

    • Diagnosis

    • Transmission

    For a detailed description of the policy, see the footnote.

  • In the first week of May, multiple media sources referred to YouTube “finally” deleting conspiracy theorist David Icke's official channel, having repeatedly warned him not to violate its policies. Icke has spread misinformation for years via social and traditional media, and his YouTube channel had close to one million subscribers at the time of its deletion. The decision to “terminate” Icke’s channel appears to have been triggered by Icke linking COVID-19 symptoms to 5G mobile networks, in violation of YouTube's policies.[1]

  • On April 28, communicated to partners that YouTube is expanding the use of “fact check information panels” – a feature launched in Brazil and India last year – to the United States. The panels are designed to highlight relevant, third-party fact-checked articles above search results for relevant queries. A fact check information panel will appear when a relevant fact-checked article is available from an eligible publisher, and it will only show when people search for a specific claim.[2]

  • On April 20, the Wall Street Journal reported that Alphabet wants to substantially limit the information a key auditor of YouTube can share about the risks of advertising on the video service. The auditor, OpenSlate, is refusing to sign a contract that would prevent it from reporting to ad clients when ads have run in videos with sensitive subject matter, including information about COVID-19.[3]

  • On April 7, in an interview with Axios, Chief Product Officer Neal Mohan said YouTube has been focused on a twofold approach: making authoritative information more prominent and aggressively removing policy-violating content. An information panel on its home page linking to national health agencies' websites represents the first time YouTube has linked to a text site rather than a video. It has expanded its existing medical misinformation policies, which prohibit promoting false cures or encouraging people not to see a doctor, to also bar promoting actions that go against recommendations from national health authorities. Unlike Facebook and Twitter, YouTube's policies are entirely focused on the content of a video and not on who is doing the speaking.[4]

  • On April 6, confirmed it will remove or reduce recommendations for videos that falsely link 5G to the virus after reports of people setting phone masts on fire and attacking phone company workers. It will actively remove videos that breach its COVID-19 policies, but content that is merely conspiratorial about 5G mobile communications networks, without mentioning the coronavirus, is still allowed on the site.[5]

  • On April 2, expanded monetization of content mentioning or featuring COVID-19 to all creators and news organizations, provided they follow both the Advertiser-Friendly Guidelines and the Community Guidelines.[6] Subsequent to this change, the Tech Transparency Project, a not-for-profit watchdog organization, reported the company was running advertisements with videos pushing herbs, meditative music, and potentially unsafe over-the-counter supplements as cures for COVID-19. When notified of these, Google removed four of the videos, noting that the other three were not misinformation but “wellness”-related.[7]

  • On March 19, communicated that it will create a new “row” of verified videos on its homepage for displaying trustworthy videos about the coronavirus. It will pull from a list of authoritative news outlets and local health authorities that upload to YouTube and are more reliable than general videos on the subject uploaded by random users. Videos are selected algorithmically, with hundreds of different signals used to help pick them. This technique of boxing out videos, known as a shelf, has been used by the platform in the past to help promote legitimate information as world news events unfold.[8]

  • On March 16, confirmed that it “will start relying more on technology to help do some of the work normally done by reviewers, which may result in some accidental removal of content that does not violate our policies”.[9]

  • On March 11, communicated a reversal of its Sensitive Events Policy: YouTube was previously de-monetizing videos that included “more than a passing mention” of COVID-19, but will now allow monetization for videos that discuss the coronavirus “on a limited number of channels”.[10]

  • Added the CDC’s YouTube channel as a featured channel on its homepage, and is giving priority to authoritative sources in search.[11]

  • Using the homepage to direct users to the WHO or other locally relevant authoritative organizations.[12]

  • Pulled ads from videos that discuss COVID-19[13], while donating ad inventory to governments and NGOs in impacted regions to use for education and information.[14]

  • According to Bloomberg, recent YouTube searches for specific coronavirus conspiracies showed videos debunking those untruths.[15]


Public Knowledge

1818 N Street NW
Suite 410
Washington, DC 20036