The report found that content from 10 “superspreader” sites sharing health misinformation had almost four times as many Facebook views in April 2020 as equivalent content from the sites of 10 leading health institutions, such as the World Health Organization and the Centers for Disease Control and Prevention.
The social media giant, which has been under pressure to curb misinformation on its platform, has made amplifying credible health information a key element of its response. It also started removing misinformation about the novel coronavirus outbreak that it said could cause imminent harm.
“Facebook’s algorithm is a major threat to public health. Mark Zuckerberg promised to provide reliable information during the pandemic, but his algorithm is sabotaging those efforts by driving many of Facebook’s 2.7 billion users to health misinformation-spreading networks,” said Fadi Quran, campaign director at Avaaz.
“We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services,” said a Facebook company spokeswoman.
“Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98 million pieces of COVID-19 misinformation and removed 7 million pieces of content that could lead to imminent harm. We’ve directed over 2 billion people to resources from health authorities and when someone tries to share a link about COVID-19, we show them a pop-up to connect them with credible health information,” she said.
Avaaz’s report also said that warning labels were applied inconsistently, appearing on some posts but not others even after fact-checkers had rated the underlying claims false.
The report tracked how content from a sample of misinformation-sharing websites spread on Facebook by analyzing publicly available Facebook data between May 2019 and May 2020.