This content included fake preventative measures for COVID-19, the respiratory illness caused by the novel coronavirus, and exaggerated cures, Facebook’s vice president of integrity, Guy Rosen, said Thursday in a press call.
The social media giant has been taking a tougher stance against health misinformation since the coronavirus pandemic started. On Monday, Facebook said it’s expanding a list of debunked claims about COVID-19 and vaccines that it doesn’t allow on its platforms. Some of the false claims include stating that the coronavirus is human-made or manufactured or that vaccines are more dangerous than getting the disease. Groups, pages and accounts on the main social network and Instagram that share these false claims repeatedly could get pulled down altogether.
“After the pandemic is over, we will continue to talk to health authorities and make sure that we are striking the right approach going forward consistent with our misinformation policies and principles,” said Monika Bickert, who oversees global policy management at Facebook.
A new oversight board tasked with reviewing some of the social network’s toughest content moderation decisions has challenged some of the company’s calls on COVID-19 misinformation. In January, the board overturned Facebook’s decision to pull down a COVID-19 post for its potential to cause harm. The board found the social network’s rules addressing health misinformation “to be inappropriately vague” and urged the company to create a new standard.
Facebook also shared new data on Thursday about the amount of content it removed in the fourth quarter for violating rules against hate speech, harassment, nudity and other types of offensive content. The company has faced increased scrutiny to do a better job of moderating content, especially after the riot on Capitol Hill, which underscored how online hate can spill into the real world. Bickert said the company worked with law enforcement officials, including helping them identify people who posted photos of themselves at the riot after the attack was over.
Facebook said the rate at which users see hate speech, nudity and violent and graphic content on its platform is also dropping. For every 10,000 views of content, there are seven to eight views of hate speech, Facebook said. In the fourth quarter, the company said it took action against 26.9 million pieces of hate speech content, up from 22.1 million in the third quarter. Facebook attributed the uptick to technological updates in Arabic, Spanish and Portuguese. Instagram took action against 6.6 million pieces of hate speech content in the fourth quarter.
Facebook also said it’s catching more bullying and harassment content before users report it, thanks to technological updates that detect hurtful comments. The company took action against 6.3 million pieces of bullying and harassment content, up from 3.5 million in the previous quarter. Instagram took action against 5 million pieces of bullying and harassment content.