Facebook has updated its rules for dealing with posts containing depictions of blackface and common anti-Semitic stereotypes.
Its community standards now explicitly state that such content should be removed if used to target or mock people.
The company said it consulted more than 60 external experts before making the change.
But one activist said she still had concerns about the firm's broader efforts against racism.
“Blackface is a problem that has existed for decades, which is why it is surprising that it is only being addressed now,” said Zubaida Haque, interim director of the Runnymede Trust, a think tank on racial equality.
“It is profoundly harmful to the lives of people of color in terms of the hatred that is directed at them and the spread of myths, lies and racial stereotypes.
“We welcome Facebook’s decision.
“But I’m not entirely convinced that these steps are part of a solid strategy to proactively address this hatred instead of being crisis-driven.”
Policies on hate speech
Facebook’s rules have long included a ban on hate speech in relation to race, ethnicity, and religious affiliation, among other characteristics.
But now they have been revised to specify:
- caricatures of black people in the form of blackface
- claims that Jewish people run the world or control major institutions such as media networks, the economy or the government
The rules also apply to Instagram.
“This type of content has always gone against the spirit of our hate speech policies,” said Monika Bickert, head of content policy at Facebook.
“But it can be really difficult to take concepts … and define them in a way that allows our content reviewers around the world to identify violations consistently and fairly.”
Facebook said the ban will apply to photos of people portraying Black Pete, a Saint Nicholas aide, who traditionally appears in blackface at winter festival events in the Netherlands.
And it may also remove some photos of English Morris folk dancers who paint their faces black.
However, Ms Bickert suggested that other examples, including critical posts calling attention to the fact that a politician once wore blackface, might still be allowed once the policy goes into effect.
The announcement coincided with Facebook’s latest data on handling problem posts.
The tech firm said it removed 22.5 million hate speech posts between April and June, up from 9.6 million in the previous quarter.
It said the increase was “largely driven” by improvements to its automatic detection technologies in several languages including Spanish, Arabic, Indonesian and Burmese. This suggested that much of such content had been missed in the past.
Facebook has acknowledged that it is still unable to provide a measure of the “prevalence of hate speech” on its platform, in other words whether the problem is actually getting worse.
It already provides such a metric for other topics, including violent and graphic content.
But a spokesman said the company hoped to start providing a figure by the end of the year. He also said the social network intended to start using a third-party auditor to verify its numbers in 2021.
One campaign group said it suspected hate speech was indeed a growing problem.
“We have long felt that a major pandemic has the potential to inflame xenophobia and racism,” said Imran Ahmed, chief executive of the Center for Countering Digital Hate (CCDH).
[Chart: Hate speech post removals on Facebook — more than a five-fold increase compared to last year]
The Facebook report also revealed that staffing problems caused by the pandemic meant it took action against fewer suicide and self-harm posts, on both Instagram and Facebook.
And on Instagram, the same problem resulted in fewer removals in the category it calls “child nudity and sexual exploitation.” Removals fell by more than half, from about one million to 479,400 posts.
“Facebook’s inability to take action against harmful content on their platforms is inexcusable, especially when they were repeatedly warned at the start of this pandemic that lockdown conditions were creating a perfect storm for online child abuse,” said Martha Kirby of the NSPCC.
“The crisis has highlighted how tech companies are unwilling to prioritise the safety of children, responding to harm after it happens rather than designing basic safety features into their sites to prevent it,” she added.
However, on Facebook itself, removals of such posts increased.