Facebook has been an arbiter of truth; here are examples

Mark Zuckerberg, chairman and CEO of Facebook, arrives to testify at a House Financial Services Committee hearing reviewing Facebook and its impact on the financial services and housing sectors on Wednesday, October 23, 2019.

Bill Clark | CQ-Roll Call, Inc. | Getty Images

Facebook CEO Mark Zuckerberg criticized rival Twitter on Thursday for its decision to fact-check President Trump’s tweets, telling CNBC he does not “think Facebook or internet platforms in general should be the arbiters of truth.”

Zuckerberg’s argument is that Facebook should give people a voice and allow freedom of expression. The company even permits misinformation in politicians’ ads on Facebook.

“Political speech is one of the most sensitive parts of a democracy, and people should be able to see what politicians are saying,” Zuckerberg told CNBC.

But despite Zuckerberg’s comments, there have been cases throughout Facebook’s history where the company has played the role of arbiter of truth.

Here are some examples.

Requiring real names

For much of the company’s history, Facebook has required people to use their real names on their profiles. The company’s policy states that “the name on your profile should be the name that your friends call you in everyday life. This name should also appear on an ID or a document on our ID list.”

Notably, this rule has been abused by trolls who target and harass transgender Facebook users by reporting them for not using their legal names. In some cases, these users have had their accounts suspended. The harassment became so severe that at one point in 2014, former Facebook executive Chris Cox apologized.

Coronavirus facts

Since the coronavirus pandemic began to spread, Facebook has played a proactive role in guiding its users to accurate Covid-19 information and in hiding or removing false information about the virus.

Facebook announced in January that it would “remove content containing false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities and that could cause harm to people who believe them.” In March, the company launched a coronavirus information center that appears at the top of users’ news feeds.

Earlier this month, Facebook said it had placed warning labels on 50 million posts containing false claims about Covid-19. The company said these labels deter users from clicking through to the inaccurate content 95% of the time.

When the “Plandemic” video started going viral earlier this month, Facebook decided to remove posts that included it, because the video suggested “that wearing a mask can make you sick” and could lead to imminent harm.

In March, Facebook also removed a post from Brazilian President Jair Bolsonaro. In the post, Bolsonaro claimed that hydroxychloroquine could be used as a treatment for Covid-19.

“We are removing content on Facebook and Instagram that violates our community standards, which do not allow misinformation that could lead to physical harm,” Facebook said at the time.

Removing far-right conspiracy theorists

Earlier this month, Facebook made the decision to remove pages dedicated to the QAnon conspiracy theory. QAnon is a far-right movement whose followers believe there is a “deep state” plot against Trump.

Facebook removed five pages, which collectively had 133,000 followers. The pages violated Facebook’s policies against coordinated inauthentic behavior, which the company defines as using multiple fake accounts working together to spread content that misleads people.
