Hundreds of posts spreading misinformation about Covid-19 are being left online, according to a report from the Center for Countering Digital Hate.
Some 649 posts containing fake cures, anti-vaccination propaganda and conspiracy theories about 5G were reported to Facebook and Twitter.
The report suggests 90% of them remained visible online afterwards, without any warning attached.
Facebook said the sample was not “representative”.
A Facebook spokesman said: “We are taking aggressive steps to remove harmful misinformation from our platforms and have removed hundreds of thousands of these posts, including claims about false cures.
“During March and April we placed warning labels on about 90 million pieces of Covid-19-related content, and these labels stopped people viewing the original content 95% of the time.
“We will notify anyone who has liked, shared or commented on Covid-19 posts that we have since removed.”
Twitter said it was prioritizing the removal of Covid-19 content “when it has a call to action that could potentially cause harm.”
“As we have said previously, we will not take enforcement action on every Tweet that contains incomplete or disputed information about Covid-19. Since introducing these new policies on March 18, and as we have doubled down on our technology, our automated systems have challenged more than 4.3 million accounts that were targeting discussions around Covid-19 with spammy or manipulative behaviour.”
Imran Ahmed, chief executive of the Center for Countering Digital Hate, said the companies were “shirking their responsibilities”.
“Their systems for reporting misinformation and dealing with it are simply not fit for purpose.
“Social media giants have repeatedly said they are taking Covid-related misinformation seriously, but this new research shows that even when they are handed the posts promoting misinformation, they fail to act.”
Rosanne Palmer-White, director of the youth action group Restless Development, which also took part in the research, said young people were “doing their bit to stop the spread of misinformation” but that social media companies were “letting them down”.
Both Twitter and Facebook face questions from the UK’s Digital, Culture, Media and Sport sub-committee about how they are handling coronavirus disinformation.
MPs were unhappy with a previous session, and have asked for more detailed answers and for more senior executives to attend the next hearing.
For the study, ten volunteers from the UK, Ireland and Romania searched social media for misinformation from late April to late May.
They found posts suggesting people could rid themselves of the coronavirus by drinking aspirin dissolved in hot water, or by taking zinc and vitamin C and D supplements.
Twitter was judged the least responsive, acting on only 3% of the 179 posts reported to it.
Facebook removed 10% of the 334 posts reported to it and flagged a further 2% as false. Instagram, which Facebook owns, acted on 10% of the 135 posts reported.
Both social networks insist they have made efforts to keep fake coronavirus news under control.
Twitter has begun labelling tweets that spread misinformation about Covid-19, and Facebook has removed some content, including posts from groups claiming the rollout of the 5G network caused the spread of the virus.
Analysis by Marianna Spring, specialist disinformation and social media reporter
All eyes have been on how social media sites have dealt with misleading information on their platforms in the past few weeks – and all eyes will be on them again today, as they are grilled by MPs.
During the pandemic, social media companies made a number of changes to their policies to try to deal with harmful and misleading information. Facebook and YouTube both say they have cracked down on conspiracy theories that could cause harm.
And in a high-profile move, Twitter decided to label a misleading tweet from US President Donald Trump, although it concerned postal voting rather than the coronavirus.
But these policy changes have not proved easy to implement. In practice, misleading posts often go unreported or, when they are reported, are not always removed. The question of how much harm a given post poses seems to underlie the problem.
Posts that present an immediate threat to life are removed more quickly. But misleading posts that pose a less immediate threat, including those from anti-vaccination groups, can prove just as dangerous.
A BBC investigation into the human cost of disinformation found that the potential for indirect harm caused by conspiracies and bad information that undermines public health messages – or an effective vaccine – could be enormous.
And as disinformation about protests and other news events floods social media, it is becoming clear that the pandemic is just one of many battles against misinformation still to be fought.