Meta wants to stop removing COVID-19 disinformation from its platforms, including Facebook. After adopting a policy of deleting coronavirus-related content that could put people's health at risk, the company now seeks to soften those rules.

Nick Clegg, Meta's President of Global Affairs, has asked the Oversight Board to review how the company handles this disinformation so that it no longer has to remove it outright.

Although Meta says it has removed more than 25 million false posts about COVID-19, the company no longer wants to be so drastic. Clegg suggests that posts could instead be verified by third parties, with no need to remove them unless they pose an imminent risk of harm.

According to Meta, the COVID-19 situation has evolved. In many countries with high vaccination rates, life is returning to normal. Instead of removing fake news about vaccines or mask use, the company proposes labeling or demoting it directly.

The technology company says it is “fundamentally committed to freedom of expression” and that its applications are important for people to make their voices heard.

But resolving the inherent tension between freedom of expression and safety is not easy, especially in the face of rapid and unprecedented challenges like those posed by the pandemic.

The company is seeking the Oversight Board's judgment on these policies. The decision would help it respond to future health emergencies.

Although Meta claims it remains committed to fighting COVID-19 disinformation, actions like these cast doubt on its intentions.

Removing 25 million posts and deleting more than 3,000 accounts, pages, and groups dedicated to spreading false news about the pandemic seems modest considering that Facebook has more than 2 billion active users worldwide.

The company announced its plan to combat hoaxes during the first months of the pandemic, but did not commit to removing them. With the help of fact checkers, Meta labeled over 190 million posts as false, altered, or lacking context.

The warnings were ineffective, or the flagged content simply appeared further down the feed. Disinformation continued to circulate for almost a year. After months of criticism, Facebook finally took action and confirmed that it would remove the fake news.

By asking its Oversight Board to consider relaxing its rules against misinformation, Meta makes it clear that its users' wellbeing is not its priority.
