US tech firms roll back misinformation curbs ahead of 2024 polls


WASHINGTON – As a global election season widely expected to be mired in misinformation and falsehoods fast approaches, the big US-based tech platforms are walking back policies meant to curb them, stoking alarm.

Whether it is YouTube scrapping a key misinformation policy or Facebook altering its fact-checking controls, the social media giants are showing a certain weariness of playing sheriff in the “Internet Wild West”.

Changes have come in a climate of layoffs, cost-cutting measures and pressure from right-wing groups that accuse the likes of Facebook-parent Meta or YouTube owner Google of suppressing free speech.

This has spurred tech companies to loosen content moderation policies, downsize trust and safety teams and, in the case of Mr Elon Musk-owned X (formerly Twitter), restore accounts known for pushing bogus conspiracies.

Those moves, researchers say, have eroded their ability to tackle what is expected to be a deluge of misinformation during more than 50 major elections around the world in 2024, not only in the United States, but also in India, Africa and the European Union.

“Social media companies aren’t ready for the 2024 election tsunami,” the watchdog Global Coalition for Tech Justice said in a report in September.

“While they continue to count their profits, our democracies are left vulnerable to violent coup attempts, venomous hate speech and election interference.”

In June, YouTube said it would stop removing content that falsely claims the 2020 US presidential election was plagued by “fraud, errors or glitches”, a move sharply criticised by misinformation researchers.

YouTube justified its action, saying that removing this content could have the “unintended effect of curtailing political speech”.

Twitter, now known as X, said in November it would no longer enforce its Covid-19 misinformation policy.

Since billionaire Musk’s turbulent acquisition of the platform in 2022, it has restored thousands of accounts that were once suspended for violations including spreading misinformation and introduced a paid verification system researchers say has served to boost conspiracy theorists.

In August, the platform said it would now allow paid political advertising from US candidates, reversing a previous ban and sparking concerns over misinformation and hate speech in next year’s election.

“Musk’s control over Twitter has helped usher in a new era of recklessness by large tech platforms,” said Ms Nora Benavidez from non-partisan group Free Press.

“We’re observing a significant rollback in concrete measures companies once had in place.”

Platforms are also under pressure from conservative US advocates, who accuse them of colluding with the government to censor or suppress right-leaning content under the guise of fact-checking.

“These companies think that if they just keep appeasing Republicans, they’ll just stop causing them problems, when all they’re doing is increasing their own vulnerability,” said Mr Berin Szoka, president of TechFreedom, a think-tank.

For years, Facebook’s algorithm automatically moved posts lower in the feed if they were flagged by one of the platform’s third-party fact-checking partners, including AFP, reducing the visibility of false or misleading content.

Facebook recently gave US users controls allowing them to move this content higher in their feeds if they wish, a potentially significant change the platform said would give users more power over its algorithm.

The hyper-polarised political climate in the US has made content moderation on social media platforms a hot-button issue.

Earlier in September, the US Supreme Court temporarily put on hold an order limiting the ability of President Joe Biden’s administration to contact social media companies to remove content it considers to be misinformation.

A lower court of Republican-nominated judges had issued that order, ruling that US officials went too far in their efforts to get platforms to censor certain posts.

Misinformation researchers from prominent institutions such as Stanford Internet Observatory also face a Republican-led congressional inquiry, as well as lawsuits from conservative activists who accuse them of promoting censorship – a charge they deny.

Tech sector downsizing that has gutted trust and safety teams and poor access to platform data have further added to their challenges.

“The public urgently needs to know how platforms are being used to manipulate the democratic process,” said Ms Ramya Krishnan from the Knight First Amendment Institute at Columbia University.

“Independent research is crucial to exposing these efforts, but platforms continue to get in the way by making it more costly and risky to do this work.” AFP
