A FACEBOOK bug led to the platform mistakenly showing users more harmful content for six months.

According to The Verge, content identified as misleading or problematic was prioritized in users’ feeds when it should have been hidden.

A Facebook bug led to the platform mistakenly showing users more harmful content. Credit: AFP

Internal documents show that the software bug was identified by engineers and took half a year to fix.

Facebook disputed the report, which was published Thursday, saying it “vastly overstated what this bug was.”

The glitch ultimately had “no meaningful, long-term impact on problematic content,” according to Joe Osborne, a spokesman for parent company Meta.

But it was serious enough for a group of Facebook employees to draft an internal report referring to a “massive ranking failure” of content.


In October, the employees noticed that some content that had been marked as questionable was nevertheless being favoured by the algorithm to be widely distributed in users’ News Feeds.

The content was flagged by external media – members of Facebook’s third-party fact-checking program.

“Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11,” The Verge reported.

But according to Osborne, the bug affected “only a very small number of views” of content.


That’s because “the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place,” Osborne explained.

He added that other mechanisms designed to limit views of “harmful” content remained in place, “including other demotions, fact-checking labels and violating content removals.”

Facebook’s fact-checking program launched in 2018 and aims to identify content that is harmful and misleading.

Under the program, Facebook pays to use fact checks from around 80 organisations, including media outlets and specialized fact-checkers, on its platform, WhatsApp and on Instagram.

Content rated “false” is downgraded in news feeds so fewer people will see it.

If someone tries to share that post, they are presented with an article explaining why it is misleading.


Those who still choose to share the post receive a notification with a link to the article. No posts are taken down.

Fact-checkers are free to choose how and what they wish to investigate.



This post first appeared on Thesun.co.uk

