Researchers sifting through social media content on the Israel-Hamas conflict say it’s getting harder to verify information and track the spread of misleading material, adding to the digital fog of war.

As misinformation and violent content surrounding the war proliferates online, social media companies’ pullbacks in moderation and other policy shifts have made it “close to impossible” to do the work researchers were able to do less than a year ago, said Rebekah Tromble, director of George Washington University’s Institute for Data, Democracy and Politics.

“It has become much more difficult for researchers to collect and analyze meaningful data to understand what’s actually happening on any of these platforms,” she said.


Much attention has focused on X, formerly known as Twitter, which has made significant changes since Elon Musk bought the company for $44 billion late last year.

In the days after Hamas’ Oct. 7 attack, researchers flagged dozens of accounts pushing a coordinated disinformation campaign related to the war, and a separate report from the Tech Transparency Project found Hamas has used premium accounts on X to spread propaganda videos.

The latter issue comes after X began offering blue checkmarks to premium users for subscriptions starting at $8 a month, rather than applying the badge to those whose identities it had verified. That has made it harder to distinguish the accounts of journalists, public figures and institutions from potential impostors, experts say.

“One of the things that is touted for that [premium] service is that you get prioritized algorithmic ranking and searches,” said TTP Director Katie Paul. Hamas propaganda is getting the same treatment, she said, “which is making it even easier to find these videos that are also being monetized by the platform.”

X is far from the only major social media company coming under scrutiny during the conflict. Paul said X used to be an industry leader in combating online misinformation, but in the past year it has spearheaded a movement toward a more hands-off approach.

“That leadership role has remained, but in the reverse direction,” said Paul, adding that the Hamas videos highlight what she described as platforms’ business incentives to embrace looser content moderation. “Companies have cut costs by laying off thousands of moderators, all while continuing to monetize harmful content that perpetuates on their platforms.”

Paul pointed to ads that ran alongside Facebook search results related to the 2022 Buffalo mass shooting video while it circulated online, as well as findings by TTP and the Anti-Defamation League that YouTube previously auto-generated “art tracks,” or music with static images, for white power content that it monetized with ads.

A spokesperson for Meta, which owns Facebook and Instagram, declined to comment on the Buffalo incident. The company said at the time that it was committed to protecting users from encountering violent content. YouTube said in a statement it doesn’t want to profit from hate and has since “terminated several YouTube channels noted in ADL’s report.”

X responded with an automated message, “Busy now, please check back later.”

The deep cuts to “trust and safety” teams at many major platforms, which came amid a broader wave of tech industry layoffs beginning late last year, drew warnings at the time about backsliding on efforts to police abusive content — especially during major global crises.


Some social media companies have changed their moderation policies since then, researchers say, and existing rules are sometimes being enforced differently or unevenly.

“Today in conflict situations, information is one of the most important weapons,” said Claire Wardle, co-director of the Information Futures Lab at Brown University. Many are now successfully pushing “false narratives to support their cause,” she said, but “we’re left being completely unclear what’s really happening on the ground.”

Over the past year, Reddit joined X in ending free access to its application programming interface, or API, a tool that allows third parties to gather more detailed information from an app than what’s available from its user-facing features. That has added a hurdle for researchers tracking abusive content.

The most basic access to X’s API now starts at $100 a month; enterprise access starts at $42,000 a month. Reddit’s fee structure is geared toward large-scale data collection.
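For a sense of what now sits behind those paywalls: researchers typically pull posts programmatically rather than by scrolling feeds. The sketch below builds a request against X’s public v2 recent-search endpoint; the search query is purely illustrative, and running the request requires a bearer token from one of the paid tiers described above.

```python
import os
import urllib.parse

# X's v2 recent-search endpoint; access requires a paid API tier.
ENDPOINT = "https://api.twitter.com/2/tweets/search/recent"

def build_search_url(query: str, max_results: int = 10) -> str:
    """Compose the request URL for a recent-search query."""
    params = {"query": query, "max_results": max_results}
    return ENDPOINT + "?" + urllib.parse.urlencode(params)

# Illustrative query: link-bearing posts mentioning the conflict, no retweets.
url = build_search_url("(israel OR hamas) has:links -is:retweet")
print(url)

# Only attempt the network call if a (paid-tier) token is configured.
token = os.environ.get("X_BEARER_TOKEN")
if token:
    import urllib.request
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        print(resp.read()[:200])
```

Before the pricing changes, queries like this could be run at scale on free academic tiers; the same request now costs researchers at minimum $100 a month in subscription fees.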

Some major platforms, such as YouTube and Facebook, have long offered limited API access, and others have recently expanded theirs. TikTok launched a research API earlier this year in the U.S. as part of a broader transparency push, after fielding national security concerns from Western authorities over its Chinese parent company, ByteDance.

Reddit said its safety teams are monitoring for policy violations during the war, including content posted by legally designated terrorist groups.

TikTok said it has added “resources to help prevent violent, hateful or misleading content on our platform” and is working with fact-checkers “to help assess the accuracy of content in this rapidly changing environment.”

YouTube said it has already removed thousands of harmful videos and is “working around the clock” to “take action quickly” against abusive activity.

“My biggest worry is the offline consequence,” Nora Benavidez, senior counsel and director of digital justice at the media watchdog Free Press, said. “Real people will suffer more because they are desperate for credible information quickly. They soak in what they see from platforms, and the platforms have largely abandoned, and are in the process of abandoning, their promises to keep their environments healthy.”


Another obstacle during the current conflict, Tromble said, is that Meta has allowed key tools such as CrowdTangle to degrade.

“Journalists and researchers, both in academia and civil society, used [CrowdTangle] extensively to study and understand the spread of mis- and disinformation and other sorts of problematic content,” Tromble said. “The team behind that tool is no longer at Meta and its features aren’t being maintained, and it’s just becoming worse and worse to use.”

That change and others across social media mean “we simply don’t have nearly as much high-quality verifiable information to inform decision making,” Tromble said. Where once researchers could sift through data in real time and “share that with law enforcement and executive agencies” relatively quickly, “that is effectively impossible now.”

The Meta spokesperson declined to comment on CrowdTangle but pointed to the company’s statement Friday that it is working to intercept and moderate misinformation and graphic content involving the Israel-Hamas war. The company, which has rolled out additional research tools this year, said it has “removed seven times as many pieces of content” for violating its policies compared with the two months preceding the Hamas attack.

Resources remain tight for examining how social media content impacts the public, said Zeve Sanderson, founding executive director at New York University’s Center for Social Media and Politics.

“Researchers really don’t have either a wide or deep perspective onto the platforms,” he said. “If you want to understand how those pieces of misinformation are fitting into an overall information ecosystem at a particular moment in time, that’s where the current data-access landscape is especially limiting.”

This article originally appeared on NBCNews.com.
