An investigation by the Senate Homeland Security Committee alleges that the FBI, the Department of Homeland Security and leading social media companies are not adequately addressing the growing threat of domestic terrorism, especially white supremacist and anti-government extremists.

In a 128-page report obtained by NBC News, the committee’s majority Democrats say federal law enforcement agencies have not appropriately allocated resources to match the metastasizing threat, and have failed to systematically track and report data on domestic terrorism incidents, as required by federal law.

“Unfortunately, our counterterrorism agencies have not effectively tracked the data that you need to measure this threat,” Sen. Gary Peters, D-Mich., who chairs the Senate Homeland Security and Governmental Affairs Committee, said Wednesday. “If they’re not tracking it, it’s likely they are not prioritizing our counterterrorism resources to effectively counter this threat.”

In a statement, the FBI said it is “agile” and adjusts resources to meet the latest threats, while DHS said that “addressing domestic violent extremism is a top priority” for the department.

Meta declined to comment, but a top executive, Nick Clegg, said last year, “The reality is, it’s not in Facebook’s interest — financially or reputationally — to continually turn up the temperature and push users towards ever more extreme content.”

A TikTok spokesperson said in a statement, “We believe that maintaining a safe and trusted platform is critical to our long-term success, which is why we are dedicated to identifying and removing content that incites or glorifies violence or promotes violent extremist organizations.”

A YouTube spokeswoman said the platform is acting to block extremist content. Twitter did not immediately respond to a request for comment.

The report found that the FBI and DHS continue to spend more on international terrorism, despite saying for years that domestic terrorism now poses a greater threat to Americans.


The investigation also found that social media companies “have failed to meaningfully address the growing presence of extremism on their platforms,” and that the business models of four leading social media outlets — Meta, TikTok, Twitter and YouTube — are based on maximizing user engagement, growth, and profits, which incentivizes increasingly extreme content.

“These companies point to the voluminous amount of violative content they remove from their platforms, but the investigation found the role their own recommendation algorithms and other features and products play in the proliferation of that content in the first place,” the report said. “Absent new incentives or regulation, extremist content will continue to proliferate on these platforms and companies’ content moderation efforts will continue to be inadequate to stop its spread.”
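To make the dynamic the report describes concrete, consider a deliberately simplified sketch of a feed ranked purely by predicted engagement. This is illustrative only: the post fields, weights, and example posts are invented and do not represent any platform’s actual recommendation system.

```python
# Illustrative only: a toy engagement-ranked feed. The fields, weights
# and example posts are invented; this is not any platform's actual
# recommendation system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float    # model-estimated chance a user clicks
    predicted_shares: float    # model-estimated chance a user shares
    predicted_comments: float  # model-estimated chance a user comments

def engagement_score(post: Post) -> float:
    """Score a post purely by expected engagement, the objective the
    report says these platforms' business models optimize for."""
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.predicted_comments)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order posts from highest to lowest expected engagement. Nothing
    in this objective penalizes harmful or extreme material, so content
    that reliably provokes reactions rises to the top."""
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured-report", predicted_clicks=0.10,
         predicted_shares=0.02, predicted_comments=0.03),
    Post("outrage-bait", predicted_clicks=0.25,
         predicted_shares=0.12, predicted_comments=0.15),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'measured-report']
```

Because the objective rewards engagement alone, a post that reliably provokes clicks and shares outranks a measured one regardless of its content, which is the incentive problem the report identifies.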

The report’s analysis of the FBI and DHS response to domestic terrorism appears to have been hampered by a lack of data. For example, the committee said neither agency provided complete information on how many employees and how much money were devoted to combating domestic terrorism, despite a 2020 law requiring them to do so.

Although experts say the threat from domestic violent extremists has been building for years, the committee found that arrests and federal charges in domestic terrorism cases involving the FBI had been steadily declining before the Jan. 6, 2021, attack on the Capitol. Arrests and charges in domestic extremism cases have since spiked, but the bulk of them are related to the Capitol riot investigation.

The report suggests that despite prioritizing domestic violent extremism in recent years, the FBI appears quicker to call an attack terrorism when it is carried out in the name of jihadist ideology than in the name of white supremacist beliefs.

Both DHS and the FBI define “homegrown violent extremists” as terrorists inspired by foreign ideologies. The report points out that the people accused of killing 23 people in El Paso, Texas, and 10 people in Buffalo, New York, were not given that designation, though they reportedly drew inspiration from the terrorist attack in Christchurch, New Zealand, as well as from racist and antisemitic ideologies.

At the same time, the FBI categorized a Muslim man who carried out a July 2015 mass shooting in Tennessee that killed four U.S. Marines and a Navy sailor as a homegrown violent extremist, “despite not having information on which international terrorist organization supposedly inspired the attack,” the report said.

The report said a change in how the FBI categorizes domestic terrorism ideologies has hindered understanding of the problem. In 2017, the FBI created a new domestic terrorism category called “Black Identity Extremists,” but later stopped using it. By 2019, the FBI had combined all forms of racially motivated extremism, including the pre-existing category of “White Supremacist Violence,” into a single category called “Racially Motivated Violent Extremists.”

“This change obscures the full scope of white supremacist terrorist attacks, and it has prevented the federal government from accurately measuring domestic terrorism threats,” the report said.

The report also criticized the FBI and DHS for being overly cautious in seeking out threat intelligence posted publicly on social media. The FBI has said that a torrent of threat information leading up to the Jan. 6 attack was not specific enough to have prompted action.

“Agencies have been slow to adapt to the open planning of extremist violence online, leading to incomplete threat assessments,” the report said.


Peters added in a phone call with reporters: “The FBI and DHS must do a better job” monitoring threat information on social media. Before the Jan. 6 attack, he said, “There was a lot of open-source material that was out there indicating that people were planning to come to the Capitol and engage in violent acts. … These agencies have to be quicker on their feet.”

The report said the FBI uses a company called ZeroFox, which identifies potentially concerning social media posts based on specific search terms identified and approved by the bureau. ZeroFox then generates automatic alerts for the FBI to investigate further. Each field office decides whether and how to use the data, the report said, and as a result the data is not used consistently by FBI agents across the country.
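The report describes that pipeline only at a high level. As a rough sketch of keyword-based alerting in general, and emphatically not ZeroFox’s actual software or API, the core logic might look something like the following; the term list and all names are invented for illustration.

```python
# Illustrative only: a minimal keyword-alert pipeline of the general
# kind the report describes. This is not ZeroFox's actual software or
# API; the term list and all names here are invented.
import re
from dataclasses import dataclass

# Hypothetical stand-in for the "specific search terms identified and
# approved by the bureau" that the report mentions.
APPROVED_TERMS = ["example term one", "example term two"]

@dataclass
class Alert:
    post_id: str
    matched_term: str
    excerpt: str

def scan_posts(posts: dict[str, str]) -> list[Alert]:
    """Flag public posts containing any approved term, producing one
    alert per match for a human analyst to review. Whole-word matching
    keeps a term from firing inside an unrelated longer word."""
    alerts = []
    for post_id, text in posts.items():
        for term in APPROVED_TERMS:
            if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
                alerts.append(Alert(post_id, term, text[:80]))
    return alerts

alerts = scan_posts({
    "post-1": "a public post containing example term one in context",
    "post-2": "unrelated, benign chatter",
})
print(alerts)  # only post-1 is flagged
```

As the report notes, the weak link is less the matching itself than what happens downstream, since each field office decides independently whether and how to act on the alerts.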

DHS, meanwhile, “has failed to effectively utilize” its legal authority to monitor public social media. The report noted that the DHS inspector general found that DHS’s intelligence office “identified specific threat information related to the events on January 6, 2021, but did not issue any intelligence products about these threats until January 8, 2021,” despite communicating internally about security concerns.

In a statement, DHS said it “engages in a community-based approach to prevent terrorism and targeted violence, and does so in ways that protect privacy, civil rights, and civil liberties, and that adhere to all applicable laws. To that end, DHS regularly shares information regarding the heightened threat environment with federal, state, local, tribal, and territorial officials to ensure the safety and security of all communities across the country.”

The report is unsparing in its criticism of major social media platforms, which it says are a hotbed of extremist content.

It cited a study by the National Consortium for the Study of Terrorism and Responses to Terrorism, which found that in 2016, use of social media played a role in the radicalization processes of nearly 90% of U.S. extremist plots and activities. The study found that social media “has become an increasingly important tool for extremists to disseminate content, share ideas, and facilitate relationships.”

The committee requested information from Meta, TikTok, Twitter and YouTube, which it said have a combined footprint that reaches nearly 75% of Americans and several billion people worldwide.


The bottom line, the report found: “Although Meta, TikTok, Twitter, and YouTube have a range of policies aimed at addressing extremist and hateful content on their platforms … extreme content is still prevalent across these platforms.”

The report added:

  • Meta has been aware for years of the harm its products cause. Internal documents provided by a Meta whistleblower show that the platform’s recommendation features are designed to serve users the content they are most likely to engage with, and therefore often drive the spread of harmful content, according to internal Meta research and external researchers. Yet Meta has in some instances chosen not to change the features and products that determine what content is prioritized for viewers, focusing instead on taking down content that violates its rules, often after it has spread.
  • TikTok recommends videos based on user engagement, in particular the amount of time spent consuming individual pieces of content. Outside researchers have found that TikTok’s algorithm pushes users toward more extreme content. In an interview with committee staff, TikTok’s chief operating officer said she did not believe the company had conducted research into whether its algorithms promote extreme content.
  • Twitter generates a list of recommended accounts to follow based on a user’s engagement with similar accounts and topics, creating a “rabbit hole” effect that can promote conspiracy theories and extreme content. Twitter was central to the spread of QAnon conspiracy theories and the “pizzagate” conspiracy theory, which falsely alleged that public officials were linked to a human trafficking and child sex ring run out of a pizzeria in Washington, D.C. The Taliban and white supremacists have used Twitter’s Spaces feature to spread extremist content to hundreds of users.
  • Over 70% of viewing time on YouTube is generated by the platform’s recommendation system, which is based on users’ engagement on the platform and activity on Google. Research published by MIT Technology Review found that “users consistently migrate from milder to more extreme content” on YouTube. In an interview with committee staff, YouTube’s chief product officer could not point to internal research evaluating whether the platform recommends extreme content.

YouTube spokeswoman Ivy Choi said in an email: “Responsibility is our top priority and informs every product and policy decision we make. We have established policies against hate speech, harmful conspiracies and violent extremism, and in Q2 2022, only 9 to 11 views out of every 10,000 came from violative content. Additionally, our recommendation system surfaces authoritative content in search results and the Watch Next panel, including for search queries related to violent extremism. While study in this area continues, a number of researchers have found that our recommendations aren’t steering viewers towards extreme content.”

The report’s policy recommendation to address these issues is vague.

“Congress and regulators should create accountability mechanisms for social media companies to prioritize safety in the development of their products and features,” the report said, adding that lawmakers should “consider removing current protections in law that allow companies, without meaningful consequences, to continue to prioritize engagement on their platforms even if that results in knowingly promoting extreme content.”
