Graphic videos of animal abuse have circulated widely on Twitter in recent weeks, generating outrage and renewed concern over the platform’s moderation practices.
One such video, in which a kitten appears to be placed inside a blender and then killed, has become so notorious that reactions to it have become their own genre of internet content.
Laura Clemens, 46, said her 11-year-old son came home from his school in London two weeks ago and asked if she had seen the video.
“There’s something about a cat in a blender,” Clemens remembered her son saying.
Clemens said she went on Twitter and searched for “cat,” and the search box suggested searching for “cat in a blender.”
Clemens said that when she clicked on the suggested search term, a gruesome video of what appeared to be a kitten being killed inside a blender appeared immediately. For users who have not manually turned off autoplay, the video begins playing on its own. NBC News was able to replicate the same process to surface the video on Wednesday.
Clemens said she is grateful her child asked her about the video instead of simply going on Twitter and typing in the word “cat” by himself.
“I’m glad that my child has talked to me, but there must be lots of parents whose kids just look it up,” she said.
The spread of the video, as well as its presence in Twitter’s suggested searches, is part of a worrying trend of animal cruelty videos that have littered the social media platform following Elon Musk’s takeover, which included mass layoffs and deep cuts to the company’s content moderation and safety teams.
Last weekend, gory videos from two violent events in Texas spread on Twitter, with some users saying that the images had been pushed into the platform’s algorithmic “For You” feed.
The animal abuse videos appear to predate those videos. Various users have tweeted that they have seen the cat video, some as early as the beginning of May, with several trying to get Musk’s attention on the issue. Clemens said she flagged the video on May 3 to Twitter’s support account and to Ella Irwin, the vice president of trust and safety at Twitter and one of Musk’s closest advisers.
“Young children know this has been trending on your site. My little one hasn’t seen it but knows about it. It should not be an autofill suggestion,” Clemens wrote.
Neither Irwin nor Twitter Safety responded to the tweet, Clemens said.
Yoel Roth, Twitter’s former head of trust and safety, told NBC News that he believes the company likely dismantled a series of safeguards meant to stop these kinds of autocomplete problems.
Roth explained that autocompleted search results on Twitter were internally known as “type-ahead search” and that the company had built a system to prevent illegal, illicit and dangerous content from appearing as autocomplete suggestions.
“There is an extensive, well-built and maintained list of things that filtered type-ahead search, and a lot of it was constructed with wildcards and regular expressions,” Roth said.
Roth said there was a multistep process to prevent gore and death videos from appearing in autocompleted search suggestions: a combination of automated and human moderation that flagged animal cruelty and violent videos before they could begin to appear automatically in search results.
“Type-ahead search was really not easy to break. These are longstanding systems with multiple layers of redundancy,” said Roth. “If it just stops working, it almost defies probability.”
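Roth did not describe the system’s internals beyond the quotes above, but a filter built from wildcards and regular expressions can be pictured roughly as a denylist applied to candidate suggestions before they are shown. The sketch below is illustrative only; the patterns, names and structure are invented for the example and are not Twitter’s actual system.

```python
import re

# Hypothetical denylist of regular-expression patterns, in the spirit of the
# wildcard/regex filters Roth describes. These rules are invented for this
# example and do not reflect Twitter's real list.
BLOCKED_PATTERNS = [
    re.compile(r"\bcat\b.*\bblender\b", re.IGNORECASE),
    re.compile(r"\bgore\b", re.IGNORECASE),
]

def filter_type_ahead(suggestions):
    """Drop any autocomplete suggestion that matches a blocked pattern."""
    return [
        s for s in suggestions
        if not any(p.search(s) for p in BLOCKED_PATTERNS)
    ]

# Example: raw candidate suggestions for the query "cat"
raw = ["cat videos", "cat in a blender", "cat memes"]
print(filter_type_ahead(raw))  # ['cat videos', 'cat memes']
```

In a setup like this, disabling or emptying the pattern list would let previously blocked phrases reappear as suggestions, which is consistent with Roth’s point that such a system does not simply stop working on its own.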
Autocomplete suggestions in search bars are a common feature on many social media platforms, and they can surface disturbing content. Searches for “dog” and “cat” autocompleted to suggestions for viral animal cruelty videos in Twitter’s search box Thursday, when NBC News contacted the company for comment. Twitter’s press account automatically responded with a poop emoji, the company’s standard response for the past month.
As of Friday afternoon, Twitter appeared to have turned off autocomplete suggestions in its search bar.