Facebook has blamed a ‘technical issue’ for a drop in the number of child abuse images and videos it has blocked on the site over the last six months.

According to its Community Standards Enforcement Report, the company had a problem with its ‘media-matching’ technology, which identifies illegal uploads.

From January to March 2021, Facebook removed five million pieces of child abuse content, down from 5.4 million between October and December 2020.

But both quarters marked a massive slump in removals from the quarter before, when the company took down 12.4 million pieces between July and September 2020.

Facebook has now explained the huge drop in removals between Q3 and Q4 last year, a slump which means it potentially failed to stop millions of child abuse images and videos from appearing on its website.

‘In Q4, content actioned decreased due to a technical issue with our media-matching technology,’ Facebook said in the report. 

‘We resolved that issue, but from mid-Q1 we encountered a separate technical issue. 

‘We are in the process of addressing this and working to catch any content we may have missed.’ 

A graph from the social network’s Community Standards Enforcement Report, published on Wednesday, visualises the slump caused by the ‘technical issue’

Facebook tests tools to combat content showing child abuse on its site 

Facebook is making an effort to end child exploitation on its platform with new tools for detecting and removing such photos and videos.

The features include a pop-up message that appears when users search for terms associated with child exploitation, along with suggestions for seeking help to change the behaviour.

Another tool is aimed at stopping the spread of such content by informing users that attempting to share abusive content may disable their account.

Facebook said it teamed up with the National Center for Missing & Exploited Children (NCMEC) to investigate how and why people share child exploitative content on its main platform and Instagram.

Read more: Facebook tests new tools to detect and remove content showing child sexual abuse 

The ‘media-matching’ tool that Facebook said was to blame refers to its artificial intelligence-powered detection technology. 

It’s believed to work by matching new uploads with a database of child abuse content that has already been taken down. 
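
How the matching works in detail has not been disclosed. The sketch below illustrates only the general idea of checking new uploads against fingerprints of material that has already been removed; it assumes a simple exact-hash lookup, whereas production systems typically use perceptual hashes that tolerate resizing and re-encoding, and every name in it is hypothetical rather than part of Facebook’s actual system.

import hashlib

def fingerprint(data: bytes) -> str:
    # A real media-matching system would use a perceptual hash that survives
    # resizing and re-encoding; an exact SHA-256 digest is used here only to
    # keep the sketch self-contained.
    return hashlib.sha256(data).hexdigest()

# Hypothetical store of fingerprints of content that has already been taken down.
known_bad_hashes: set[str] = set()

def register_removed_content(data: bytes) -> None:
    # Add a removed item's fingerprint to the match database.
    known_bad_hashes.add(fingerprint(data))

def should_block(upload: bytes) -> bool:
    # A new upload is flagged if its fingerprint matches previously removed content.
    return fingerprint(upload) in known_bad_hashes

# Usage: once an item is registered, identical re-uploads are caught.
register_removed_content(b"previously removed media bytes")
print(should_block(b"previously removed media bytes"))  # True
print(should_block(b"new, unrelated media bytes"))      # False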

But the tardiness of Facebook’s announcement caused consternation for one child protection expert.

Andy Burrows, Head of Child Safety Online Policy at the NSPCC, told the Telegraph: ‘For the last two consecutive quarters, Facebook has taken down fewer than half of the child abuse content compared to the three months prior to that.

‘This is a significant reduction due to two separate technical issues which have not been explained and it’s the first we have heard about them.’ 

The Community Standards Enforcement Report, published on Wednesday, also revealed that in the first three months of 2021, Facebook took down 8.8 million pieces of bullying and harassment content, up from 6.3 million in the final quarter of last year.

Some 9.8 million pieces of organised hate content were also removed, up from 6.4 million in late 2020.

Meanwhile, 25.2 million pieces of hate speech were removed, down on the 26.9 million pieces removed in the last three months of 2020.

On Instagram (which Facebook owns), it took down 5.5 million pieces of bullying content – up from five million at the end of last year – as well as 324,500 pieces of organised hate content, up slightly on the previous quarter.

However, the amount of hate speech content removed from Instagram was also down slightly to 6.3 million compared with 6.6 million in the last quarter of 2020.

Facebook says: ‘We do not allow content that sexually exploits or endangers children on Facebook and Instagram’ 

The social media giant has previously admitted that its content review team’s ability to moderate content had been affected by the pandemic and that would continue to be the case globally until vaccines were more widely available.

Specifically on misinformation around Covid-19, Facebook said it had removed more than 18 million pieces of content from Facebook and Instagram for violating its policies on coronavirus misinformation and harm.  

Facebook, along with wider social media, has come under increased scrutiny during the pandemic over its approach to keeping users safe online and amid high-profile cases of online abuse, harassment, misinformation and hate speech.

The government is set to introduce its Online Safety Bill later this year, which will enforce stricter regulation around protecting young people online and harsh punishments for platforms found to be failing to meet a duty of care. 

The government recently published a draft of the upcoming Bill, which will enforce regulation around Facebook and other online platforms for the first time.

However, experts criticised the draft for a loophole that would potentially expose children to pornography websites, due to a lack of age verification checks. 

The Bill, which was published as a draft on May 12, only applies to sites or services that allow ‘user interactivity’ – in other words, sites allowing interactions between users or allowing users to upload content, like Facebook.  

Commercial pornography sites, such as Pornhub and YouPorn, could therefore ‘put themselves outside of the scope of the Bill’ by removing all user-generated content. 

The Bill will require social media and other platforms to remove and limit harmful content, with large fines for failing to protect users, enforced by the regulator Ofcom.

But the problem with the Bill is that it focuses on children ‘stumbling’ across pornography on social media – not on children who actively seek it out on dedicated porn sites.

Government reveals ‘landmark’ internet laws to curb hate and harmful content in Online Safety Bill draft 

Ofcom will have the power to fine social media firms and block access to sites under new ‘landmark’ internet laws aimed at tackling abusive and harmful content online.

On May 12, the government published the draft Online Safety Bill, which it says will help keep children safe online and combat racism and other abuse.

The Bill will require social media and other platforms to remove and limit harmful content, with large fines for failing to protect users.

The government has also included a deferred power making senior managers at firms criminally liable for failing to follow a new duty of care, which could be introduced at a later date, while provisions to tackle online scams and protect freedom of expression have also been included.

Pressure to more strictly regulate internet companies has grown in recent years amid increasing incidents of online abuse. 

A wide range of professional sports, athletes and organisations recently took part in a social media boycott in protest at alleged inaction by tech firms against online abuse.

As the new online regulator, Ofcom will be given the power to fine companies who fail to comply up to £18 million or 10 per cent of their annual global turnover, whichever is higher – a figure which could run into billions of pounds for larger companies.
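
As a purely illustrative worked example (the turnover figure below is invented, not any company’s actual figure), the ‘whichever is higher’ rule could be expressed as:

# Illustrative only: the proposed cap is the higher of £18 million or
# 10 per cent of annual global turnover.
def maximum_fine(annual_global_turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)

# A company with £60 billion of annual turnover (a made-up figure) would face
# a cap of £6 billion, far above the £18 million floor.
print(f"£{maximum_fine(60_000_000_000):,.0f}")  # £6,000,000,000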

Ofcom will also have the power to block access to sites, the government said.

The new rules, which are expected to be brought before Parliament in the coming months, are set to be the first major set of regulations for the internet anywhere in the world.

‘Today the UK shows global leadership with our ground-breaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world,’ Digital Secretary Oliver Dowden said.

Writing in the Daily Telegraph, he added: ‘What does all of that mean in the real world? It means a 13-year-old will no longer be able to access pornographic images on Twitter. YouTube will be banned from recommending videos promoting terrorist ideologies.

‘Criminal anti-semitic posts will need to be removed without delay, while platforms will have to stop the intolerable level of abuse that many women face in almost every single online setting.

‘And, of course, this legislation will make sure the internet is not a safe space for horrors such as child sexual abuse or terrorism.’

As part of the new duty of care rules, the largest tech companies and platforms will not only be expected to take action against the most dangerous content, but also take action against content that is lawful but still harmful, such as that linked to suicide and self-harm and misinformation.

The government said the deferred power to pursue criminal action against named senior managers would be introduced if tech companies fail to live up to their new responsibilities, with a review of the new rules set to take place two years after the legislation is introduced.

The proposed laws will also target online scams, requiring online firms to take responsibility for fraudulent user-generated content, including financial fraud schemes such as romance scams or fake investment opportunities where people are tricked into sending money to fake identities or companies.

And there are further provisions to protect what the government calls democratic content, which will forbid platforms from discriminating against particular political viewpoints, and allow certain types of content which would otherwise be banned if it is defined as ‘democratically important’.

‘This new legislation will force tech companies to report online child abuse on their platforms, giving our law enforcement agencies the evidence they need to bring these offenders to justice,’ Home Secretary Priti Patel said.

‘Ruthless criminals who defraud millions of people and sick individuals who exploit the most vulnerable in our society cannot be allowed to operate unimpeded, and we are unapologetic in going after them.

‘It’s time for tech companies to be held to account and to protect the British people from harm. If they fail to do so, they will face penalties.’

However, the NSPCC has warned that the draft Bill fails to offer the comprehensive protection that children should receive on social media.

The children’s charity said it believes the Bill fails to place responsibility on tech firms to address the cross-platform nature of abuse and is being undermined by not holding senior managers accountable immediately.

Sir Peter Wanless, chief executive of the NSPCC, said: ‘Government has the opportunity to deliver a transformative Online Safety Bill if they choose to make it work for children and families, not just what’s palatable to tech firms.

‘The ambition to achieve safety by design is the right one. But this landmark piece of legislation risks falling short if Oliver Dowden does not tackle the complexities of online abuse and fails to learn the lessons from other regulated sectors.

‘Successful regulation requires the powers and tools necessary to achieve the rhetoric.

‘Unless Government stands firm on their promise to put child safety front and centre of the Bill, children will continue to be exposed to harm and sexual abuse in their everyday lives which could have been avoided.’

Labour called the proposals ‘watered down and incomplete’ and said the new rules did ‘very little’ to ensure children are safe online.

Shadow culture secretary Jo Stevens said: ‘There is little to incentivise companies to prevent their platforms from being used for harmful practices.

‘The Bill, which will have taken the Government more than five years from its first promise to act to be published, is a wasted opportunity to put into place future proofed legislation to provide an effective and all-encompassing regulatory framework to keep people safe online.’

Source: PA 

This post first appeared on Dailymail.co.uk
