For years, tech companies have been torn between two impulses: the need to encrypt their users' data to protect their privacy, and the need to detect the worst sorts of abuse on their platforms. Now Apple is debuting a new cryptographic system that seeks to thread that needle, detecting child abuse imagery stored on iCloud without, in theory, introducing new forms of privacy invasion. In doing so, it has also driven a wedge between privacy and cryptography experts who see its work as an innovative new solution and those who see it as a dangerous capitulation to government surveillance.

Today Apple introduced a new set of technological measures in iMessage, iCloud, Siri, and search, all of which the company says are designed to prevent the abuse of children. A new opt-in setting in family iCloud accounts will use machine learning to detect nudity in images sent in iMessage. The system can also block those images from being sent or received, display warnings, and in some cases alert parents that a child viewed or sent them. Siri and search will now display a warning if they detect that someone is searching for or viewing child sexual abuse materials, also known as CSAM, and will offer options to seek help for that behavior or to report what was found.

But in Apple’s most technically innovative—and controversial—new feature, iPhones, iPads, and Macs will now also integrate a new system that checks images uploaded to iCloud in the US against known child sexual abuse images. That feature will use a cryptographic process that takes place partly on the device and partly on Apple’s servers to detect those images and report them to the National Center for Missing and Exploited Children, or NCMEC, and ultimately to US law enforcement.

Apple argues that none of those new features for dealing with CSAM endangers user privacy—that even the iCloud detection mechanism will use clever cryptography to prevent Apple from accessing any visible images that aren’t CSAM. The system was designed and analyzed in collaboration with Stanford University cryptographer Dan Boneh, and Apple’s announcement of the feature includes endorsements from several other well-known cryptography experts.

“I believe that the Apple PSI system provides an excellent balance between privacy and utility, and will be extremely helpful in identifying CSAM content while maintaining a high level of user privacy and keeping false positives to a minimum,” Benny Pinkas, a cryptographer at Israel’s Bar-Ilan University who reviewed Apple’s system, wrote in a statement to WIRED.

Children’s safety groups, for their part, also immediately applauded Apple’s moves, arguing they strike a necessary balance that “brings us a step closer to justice for survivors whose most traumatic moments are disseminated online,” as Julie Cordua, the CEO of the child safety advocacy group Thorn wrote in a statement to WIRED.

Other cloud storage providers, from Microsoft to Dropbox, already scan images uploaded to their servers for CSAM. But by adding any sort of image analysis to user devices, some privacy critics argue, Apple has also taken a step toward a troubling new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement.

“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptography software firm Symbolic Software. “I definitely will be switching to an Android phone if this continues.”

Apple’s new system isn’t a straightforward scan of user images, either on their devices or on Apple’s iCloud servers. Instead it’s a clever—and complex—new form of image analysis designed to prevent Apple from ever seeing those photos unless they’re already determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a “hash” of all images a user sends to iCloud, converting the files into strings of characters that are uniquely derived from those images. Then, like older systems of CSAM detection such as PhotoDNA, it compares them with a vast collection of known CSAM image hashes provided by NCMEC to find any matches.
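In rough terms, the matching step works like the sketch below. This is only an illustration, not Apple's code: it uses an ordinary cryptographic hash (SHA-256) as a stand-in, while systems like PhotoDNA and NeuralHash rely on perceptual hashes, and Apple's real comparison happens inside a cryptographic protocol rather than in the clear.

```python
# Illustrative sketch only -- not Apple's implementation. SHA-256 stands in
# for a perceptual hash, and the hash list here is a hypothetical placeholder.
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Derive a fixed-length string uniquely determined by the image's contents."""
    return hashlib.sha256(image_bytes).hexdigest()

def find_known_matches(uploads: dict[str, bytes], known_hashes: set[str]) -> list[str]:
    """Return the names of uploaded images whose fingerprints appear in the known-hash set."""
    return [name for name, data in uploads.items()
            if image_fingerprint(data) in known_hashes]
```

A cryptographic hash like this one matches only byte-for-byte identical files; changing a single pixel produces a completely different string, which is why real detection systems use perceptual hashing instead.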

Apple is also using a new form of hashing it calls NeuralHash, which the company says can match images despite alterations like cropping or colorization. Just as crucially, to prevent evasion, the system never actually downloads those NCMEC hashes to a user’s device. Instead, it uses some cryptographic tricks to convert them into a so-called blind database that’s downloaded to the user’s phone or PC, containing seemingly meaningless strings of characters derived from those hashes. That blinding prevents anyone from obtaining the hashes and using them to skirt the system’s detection.
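The sketch below shows only the one-way property that blinding is meant to provide, using a keyed hash (HMAC) with a hypothetical server-side secret as a deliberately simplified stand-in. Apple's actual design, the private set intersection scheme the experts above refer to, goes further, letting the device participate in matching without ever learning whether a match occurred.

```python
# Simplified illustration of "blinding" -- not Apple's protocol. An HMAC with a
# hypothetical server-held secret stands in for Apple's more sophisticated blinding step.
import hmac, hashlib, os

# Hypothetical placeholder for the NCMEC-provided hash list (kept server-side).
known_csam_hashes = {bytes.fromhex("aa" * 32), bytes.fromhex("bb" * 32)}

server_secret = os.urandom(32)  # known only to the server, never shipped to devices

def blind(known_hash: bytes) -> bytes:
    """Map a known hash to an opaque token that is meaningless without the secret."""
    return hmac.new(server_secret, known_hash, hashlib.sha256).digest()

# Devices receive only these blinded tokens, so extracting the on-device database
# reveals nothing about the underlying hashes and can't be used to test images offline.
blinded_database = {blind(h) for h in known_csam_hashes}
```

In this simplified picture, a device could not even check its own images against the blinded tokens; Apple's full protocol addresses that with additional cryptography on both the device and the server.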
