Apple's CSAM scanning could be very dangerous, say researchers

A couple of weeks ago, Apple announced a new measure coming to all iCloud-enabled iPhones, meant to help catch predators in possession of photos depicting child sexual abuse. This has unfortunately long been a prevalent problem, and there is only so much we have been able to do to fight it.

Apple's way of helping out is the imminent rollout of automatic scanning of any and all personal photos uploaded to iCloud accounts within the United States (for now). In practice this covers virtually all photos, as most iPhone owners keep iCloud Photos turned on.

Once a photo is scanned, its hash value will be checked against the hashes of images in a database of known CSAM (Child Sexual Abuse Material). If any identical or even close matches are found, the photo will be flagged. For an account to be flagged and reported to the National Center for Missing and Exploited Children, however, around 30 CSAM matches would need to be found on that account.
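
To make the mechanics concrete, here is a minimal, purely illustrative sketch of how this kind of hash-set matching and per-account threshold could work. It is not Apple's implementation: the hash size, the distance cutoff, and the plain in-memory hash set are assumptions for illustration, whereas Apple's actual system uses its NeuralHash algorithm together with cryptographic private set intersection so that matches stay hidden until the threshold is crossed.

```python
# Illustrative sketch only, NOT Apple's implementation. The 64-bit hashes,
# the Hamming-distance cutoff, and the plain in-memory hash set are assumed
# for demonstration; Apple's system uses NeuralHash plus cryptographic
# private set intersection and threshold secret sharing.

REPORT_THRESHOLD = 30  # roughly the match count Apple cited before review


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return (a ^ b).bit_count()  # requires Python 3.10+


def is_match(photo_hash: int, known_hashes: set[int], max_distance: int = 4) -> bool:
    """Flag a photo whose hash is identical or very close to a known CSAM hash."""
    return any(hamming_distance(photo_hash, h) <= max_distance for h in known_hashes)


def account_should_be_reported(photo_hashes: list[int], known_hashes: set[int]) -> bool:
    """Report an account only once the number of matched photos crosses the threshold."""
    matches = sum(1 for h in photo_hashes if is_match(h, known_hashes))
    return matches >= REPORT_THRESHOLD
```

The key point is that an individual match never triggers a report on its own; only the aggregate count per account does.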

According to Apple, this design yields "less than a one in 1 trillion chance per year of incorrectly flagging a given account."
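
For a rough sense of why a 30-match threshold makes a figure like that plausible, here is a toy back-of-the-envelope calculation. The per-image false-match rate and photo count below are hypothetical numbers chosen for illustration, not figures from Apple's threat model; the point is simply that requiring many independent false matches drives the per-account probability down astronomically.

```python
# Toy binomial model, not Apple's published analysis. Assume each of n uploaded
# photos independently false-matches with (hypothetical) probability p, and an
# account is only flagged once `threshold` matches accumulate.
from math import comb


def prob_false_account_flag(n: int = 10_000, p: float = 1e-4, threshold: int = 30) -> float:
    """P(at least `threshold` false matches among n photos) under the binomial model."""
    # Start at P(X = threshold) and walk up the tail with the recurrence
    # P(X = k + 1) = P(X = k) * (n - k) / (k + 1) * p / (1 - p),
    # stopping once further terms no longer matter.
    term = comb(n, threshold) * p**threshold * (1 - p) ** (n - threshold)
    total = term
    for k in range(threshold, n):
        term *= (n - k) / (k + 1) * p / (1 - p)
        if term < total * 1e-18:
            break
        total += term
    return total


print(prob_false_account_flag())  # on the order of 1e-33 with these assumed numbers
```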


Widespread backlash came immediately 


Apple's decision to implement this kind of on-device surveillance received a huge amount of backlash immediately, as it threatens to seriously compromise people's fundamental right to privacy and is inherently vulnerable to gross exploitation and misuse. Thousands of people have already signed a letter of opposition to the impending policy, and even the German Parliament contacted Tim Cook asking him to reconsider.

The integrity and reputation for privacy on which Apple has built its brand could be compromised if it walks back its industry-standard end-to-end encryption policy in order to follow through with this plan. People and governments across the globe naturally have a right to be concerned at the prospect of such a huge back door opening up to illicit surveillance, and a whole Pandora's box of possible evils.

Princeton joins the fray, calling out the dangers in Apple's system


Two researchers from Princeton University recently contributed to the discussion with in-depth knowledge and first-hand experience of such systems. Anunay Kulshrestha and Jonathan Mayer had previously built a prototype of a surveillance system very similar to Apple's in function and purpose, designed to identify child abuse material through perceptual hash matching (PHM).
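
To illustrate what perceptual hash matching means in practice, below is a heavily simplified "average hash" sketch. It is not the researchers' algorithm or Apple's NeuralHash, just a toy conveying the core idea: visually similar images map to bit strings that differ in only a few positions, so near-duplicates can be detected by comparing hash distances.

```python
# Minimal "average hash" toy, far simpler than the PHM prototype or NeuralHash,
# shown only to convey the idea: similar pictures -> similar bit strings.

def average_hash(gray_pixels: list[list[int]]) -> int:
    """Hash a small grayscale image (values 0-255): one bit per pixel,
    set to 1 if the pixel is brighter than the image's mean brightness."""
    flat = [p for row in gray_pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    return (a ^ b).bit_count()


# Two nearly identical 4x4 "images": the second has one pixel slightly brightened.
original = [[10, 10, 200, 200] for _ in range(4)]
tweaked = [row[:] for row in original]
tweaked[0][0] += 5

# The hashes land within a tiny Hamming distance, so a matcher would treat
# them as the same underlying picture.
print(hamming_distance(average_hash(original), average_hash(tweaked)))
```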

However, they quickly ran into an ethical problem: no PHM system designed "to counter harmful media, such as CSAM and extremist content" comes without the danger of also being leveraged for immoral purposes, should it fall into the wrong hands.


In other words, a system that allows private communication to be monitored and handed over to authorities at any given time could become an easy tool for a dictatorship. In the end, how a PHM system is implemented and leveraged lies entirely at the discretion of its human controller, who "will have to curate and validate [that hash set B does exclusively contain harmful media]."

What constitutes harmful media also lacks an objective definition, and could be expanded and twisted to include anything the powers that be choose to deem "harmful", compromising people's right to free speech or suppressing citizens even further in countries where that freedom is not considered a human right.

Apple's "one-in-a-trillion" false detection rate is also called into question. While the tech giant makes it sound like an impossibility, the Princeton researchers believe that false positives are a legitimate possibility (albeit with low probability) that could compromise innocent individuals' safety. "A Client's media [could] match a value in the hash set even though there is no perceptual similarity," they write.

"In these instances, an innocent E2EE Client may—depending on the content moderation response—lose communications confidentiality, have their account terminated, or become the
subject of a law enforcement inquiry." Needless to say, this is a ridiculously undesirable scenario, the risk of which should never exist.
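
The concern is easy to see at the code level: a matcher of this kind only ever sees hash distances, never the photos themselves, so any image whose hash happens to land near a database entry gets flagged. The values below are made up, standing in for such a coincidental (or adversarially crafted) collision.

```python
# Toy illustration of the false-positive concern: the hashes are fabricated,
# with `colliding_hash` standing in for the hash of a completely unrelated
# photo that happens to land close to a database entry.

def hamming_distance(a: int, b: int) -> int:
    return (a ^ b).bit_count()

known_hash = 0xC3A5_5A3C_0F0F_F0F0      # pretend entry in the harmful-media hash set
colliding_hash = known_hash ^ 0b101     # differs in only two bits
print(hamming_distance(known_hash, colliding_hash) <= 4)  # True -> innocent photo flagged
```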

Apple's own employees have also been voicing concern over the dangers inherent in such a system. Apple has already responded to the criticism once, promising that it will never allow this photo surveillance system to fall under the control of any government or external entity, and that it will only ever be used to fight the possession of child abuse material.

Naturally, this promise alone is far from satisfactory, and the backlash seems unlikely to slow down until Apple commits to end-to-end encryption, with no one other than the owner having access to any photos or data stored on their iPhone or in iCloud, which at this point has become an extension of nearly every iPhone in circulation.
