Explained: Why Apple’s child abuse AI image-scanning feature is controversial

“Privacy is a fundamental human right. At Apple, it’s also one of our core values. Your devices are important to so many parts of your life. What you share from those experiences, and who you share it with, should be up to you.”

That’s straight from the Apple website and, up until now, most users have had little reason to doubt the assertion.

It’s a stance the company maintained even when the reason for an entity wanting to unlock an iPhone was compelling.

In 2016, Apple refused to unlock the iPhone of a terrorist for the FBI, as it believed that creating a back door into the phone would weaken security and could be exploited by malicious actors (in the end, the FBI turned to an Australian security firm instead).

Despite that, vulnerabilities were exploited in the recent NSO attack – which I wrote about here – to compromise the iPhones of activists in sophisticated and targeted attacks. Apple issued a fix within a few days, and only a handful of iPhones were affected.

Yes, that put a dent in Apple’s reputation. But until last week’s announcement that new updates to iOS and iPadOS will automatically match iCloud Photos accounts against known CSAM (Child Sexual Abuse Material), using a list of image hashes compiled by child safety groups, most Apple users still bought into Apple’s privacy-first spiel.

Even worse for many was the news that the scanning of iCloud photos would be done on your device; so much for “who you share it with, should be up to you.”

As Matthew Green, a cryptography researcher at Johns Hopkins University, tweeted: “This sort of tool can be a boon for finding child pornography in people’s phones. But imagine what it could do in the hands of an authoritarian government?”

He added that even if you believe Apple won’t allow these tools to be misused, there is still a lot to be concerned about, as “the systems rely on a database of ‘problematic media hashes’ that you, as a consumer, can’t review.”

Others were concerned that CSAM hashes could be maliciously planted on an iPhone, or that the CSAM hash database – essentially a list of numbers corresponding to known illegal images, which would sit on every iPhone once updated – could be reverse engineered in some way.
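In rough terms, this kind of matching is just set membership: compute a fingerprint of each photo and check it against the list. Below is a minimal, purely illustrative sketch in Python using an ordinary cryptographic hash; Apple’s actual system uses a perceptual “NeuralHash” plus cryptographic blinding so the device can’t read the list, neither of which this reproduces.

```python
import hashlib

# Hypothetical stand-in for the known-hash database. In the real system
# the hashes are perceptual (NeuralHash) values compiled by child safety
# groups; here we use SHA-256 hex digests purely to illustrate matching.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def matches_known_database(image_bytes: bytes) -> bool:
    """Return True if this image's fingerprint appears in the hash list."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# Note: with a cryptographic hash, changing a single byte yields a
# completely different digest, so a re-encoded or resized copy of an
# image would not match - one reason real systems use perceptual
# hashing, which is designed to survive such transformations.
print(matches_known_database(b"known-image-bytes"))   # True
print(matches_known_database(b"different-bytes"))     # False
```

The sketch also shows why critics worry about the list itself: the device only ever sees opaque numbers, so users have no way to audit what those numbers actually correspond to.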

Slippery slope

No matter how well-intentioned, Apple’s announcement was seen by many as a troubling, “slippery slope” moment – a view best expressed in a tweet by privacy campaigner Edward Snowden – “Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs—without asking.”

The resulting uproar even had Apple’s usually suave senior vice president of software engineering Craig Federighi ruffled.

“It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood,” Federighi told the Wall Street Journal. 

“We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”

He went on to say that Apple has no plans to expand the database beyond illegal CSAM, after critics pointed out that countries with restrictive censorship policies could potentially use the technology.

At the moment New Zealand users won’t be affected. 

Apple will ship the hash database of known CSAM onto the operating system in all countries, but, for now at least, it will only be used for scanning in the US. 

But for such a self-aware company this has been an odd and entirely foreseeable public relations disaster, one that even had its own employees riled.

It seems inconceivable that a company with privacy so embedded in its key values would not anticipate the uproar – after all, any erosion of privacy in Apple products is going to be big news. 

Perhaps it presumed that, because Dropbox, Facebook and Google were already quietly scanning for CSAM in their cloud storage, this wouldn’t be seen as such a big deal.

If that was the case, then Apple underestimated the trust its customers placed in its products.

The furore ensured the lights at Cupertino burned through the night, with Apple releasing a six-page report in a belated attempt to clarify the situation.

It reassured users that the CSAM feature affects only those who have chosen to use iCloud Photos to store their photos, and that there is no impact on any other on-device data. Nor is Messages monitored – unless a parent opts into a separate child safety initiative, unconnected with the CSAM scanning, that gives parents and children sharing a family iCloud account tools to help protect under-13s from sending and receiving sexually explicit images.

That latter feature – despite some obvious flaws; not every kid’s Mum and Dad has their best interests at heart – could well have been applauded by many, but it got buried in Apple’s inept messaging.


Privacy tough to maintain

“What Apple is showing… is that there are technical weaknesses that they are willing to build in,” Center for Democracy & Technology project director Emma Llansó said in an interview.

“It seems so out of step from everything that they had previously been saying and doing.”

But, delve a little deeper into Apple’s recent history and you find that that’s not quite true. 

There have been signs over the past several years that the company is willing to peel back some of its vaunted privacy measures when there’s an upside for Apple – bowing to pressure from US and foreign governments as it discovers that its “privacy-first” stance is a tricky one to maintain.

Either you get attacked by privacy groups, or you get skewered by governments for protecting criminals and terrorists – and potentially lose access to lucrative territories.

In 2018, Apple dropped plans to let iPhone users fully encrypt backups of their devices in iCloud after the FBI complained that doing so would harm investigations (and it is that unencrypted state that allows the CSAM hash technology to work in iCloud).

Apple also caved to China – where most of Apple’s products are manufactured – when, in 2014, it agreed to store Chinese user data on servers in that country. 

At first, it was assumed the data was out of the reach of the Chinese Government but recent reporting by the New York Times suggests that Apple agreed to abandon encryption after protests from Chinese regulators.

Since 2017, Apple has also complied with the Communist Party’s demands to remove about 55,000 apps – including VPNs and news apps – that the Party deemed harmful to the government’s values from its App Store in China.

Apple seems to be saying: privacy now has levels, and we decide what they are and how they will be monitored.
