
6 August 2021

Has Apple changed its mind on privacy?

The tech giant is rolling out a new tool to identify child abuse imagery stored on iPhones, sparking criticism from privacy campaigners.

By Oscar Williams

In 2016, the FBI asked Apple for technical support to unlock a dead terrorist’s iPhone. Tim Cook, the company’s chief executive, refused, warning that such a “skeleton key” would have given the FBI the opportunity to unlock hundreds more devices. The decision was a political risk for Apple, but it established the company’s reputation as the US tech giant most committed to protecting its customers’ privacy.

In the years since, Apple has doubled down on its pro-privacy principles. The company has entered into a public war of words with Facebook over the ethics of its advertising-funded business model, while also making it harder for advertisers to track users across different apps. Privacy has become a key selling point of the iPhone brand, distinguishing it from smartphones that run Google’s Android software.

Given its record on privacy, many security experts were surprised when the company announced on 5 August that it will start scanning US users’ photo libraries for child abuse imagery. The new tool, dubbed neuralMatch, cross-references hashes of images that are set to be uploaded to iCloud against a library of hashes of known child abuse material. If a match is found, the account is disabled and US authorities are informed.
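In rough outline, this is hash matching, sketched below in Python. The sketch is illustrative, not Apple’s implementation: the function and database names are hypothetical, SHA-256 stands in for Apple’s perceptual NeuralHash (which is designed to tolerate resizing and recompression), and the real protocol reportedly wraps the comparison in cryptography so that non-matching photos reveal nothing.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as Apple's NeuralHash.
    # A perceptual hash maps visually similar images to the same value;
    # SHA-256 is used here only to keep the sketch self-contained.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of hashes of known child abuse imagery, of the
# kind maintained by clearinghouses such as NCMEC. The single entry is
# the SHA-256 hash of empty bytes, so the demo below produces a match.
KNOWN_ABUSE_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_before_upload(photos: list[bytes]) -> list[int]:
    """Return the indices of photos whose hash appears in the database.

    The check runs before upload to iCloud; photos that are never
    uploaded are never scanned. Apple's actual protocol reportedly
    layers cryptography (private set intersection, threshold secret
    sharing) so neither side learns anything about non-matching photos.
    """
    return [i for i, photo in enumerate(photos)
            if image_hash(photo) in KNOWN_ABUSE_HASHES]

if __name__ == "__main__":
    library = [b"holiday snap", b""]  # b"" hashes to the known value
    flagged = scan_before_upload(library)
    if flagged:
        # In the real system, matches would disable the account and
        # trigger a report to US authorities.
        print(f"Flagged photo indices: {flagged}")
```

In the toy run, the empty photo is flagged because the sample database happens to contain the hash of empty bytes; a real database would hold only hashes of verified abuse imagery.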


The announcement has sparked an uproar among privacy campaigners who had previously seen Apple as being on their side of the debate. “Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor”, commented the Electronic Frontier Foundation.

What this doesn’t do, however, is give Apple or law enforcement agencies access to a user’s photos, and it doesn’t apply to photos that aren’t uploaded to iCloud. Some suspect the new policy is designed to appease law enforcement agencies ahead of a move to encrypt iCloud libraries.

What concerns privacy campaigners most is that such a system could be repurposed by law enforcement agencies in countries such as China to identify individuals who had shared anti-government literature. But some experts are unconvinced that this risk is great enough to justify holding back a technology that could aid the fight against child abuse.



“Is it possible [that it would be abused]? Of course. But is it something that I’m concerned about? No,” Hany Farid, a researcher at the University of California, Berkeley, told the Associated Press, noting that other, similar technology had not been repurposed in this way.

For many, the potential future privacy consequences of the technology are heavily outweighed by the benefits of combatting child abuse. “Apple’s expanded protection for children is a game changer,” said John Clark, the CEO of the National Center for Missing and Exploited Children. “With so many people using Apple products, these new safety measures have life-saving potential for children.”

