iPhones as photo informants: Apple's software chief rejects backdoor allegations

Apple is trying hard to justify the introduction of its controversial child-protection features in iOS 15: scanning iPhones and iPads locally for known imagery showing the sexual abuse of children is more privacy-friendly and "more verifiable" than performing detection only in the cloud, Apple's software chief Craig Federighi affirmed in an interview with the Wall Street Journal. Apple sees the project as "very positive" and is "deeply convinced" of it.

The joint announcement of a nude-image detection feature for iMessage and the on-device detection of known abuse imagery (Child Sexual Abuse Material, CSAM) caused confusion and led to many "misunderstandings", Federighi told the Wall Street Journal. He again emphasized that these are two completely separate systems.

Prominent civil rights groups, security researchers and privacy advocates have criticized the project as a "backdoor": with the system, Apple has "opened the back door for more global surveillance and censorship," says the Electronic Frontier Foundation (EFF). Apple's software chief rejected this characterization of the child-protection features; in his view, it is "absolutely no backdoor". Moreover, only photos destined for iCloud are checked, not other local images.

The database of hashes used to match the illegal image material will become part of the operating system, which is identical worldwide, Federighi said. Because the detection takes place locally on the device, security researchers can examine it and would quickly flag any undocumented changes by Apple. There are therefore "several levels of verifiability", so users do not have to trust Apple blindly.
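
In simplified terms, the on-device check amounts to comparing a fingerprint of each photo against that bundled database. The following Python sketch illustrates the general idea under heavily simplifying assumptions: the function names and file format are hypothetical, and Apple's actual system relies on a perceptual "NeuralHash" plus cryptographic blinding (a private set intersection), none of which is reproduced here.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the hash database that ships with the OS image.
# Apple's real database holds blinded perceptual hashes (NeuralHash); plain
# SHA-256 is used here only to keep the sketch self-contained and runnable.
def load_bundled_hash_db(db_path: str = "known_csam_hashes.txt") -> set[str]:
    path = Path(db_path)
    if not path.exists():
        return set()
    return {line.strip() for line in path.read_text().splitlines() if line.strip()}

def image_fingerprint(image_bytes: bytes) -> str:
    """Placeholder fingerprint. A cryptographic hash is NOT robust to resizing
    or re-encoding the way a perceptual hash such as NeuralHash is meant to be."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes, db: set[str]) -> bool:
    """On-device check: does this photo's fingerprint appear in the bundled database?"""
    return image_fingerprint(image_bytes) in db
```

Because the database and the matching logic ship with every copy of the OS, outside researchers can in principle inspect exactly this step, which is the verifiability Federighi refers to.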

According to Federighi, Apple is only notified once the system has recognized "around 30 known child pornography images", and only then can employees review the photos. If it actually is abusive material, it is reported to the responsible authorities. There was "no pressure" to introduce such a feature, the executive said; the company wanted to do something about this material and has now finished the technology for it.
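
The threshold Federighi describes can be pictured as a simple counter on top of the matching sketch above: nothing is surfaced for human review until roughly 30 matches have accumulated for an account. The sketch below continues the same simplified model; the exact threshold and function names are assumptions, and in Apple's actual design threshold secret sharing keeps the individual matches unreadable to Apple until the threshold is crossed.

```python
MATCH_THRESHOLD = 30  # "around 30 known ... images" per Federighi; exact value assumed

def count_matches(photos_for_upload: list[bytes], db: set[str]) -> int:
    """Count matches among photos queued for iCloud upload (simplified model);
    uses matches_known_material() from the sketch above."""
    return sum(1 for photo in photos_for_upload if matches_known_material(photo, db))

def human_review_possible(match_count: int, threshold: int = MATCH_THRESHOLD) -> bool:
    """Only once the per-account match count reaches the threshold would Apple
    be notified and reviewers be able to look at the flagged photos at all."""
    return match_count >= threshold
```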

With regard to the local nude-image filter in iMessage, Federighi pointed to "grooming" as a problem that a notification to parents could help address. The detection of nude or pornographic imagery by machine learning works, but it can make mistakes and can be fooled, Federighi admitted.
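
To illustrate why such a filter can err: an on-device classifier ultimately compares a model score against a cut-off, and any cut-off trades missed detections against false alarms. The code below is a dummy placeholder, not Apple's actual iMessage classifier, and the threshold value is an assumption.

```python
NUDITY_THRESHOLD = 0.8  # assumed cut-off; raising it misses more, lowering it misfires more

def nudity_score(image_bytes: bytes) -> float:
    """Hypothetical placeholder for the on-device model's confidence in [0, 1].
    Apple has not published its model; a fixed dummy value stands in here."""
    return 0.5  # a real model would compute this from the image content

def should_warn(image_bytes: bytes) -> bool:
    # A harmless photo scoring just above the cut-off is a false positive; an
    # explicit one just below it is missed. That is the error margin Federighi concedes.
    return nudity_score(image_bytes) >= NUDITY_THRESHOLD
```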

The first security experts have now called on Apple to halt the introduction of the photo-scanning feature. Before any rollout, Apple must publicly explain the "goals, threat modeling and possible compromises" and establish clear principles and guidelines for the use of machine learning on end devices, says security specialist Alex Stamos, who works at Stanford University and previously led Facebook's security team. He would like to see all platforms coordinate the important fight against CSAM together.

According to information from the news agency Reuters, several civil rights organizations are also planning to ask Apple to stop the project. Internally, Apple employees are likewise discussing the planned features intensively and voicing concerns about possible misuse under pressure from governments, the news agency reports.


(lbe)
