iPhone detects illegal content: computer scientists warn of Apple’s scanning plans


The warnings about Apple’s planned system for detecting abuse imagery do not let up: such a technology is “dangerous”, warn two researchers from Princeton University, who themselves developed a similar system for locally detecting illegal content in encrypted messengers. They are not concerned about Apple’s technology because they fail to understand it, but because they know exactly how it works, the scientists write.

After the initial pushback, Apple had stated that the criticism of the planned features was partly a matter of “misunderstandings”.

The researchers’ aim was to give end-to-end encrypted communication services a way to still detect illegal and “harmful” content such as child sexual abuse material (CSAM) and extremist material, in a form that preserves privacy as far as possible, as their paper elaborates.

Like Apple, the two scientists shift detection onto the end device, likewise relying on perceptual hash matching: hashes of the user’s images are compared locally against the hashes in a database of known material. Various protective mechanisms are intended to ensure that the service is informed only in the event of a hit and otherwise gains no insight into the communication.
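The matching step described above can be sketched in a few lines. This is a deliberately minimal toy illustration, not the researchers’ system or Apple’s NeuralHash: the average-hash function, the 4×4 “images”, and the distance threshold are all made-up assumptions chosen to show the principle that near-duplicates hash close together while unrelated images do not.

```python
# Toy sketch of perceptual hash matching (hypothetical, not Apple's algorithm).

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_database(image_hash, database_hashes, threshold=2):
    """Report a hit only if the hash is close to a known database entry;
    otherwise the service learns nothing about the image."""
    return any(hamming_distance(image_hash, h) <= threshold
               for h in database_hashes)

# A known image, a slightly altered copy (e.g. re-compressed), and an
# unrelated image -- all invented 4x4 grayscale values for illustration:
known     = [[10, 200, 30, 220], [15, 190, 25, 210],
             [240, 20, 230, 10], [235, 25, 225, 15]]
altered   = [[12, 198, 33, 218], [14, 192, 27, 208],
             [238, 22, 228, 12], [233, 27, 223, 17]]
unrelated = [[50, 60, 55, 65], [200, 210, 205, 215],
             [52, 62, 57, 67], [202, 212, 207, 217]]

db = [average_hash(known)]
print(matches_database(average_hash(altered), db))    # True: near-duplicate hit
print(matches_database(average_hash(unrelated), db))  # False: no hit
```

The same robustness to small perturbations that makes such hashes useful is also what the researchers identify as the risk: whoever controls the database controls what gets flagged.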

After several attempts, the researchers built a working prototype of the system, but ran into a “glaring problem”, as they explain in a guest article for the Washington Post: such a system can be “easily repurposed for surveillance and censorship”.

Apple’s implementation is “more efficient and more capable”, but carries the same fundamental problems and risks. So far, the iPhone maker has offered only a few, inadequate answers to the possible consequences of the technology, the researchers argue. For many of the risks of such a system there is probably no technical solution; Apple is putting security, privacy, and freedom of expression at risk worldwide.

