Child porn scanner in the iPhone: hash collisions already generated

Hackers have put Apple's controversial iPhone scanning function for abuse images through its first tests. They are said to have succeeded in generating so-called hash collisions, that is, images that match entries in the CSAM database (Child Sexual Abuse Material) but are in fact completely harmless.
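In practical terms, a collision means that an innocuous photo produces exactly the same perceptual hash as an entry in the database, so a pure hash comparison cannot tell the two apart. The following minimal Python sketch illustrates that matching step; neural_hash() is a hypothetical stand-in for Apple's function, and the 96-bit value in the blocklist is invented for illustration.

    # Minimal sketch of what a hash collision means for CSAM matching.
    # neural_hash() is a hypothetical placeholder for Apple's NeuralHash;
    # the 96-bit hex value below is invented for illustration only.

    def neural_hash(image_path: str) -> str:
        """Would run the perceptual hash model on the image (see sketch below)."""
        raise NotImplementedError

    # Hashes of known abuse material, as they would sit in the CSAM database.
    blocked_hashes = {"0123456789abcdef01234567"}  # 24 hex chars = 96 bits

    def is_flagged(image_path: str) -> bool:
        # A harmless image that happens to produce the same 96-bit hash
        # (a collision) is indistinguishable from a real match at this point.
        return neural_hash(image_path) in blocked_hashes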

Developer Asuhariet Ygvar had previously discovered code for the system, which Apple calls NeuralHash, in iOS 14.3 and later versions. He first shared details about it on Reddit and released reimplemented routines on GitHub, along with a guide on how to export the NeuralHash model data from Apple's operating system.
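The reimplemented routines essentially run the extracted neural network on an image and derive a 96-bit hash from its output. The following rough Python sketch shows that pipeline, assuming the model has been exported to ONNX and a projection ("seed") matrix has been pulled from the operating system as the guide describes; the input size, normalization and dimensions used here come from the public reverse-engineering notes and should be read as assumptions, not as Apple's specification.

    # Rough sketch of a NeuralHash-style pipeline with ONNX Runtime.
    # Assumes "model.onnx" and a 96x128 projection ("seed") matrix "seed.npy"
    # have been extracted beforehand; preprocessing details are assumptions.
    import numpy as np
    import onnxruntime as ort
    from PIL import Image

    def neural_hash(image_path: str, model_path: str, seed_path: str) -> str:
        # Preprocess: 360x360 RGB, scaled to [-1, 1], NCHW layout.
        img = Image.open(image_path).convert("RGB").resize((360, 360))
        arr = np.asarray(img).astype(np.float32) / 255.0 * 2.0 - 1.0
        arr = arr.transpose(2, 0, 1)[np.newaxis, :]  # shape (1, 3, 360, 360)

        # Run the neural network: yields a 128-dimensional embedding.
        session = ort.InferenceSession(model_path)
        input_name = session.get_inputs()[0].name
        embedding = session.run(None, {input_name: arr})[0].reshape(128)

        # Project onto 96 hyperplanes and take the sign bits: a 96-bit hash.
        seed = np.load(seed_path).reshape(96, 128)
        bits = (seed @ embedding >= 0).astype(np.uint8)
        # Pack the 96 sign bits into a 24-character hex string.
        return bytes(np.packbits(bits)).hex()

Two visually different images that nevertheless produce the same hex string from such a pipeline would be exactly the kind of collision described above.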

Apple itself responded to the findings and stated that the NeuralHash function analyzed by Ygvar and others is not the version that will be used for CSAM detection. In a statement to the IT blog Motherboard, the company said that it is not the final version. That version will be part of the code of the signed operating system (Apple has announced the function for iOS 15 and iPadOS 15, which are due in the fall) and will then be verifiable by security researchers to check "whether it works as described". What was published on GitHub is "a generic version". Apple intends to make the algorithm public.

Matthew Green, security researcher and cryptologist at Johns Hopkins University and the first to publicize Apple's controversial local child porn scanner in the iPhone, told Motherboard that he assumes collisions will also exist for Apple's final version of NeuralHash "if they already exist for this function". The hash function could of course be replaced ("re-spin"). But as a proof of concept, the published code is "definitely valid".

The algorithm is not perfect in any case: according to Ygvar, the procedure contained in iOS 14.3 and later tolerates compression and resizing of images, but not cropping or rotation, as the sketch further down illustrates. It remains unclear how much hash collisions could disrupt Apple's approach. The company only wants to be alerted once around 30 abuse images have been detected on an iPhone or iPad before upload to iCloud. Only then are employees supposed to be able to decrypt the images in order to review them, and only after that are child protection organizations and authorities to be brought in.
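Ygvar's observation can in principle be checked with the reimplemented routines, for example by hashing an original, a downscaled copy and a rotated copy of the same picture. A minimal sketch, reusing the hypothetical neural_hash() from above with placeholder file names:

    # Per Ygvar, resizing/compression should keep the hash stable,
    # while rotation changes it. File names are placeholders.
    from PIL import Image

    original = Image.open("photo.jpg").convert("RGB")
    original.resize((original.width // 2, original.height // 2)).save("resized.jpg")
    original.rotate(90, expand=True).save("rotated.jpg")

    h_orig = neural_hash("photo.jpg", "model.onnx", "seed.npy")
    h_resized = neural_hash("resized.jpg", "model.onnx", "seed.npy")
    h_rotated = neural_hash("rotated.jpg", "model.onnx", "seed.npy")

    print("resize preserves hash: ", h_orig == h_resized)   # expected: True
    print("rotation preserves hash:", h_orig == h_rotated)  # expected: False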

With collisions, it is conceivable that this review team could be overloaded, but Apple could then implement its own filters for such "garbage images", said security researcher Nicholas Weaver. Moreover, attackers would need access to the CSAM databases used by Apple. Nevertheless, the reverse engineering shows that Apple's technology is by no means infallible, which in turn underpins the massive global criticism of its use.


(bsc)
