For years, companies and scientists have been compiling training data sets for biometric facial recognition systems, often collected "in the wild" without the consent of those affected. Adam Harvey and Jules LaPlace, researchers and artists based in Berlin, are now making this activity in ethical and legal gray areas at least somewhat more transparent. With "Exposing.AI" they have developed a search engine that users can use to check whether their Flickr images have been misused for such purposes.
The recently launched project allows searches using Flickr identifiers such as the user name, the NSID assigned by the photo platform, or a photo ID. Results are displayed only if an exact match is found with corresponding data in the monitored databases. The platform operators do not store any search data themselves and do not pass it on. Displayed photos are downloaded directly from Flickr.com, and no copies are kept.
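The exact-match lookup described above can be illustrated with a small sketch. All names and identifiers here are hypothetical, not Exposing.AI's actual code or data: each indexed data set is reduced to a set of Flickr identifiers, a query is answered only on an exact hit, and the query string itself is never stored.

```python
# Hypothetical sketch of an exact-match lookup over indexed data sets.
# Data set names and identifiers are illustrative only.

datasets = {
    "MegaFace": {"flickr_nsid_123", "photo_456"},
    "DiveFace": {"photo_456"},
}

def lookup(identifier: str) -> list[str]:
    """Return the data sets containing an exact match for the identifier.

    The query is used only for membership tests and is not logged or
    stored anywhere, mirroring the privacy claim in the article.
    """
    return [name for name, ids in datasets.items() if identifier in ids]

print(lookup("photo_456"))  # exact hit in both example data sets
print(lookup("photo_45"))   # partial strings yield no results
```

Because matching is exact, a near-miss identifier returns nothing, which is why users need their precise user name, NSID, or photo ID.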
Those interested can also search by hashtag for photos of themselves that third parties have taken and posted on Flickr. A tag for attended events or private celebrations such as "#mybirthdayparty" is one possibility. However, the makers point out that this type of search takes somewhat longer: "Each photo can contain dozens of tags, which leads to millions of additional data records for the search."
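The slower tag search could be sketched as an inverted index from tags to photo IDs (again with purely hypothetical names and data): with dozens of tags per photo, the index holds far more entries than there are photos, which is one plausible reading of why the makers say tag searches take longer.

```python
from collections import defaultdict

# Hypothetical sample photos with their tags, as posted on Flickr.
photos = {
    "photo_1": ["#mybirthdayparty", "#friends", "#2019"],
    "photo_2": ["#vacation", "#beach"],
    "photo_3": ["#mybirthdayparty", "#cake"],
}

# Build an inverted index tag -> photo IDs. Each photo contributes one
# entry per tag, so the index is several times larger than the photo list.
index = defaultdict(set)
for photo_id, tags in photos.items():
    for tag in tags:
        index[tag].add(photo_id)

def search_tag(tag: str) -> set[str]:
    """Exact-match lookup of a hashtag in the inverted index."""
    return index.get(tag, set())

print(sorted(search_tag("#mybirthdayparty")))
```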
High potential for abuse and damage
"People need to realize that some of their most intimate moments have been turned into weapons," Liz O'Sullivan, technology director at the civil rights organization Surveillance Technology Oversight Project (STOP), explained to the New York Times. The activist helped design Exposing.AI. According to her, it was originally planned to use automated facial recognition in the search engine as well. However, the team ultimately refrained from doing so: the potential for abuse and harm was too high.
At a recent conference, Harvey reported, drawing on findings from his predecessor project "Megapixels", on how the hunters for facial photographs proceed in order to reduce the technology's sometimes still high error rates. Microsoft, for example, simply used pictures of celebrities and lesser-known people from the web for its Celeb database, while Duke University photographed students with telephoto lenses from an institute window for the multi-target tracking data set DukeMTMC. For "Brainwash", the creators even diverted image data from a live video stream of a café in San Francisco.
Most of these databases have now been officially shut down, the artist notes: "But you can't really get them off the net." The content still circulates as "academic torrents" in peer-to-peer networks "around the world". Parts of it have demonstrably been taken over by the Chinese army and are now also being used to suppress the Muslim minority in the autonomous region of Xinjiang. Companies involved, such as Megvii, as well as universities should be held liable, the activist demanded.
In addition to the data sets already mentioned, the project, which is supported by the "Artificial Intelligence and Media Philosophy" group at the Karlsruhe University of Design and the Weizenbaum Institute, enables searches in MegaFace with over 3.5 million photos, DiveFace with over 115,000 photos from Flickr, VGG Face, Pipa, IJB-C, FaceScrub, TownCentre, UCCS and Wildtrack. Although Exposing.AI thus searches millions of records, the creators note that there are "countless other training data sets for facial recognition that are continuously being compiled from social media, news and entertainment sites." Future versions of the project may be expanded accordingly.
According to the website, subsequent deletion of images from copies of data sets already in circulation is not possible. For training databases that are still being maintained, the team is working on a function that lets users take their search results to the operators and request the removal of their own photos. Photos that have been removed from Flickr no longer appear on Exposing.AI.