To err is not only human but also typical of machines: Facebook users who recently watched a video posted in June 2020 by a British tabloid about racism against Black men received an unusual automatic prompt. They were asked whether they "would like to continue to watch videos about primates".
Facebook switches off the recommendation function
The clip from the British Daily Mail, titled "White man calls police about black men at the port", shows altercations in Connecticut between Black men celebrating a birthday, a white civilian, and police officers who had been called to the scene. A Black man is led out of a house in handcuffs and temporarily detained – apparently largely without cause. The video contains no reference to great apes or other animals.
According to the New York Times, Facebook has launched an investigation into the case and deactivated the recommendation function, which is triggered by biometric facial recognition. On Friday, the US company apologized for the "unacceptable mistake". The algorithm behind the artificial intelligence (AI) system in question is being analyzed in order to "prevent this from happening again".
Former employees: Facebook is doing too little
According to the report, Darci Groves, a former content design manager at Facebook, drew attention to the incident. A friend had recently sent her a screenshot of the prompt, she said. She then posted it in a product feedback forum for current and former employees of the social network. A product manager for Facebook Watch, the company's video service, called the incident "unacceptable" and said the company was looking into the cause.
Groves described the recommendation as "appalling and egregious", saying Facebook is doing too little to get a grip on racism-related problems caused by its technology.
“AI will still have to make progress”
The platform operator uses various types of AI, including automated facial recognition and machine learning, to personalize the content shown to users and keep them engaged for as long as possible. Civil rights and privacy advocates have long criticized the use of such identification technologies. A Facebook spokeswoman has now apologized on behalf of the company "to anyone who might have seen these offensive recommendations". The AI in use is being continuously improved, she said, but the company is aware "that it is not perfect and that we still have to make further progress".
Google, Amazon and other technology companies have faced criticism for years over discrimination by their AI systems. Studies have shown that facial recognition technology is biased against people of color; it frequently has difficulty identifying them at all. Innocent Black people have also been arrested because of faulty algorithms.
In 2015, for example, Google's Photos app incorrectly classified images of dark-skinned human faces as "gorillas". The company "sincerely" apologized and said it would resolve the matter immediately. More than two years later, the portal Wired found that Google's fix had been to censor the word "gorilla" in search and to block terms such as "chimpanzee" and "monkey" as well.