
EU data protection authorities call for a clear ban on biometric facial recognition

The European Data Protection Board (EDPB) and the European Data Protection Supervisor Wojciech Wiewiórowski rate the risks of identifying people biometrically in public spaces from a distance, for example through video surveillance with automated facial recognition, as "extremely high". In a joint statement, they therefore urge a general ban on any use of artificial intelligence (AI) for the automated recognition of human characteristics in publicly accessible spaces.

Such a ban must cover, for example, the automatic recognition of "faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals", the two supervisory authorities emphasize. They also call for a ban on AI systems that use biometric data to sort people into groups based on their ethnicity, gender, political or sexual orientation, or on similar grounds. Such discrimination violates the European Charter of Fundamental Rights.

Furthermore, the supervisory authorities consider the use of AI to infer the emotions of a natural person "highly undesirable" and call for a ban here as well. Exceptions should apply only where such detection serves medical purposes. In addition, any kind of social scoring should be prohibited. Such statistical procedures, which originated in credit scoring, are meant to evaluate citizens' social behavior by means of point systems.

EDPB Chair Andrea Jelinek and Wiewiórowski emphasize that without such bans, "the end of anonymity" in public spaces looms. Applications such as real-time facial recognition "encroach upon fundamental rights and freedoms to such an extent" that they could call into question the very essence of those rights. The precautionary principle must therefore apply in order to create a "human-centered legal framework for AI". Germany's Federal Data Protection Commissioner Ulrich Kelber put it plainly: "We do not want AI in the gray area of fundamental rights."

With its planned AI rules, the EU Commission wants to prohibit only real-time remote biometric identification, and even that only in principle. Subsequent use of the technology, for example in a criminal investigation, would not be affected. The Commission has also provided for several exceptions to the ban on live recognition: the police would be allowed to use such procedures, for instance, to search for missing children or terror suspects, and more generally in the fight against serious crime. Many civil society organizations, too, are calling for a blanket ban on biometric recognition technologies in public.

The data protection bodies generally welcome the EU Commission's risk-based approach to reining in AI systems. However, they favor bringing the concept more closely in line with the General Data Protection Regulation (GDPR) and assessing and mitigating societal risks for groups of people. At the same time, they question the intended "predominant role" of the planned new European AI board, which must be independent, and insist that a harmonized regulatory procedure be ensured. The existing data protection authorities already enforce the GDPR in the area of AI and should therefore also be formally designated as supervisory authorities under the proposal.

In a letter to the EU institutions, the EDPB also warns that a "very high data protection standard" is crucial in the plans for a digital euro in order to foster the trust of end users. Such concerns should be built into the design process from the start, and a data protection impact assessment should be carried out at an early stage. The board also adopted the final version of its recommendations for additional measures to secure data flows to third countries in light of the "Schrems II" ruling of the European Court of Justice. The papers are to be published in the coming days.


(olb)