AI rules: MEPs and researchers call for ban on mass surveillance

MEPs and researchers see a need to correct the EU Commission's leaked draft regulation on a “European approach to Artificial Intelligence” (AI). In a cross-party open letter, 40 MEPs appeal to the Commission to “propose a clear ban on biometric mass surveillance in public spaces”. This, they write, is what the majority of citizens want.

The stumbling block: the Commission wants to prohibit the use of AI for indiscriminate surveillance only for companies; security authorities would be allowed to continue using such AI technologies. Biometric remote identification of people, for example through automated face recognition, is classified as high-risk, but would in principle remain possible after going through a special approval process.

These exceptions must be deleted, the parliamentarians demand. Among them are Patrick Breyer (Pirate Party), Nicola Beer (FDP), Cornelia Ernst (Die Linke), Evelyne Gebhardt (SPD), the Green MEP Alexandra Geese, the liberals Moritz Körner and Svenja Hahn, and the social democrat Tiemo Wölken. Mass surveillance is routinely justified on grounds of public safety, so a ban would be particularly relevant in precisely this area. Courts have repeatedly struck down this approach.

The automated detection of sensitive characteristics of people such as gender, sexuality, ethnicity and state of health “is not acceptable and must be excluded”, the MEPs write. Such practices “harbor the risk of entrenching many forms of discrimination” and have served as the basis for “the extensive and indiscriminate surveillance and persecution of population groups on the basis of their biometric characteristics”.

The co-chair of the data ethics commission, Christiane Wendehorst, welcomed the list of generally prohibited AI practices, for example for manipulation or mass surveillance. “With this, ‘red lines’ are finally being formulated, which AI applications must not cross.” The Viennese civil law specialist points to a “strange discrepancy”: on the one hand, the Commission emphasizes that such techniques contradict “European values and fundamental rights”; on the other, the draft would then permit them by law for reasons of public security.

In this context, it should be noted that “the use of AI by government agencies is particularly sensitive and requires careful consideration,” says the Frankfurt data protection lawyer Anne Riechert. For the Tübingen media ethicist Jessica Heesen, the draft makes it clear that problems relating to surveillance and security “cannot be overcome by regulating AI alone”.

“Manipulation”, for example, is rightly defined so broadly that “this could ultimately include the entire area of personalized advertising and the adaptive design of social media,” Heesen offers as an example. AI is used in both areas to influence user behavior. However, this is a “central feature of the platform operators’ business model”. It is therefore questionable how the regulation could be enforced here.

According to Kristian Kersting, who researches machine learning at TU Darmstadt, the guidelines could even be interpreted to mean that social media should be banned: “That sounds cynical, but many people are of the opinion that social networks can negatively influence people’s opinions.” Corresponding algorithms and social scoring would be classified as unacceptable from the outset.

According to Kersting, the minimum standards that the Commission stipulates for AI in personnel selection or police work are unlikely to be manageable: “A complete, explicit description of all the effects of actions on all facts applicable in a world is difficult, if not impossible.” The goal is good, but the details remain obscure. The rules shouldn’t go too far, because “a head start in AI means prosperity through innovation and a powerful tool in the fight against climate change and disease”.

“New technologies, solutions and markets need a reliable regulatory framework, but without overregulation,” emphasized Antonio Krüger, Managing Director of the German Research Center for Artificial Intelligence (DFKI). AI is not a new science, but its applications and market penetration are still in their adolescence. To him it is clear that the technology should not be used in Europe for arbitrary surveillance. The DFKI has been criticized for its broad AI cooperation with China.

The Hamburg media researcher Stephan Dreyer said it was right to want to steer artificial intelligence “in ways that protect fundamental rights”. Under the draft's definition, however, almost any software currently in use, including modules and program libraries, could count as AI. The Commission also defines high-risk AI according to “highly indeterminate criteria, the existence of which can only be determined through a risk assessment”.

