EU AI rules: “worrying loophole” in discrimination and surveillance


The EU Commission’s draft regulation laying down harmonized rules for artificial intelligence (AI) has met with mixed feedback. The Brussels institution rightly acknowledges “that some applications of AI are simply unacceptable and must be banned,” explained Sarah Chander from the European Digital Rights (EDRi) initiative. However, the draft does not prohibit the full extent of unacceptable uses of the technology, nor all forms of biometric mass surveillance.

“This leaves a worrying loophole for discriminatory and surveillance technologies used by governments and businesses,” says Chander. The draft leaves “too much leeway for self-regulation by companies that benefit from AI.” The majority of the requirements “naively rely on AI developers to implement technical solutions to complex social problems, which will likely be assessed by the companies themselves.” The Commission is thereby fostering a “profitable market for unfair AI that is used for surveillance and discrimination.”

The planned bans do not go far enough, believes the civil rights organization Access Now. The Commission is doing nothing “to stop the development or use of a multitude of applications of AI that drastically undermine social progress and fundamental rights.” The proposal falls short of the EU’s goal of “enabling AI that people can trust,” says the consumer association Beuc: the rules do not adequately protect consumers from possible harm caused by products and services with AI.

The Commission is walking a difficult tightrope, says Kristian Kersting, head of the Machine Learning lab at TU Darmstadt, assessing the project. It is trying to ensure that AI serves the goal of increasing human well-being, while at the same time not preventing the EU countries from “competing with the US and China for technological innovations.” The definition of high-risk and low-risk classes of AI applications, however, remains nebulous at best; the complex AI ecosystem that already exists cannot be captured in this way.

The Hamburg media lawyer Stephan Dreyer points out contradictions. State social scoring procedures for classifying the population would be banned, but they would remain permissible in the private sector. This is more liberal than the previously leaked draft, under which comparably extensive practices for screening citizens by credit agencies such as Schufa would also have been prohibited. For the media ethicist Jessica Heesen from Tübingen, it remains unclear how Schufa credit ratings are to be dealt with, regardless of whether algorithms stand behind them.

The eco Association of the Internet Industry welcomed the fact that the Commission refrains from “a general overregulation of artificial intelligence and instead focuses on the regulation of AI in high-risk applications.” Particularly when using biometric techniques such as facial recognition, high standards would have to apply “which adequately take into account the protection of personal rights.”

The inclusion of AI software in the framework for product liability could lead to an excessive burden for many providers, warns the industry association DigitalEurope. The field is dominated by smaller companies “that have little to no experience with market access regulations that were designed years ago for physical products.” In addition, speed is essential in this area in order to adopt the latest technological developments or to correct errors. The IT lobby doubts that future startup founders will decide in favor of Europe in “high-risk areas.”

“AI applications are already being used in the energy and water industries today to improve efficiency, serve customers better, reduce CO2 emissions, and make work processes more efficient,” emphasized the industry association BDEW. It views it as critical that these applications could now be classified as “high-risk AI systems.” This would mean increased bureaucratic effort and legal uncertainty. It would be better to instead set fundamental, technology-neutral guidelines.