Conference on IT ethics: Artificial superintelligence and the handling of coronavirus data

Lately, criticism of digitization has mostly concerned speed: everything is going too slowly, so the complaint goes. Yet it is often advisable to take more time. An online conference organized from Hamburg now deals intensively with the questions of what we do with computers, what we shouldn't do with them, and what we should perhaps do completely differently.

The joint conference of the International Society for Ethics and Information Technology (INSEIT) and the International Association for Computing and Philosophy (IACAP) addressed a multitude of fundamental questions about the philosophy and ethics of artificial intelligence in an extensive program organized into several parallel tracks.

On the first day, Matthew Davis (Uppsala University) dealt with the question of whether machines can develop consciousness. Drawing on the biologist Mario Vaneechoutte, he distinguishes between experience, perception and awareness as hierarchically layered processes that evolved in nature and could develop in the same way in machines – driven by the economic interest in robots that are as versatile as possible. It can be assumed, says Davis, that machines will develop their own ethics and morals and will be capable of feelings similar to human fear.

Michael Cannon (TU Eindhoven) discussed the existential risk that could accompany the development of a superintelligence. The debate essentially revolves around the question of whether a superintelligence is able to reflect on the legitimacy of its goals – which in turn depends on the underlying concept of intelligence. A general artificial intelligence would include this capacity for self-reflection, says Cannon, while a more instrumental understanding of intelligence, focused on limited applications, would not. So far, these two concepts have been incompatible, according to Cannon.

Rafael Capurro pointed out that there has been a centuries-long discussion about the status of superintelligences and the human way of dealing with them – in the form of gods. Cannon conceded that the question of superintelligence was basically theology, but said that insights from this field had yet to be integrated.

Sabina Leonelli's (University of Exeter) reflections on the handling of data during the Covid-19 pandemic concerned the here and now. The digital transformation has been massively accelerated by the spread of the coronavirus, but at the same time awareness has grown of problems such as the marginalization of population groups, often along existing divides. In general, an overemphasis on technical measures such as tracking apps or vaccinations can be observed. Surveillance capitalism has been strengthened, while support for social initiatives such as the global, equitable distribution of vaccines (COVAX) has remained lip service.

In general, the pandemic offers many opportunities to examine social changes, identify societal sore points and develop solutions. However, Leonelli warns that several things must be kept in mind when using the data. There is a risk that the surveillance measures introduced to track infections will outlive the pandemic. It is therefore by no means too early to start thinking about the digital structures of the world after Covid-19.

Leonelli also recalls that data is by no means neutral. Data does not speak for itself; it is highly diverse and difficult to compare. Predictive models, as well as attempts at causal explanation, must therefore be critically scrutinized again and again. What is most neglected is the identification of social and ecological deficiencies. The connection between data and physical and social reality should not be lost sight of.

All of this speaks against advancing data science too quickly, said Leonelli. Methodological fairness at every stage of research requires that not only the usual experts be invited to consultations, but that ordinary citizens are involved as well. At the moment, a "closed" science still dominates, and its publications largely refer to themselves. Science must open up, expand its focus from the technical to the social – and also address automation as a dystopia.
