
Robots in everyday life: the loved one


Long-term, cross-generational planning is clearly not one of the strengths of human civilization. If even a threat as massive as current climate change gets us to look at the coming decades only with great reluctance – not to mention the social upheavals and behavioral changes that would be needed to mitigate the catastrophic trend – it can hardly come as a surprise that sustainable concepts in technical developments such as robotics and artificial intelligence (AI) find even less resonance.

Thus the association Zukunft25, founded in 2003 by the physicist Claudius Gros (University of Frankfurt) with the next 10,000 years in mind, survived for just over a thousandth of that span. And Way of the Future, the AI church registered by former Google employee Anthony Levandowski in 2015, reached a lifespan of only five years: in February, Techcrunch reported that Levandowski had dissolved the church at the end of 2020.

Way of the Future had campaigned for an ethically motivated development of AI and wanted to promote the peaceful and charitable integration of non-biological forms of life into society. Levandowski told Techcrunch that he is still convinced that AI will fundamentally change the way people live and work, and that he will continue to work to steer this development in a positive direction. However, last year’s Black Lives Matter protests, sparked by the death of George Floyd, led him to transfer the church’s funds of 175,172 US dollars to the NAACP Legal Defense and Education Fund, which campaigns against racist discrimination. It was time, he said, to put the money where it could be of immediate use.




(Image: Tatiana Shepeleva/Shutterstock.com)

The headwind that blew against his church project from the start may also have played a role in the decision. Patrick Beuth, for example, writing in Die Zeit, dismissed the project as “nonsense” already in the headline. His rejection rested essentially on the so far limited successes of AI research, which are largely confined to special applications and overrated by an uninformed public. In doing so, he remained within a very short-term, business-oriented time horizon, while Levandowski was thinking longer term when he said: “Let’s stop pretending we can stop the development of intelligence if it brings massive economic benefits in the short term for those who develop it.”

Indeed, market dynamics are pushing robotics to steadily expand autonomous functions and increase intelligence. Otherwise it will not be possible to open up new fields of application such as service robotics, which providers have been hyping for years. On the other hand, no fundamental limit to this continuous growth in intelligence, one that robots could not exceed, has yet been identified.

The physicist Professor Dr. Claudius Gros (University of Frankfurt, left) talks about hyperemotional and transhuman intelligences in an interview with “heise online” author Hans-Arthur Marsiske (right).

Gisela Schmalz, Fellow at the Cologne Institute for Media and Communication Policy, argued somewhat more profoundly. She placed Levandowski alongside Elon Musk, who warned of the dangers of AI but has since turned the OpenAI foundation he co-founded into a profit-oriented company. It is therefore not entirely clear whether Musk is really more afraid of AI or of his competitors. Perhaps, she suspects, Musk and Levandowski wanted to stir up fears in order to “present their own companies as beacons in the fog of uncertainty, as the only trustworthy sources of potentially dangerous technologies”. Regardless of the motives that drive them, it is “good, however, that people from the tech sector are calling for an AI that is developed for people instead of being developed past people. But God and the devil? Anyone who is serious about people should neither paint the devil on the wall nor let God slumber in his four-poster bed.”