Thousands of artificial intelligence experts are calling on governments to take preemptive action before it’s too late. The list is extensive and includes some of the most influential names in the overlapping worlds of technology, science and academia. From a report: Among them are billionaire inventor and OpenAI co-founder Elon Musk, Skype co-founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, as well as the three founders of Google DeepMind — the company’s premier machine learning research group. In total, more than 160 organizations and 2,460 individuals from 90 countries pledged this week not to participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons “to create a future with strong international norms.”
“Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” the pledge says. “Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage,” the pledge adds.