U.N. Assembly Calls for Guidelines on AI-Driven Weaponry; Divergent Votes from Global Powers
In a landmark move, the United Nations General Assembly on Friday adopted its first-ever resolution on lethal autonomous weapons systems, or LAWS, aimed at establishing international guidelines for the use of AI in weapons capable of autonomously identifying and eliminating human targets.
The resolution, initiated by Austria, received overwhelming support from 152 nations, including major G7 countries like Japan and the United States. However, it faced opposition from four countries, notably Russia and India, both of which are advancing their military AI capabilities. Meanwhile, 11 countries, including China, Israel, and Iran, chose to abstain from the vote.
The urgency of this resolution stems from the rapid advancement of AI in warfare, raising concerns about the potential misuse of AI in conflict zones. These autonomous weapons, capable of processing vast amounts of data and making independent attack decisions, could inadvertently target civilians or spiral out of control.
Recent conflicts, such as in Ukraine, have highlighted the rapid progression of AI weaponry, bringing the issue to the forefront of international security discussions. The resolution emphasizes the applicability of the U.N. Charter and international humanitarian law to LAWS. It also expresses concern over the potential for an arms race, increased conflict, and the risk of such technology falling into the hands of terrorists.
The U.N. Secretary-General, António Guterres, has been tasked with compiling member states' perspectives on lethal autonomous weapons systems. A report on the matter is expected to be presented at the next U.N. session in September 2024.
While the United States and China, both leaders in AI technology, acknowledge the need for regulation, Russia remains opposed to such measures.
For 30+ years, I've been committed to protecting people, businesses, and the environment from the physical harm caused by cyber-kinetic threats, blending cybersecurity strategies with resilience and safety measures. Lately, my worries have grown due to the rapid, complex advancements in Artificial Intelligence (AI). Having observed AI's progression for two decades and written a book on its future, I see it as a unique and escalating threat, especially when it is applied to military systems, used for disinformation, or integrated into critical infrastructure like 5G networks or smart grids. More about me, and about Defence.AI.