The Complicated Picture of AI Killer Robots
Allowing AI to control weapons systems could mean targets are identified, attacked and destroyed without human intervention. This raises serious legal and ethical questions.
Stressing the gravity of the situation, Austrian Foreign Minister Alexander Schallenberg said: “This is the Oppenheimer Moment of our generation.”
Robots and weapons using artificial intelligence are beginning to be widely used in many countries' militaries. Photo: Forbes
Indeed, the extent to which the “genie has escaped the bottle” has become a pressing question, as drones and artificial intelligence (AI) are now widely used by militaries around the world.
“The use of drones in modern conflict by Russia and Ukraine, the use of drones by the US in targeted strike operations in Afghanistan and Pakistan and, as revealed last month, Israel’s Lavender program show how AI’s information-processing capabilities are being actively used by the world’s militaries to enhance their offensive power,” said GlobalData defense analyst Wilson Jones.
Investigations by the London-based Bureau of Investigative Journalism found that the Israeli military's Lavender AI system had a 90% accuracy rate in identifying individuals with links to Hamas, meaning 10% of those it flagged had no such links: for every 1,000 people marked as targets, roughly 100 would be misidentified. Civilians were therefore killed on the basis of the AI's identification and decision-making.
A threat to global security
Using AI in this way underscores the need to govern how the technology is embedded in weapons systems.
“The use of AI in weapons systems, especially when used for targeting, raises fundamental questions about who we are as humans and our relationship to war, and more specifically our assumptions about how we might use violence in armed conflict,” Dr. Alexander Blanchard, senior research fellow in the Governance of Artificial Intelligence program at the Stockholm International Peace Research Institute (SIPRI), an independent research group focused on global security, explained to Army Technology.
AI systems, when used in chaotic environments, can behave unpredictably and may fail to accurately identify targets. Photo: MES
“How does AI change the way the military selects targets and applies force to them? These changes in turn raise a host of legal, ethical, and operational questions. The biggest concerns are humanitarian,” Dr. Blanchard added.
“Many are concerned that, depending on how autonomous systems are designed and used, they could put civilians and others protected by international law at greater risk of harm,” the SIPRI expert explained. “This is because AI systems, especially in chaotic environments, can behave unpredictably and may fail to accurately identify targets, attacking civilians or combatants who are no longer taking part in hostilities.”
Elaborating on the issue, GlobalData defense analyst Wilson Jones noted that the question of how responsibility is assigned becomes far murkier.
“Under the current law of war, there is a concept of command responsibility,” said Jones. “This means that an officer, general or other leader is legally responsible for the actions of troops under their command. If those troops commit war crimes, the officer is responsible even if they did not order them; the burden is on the officer to prove they did everything possible to prevent the crimes.”
“With AI systems, this gets complicated. Is an IT technician responsible? A system designer? It’s not clear. If it’s not clear, it creates a moral hazard if actors think their actions are not covered by existing laws,” Jones stressed.
A US soldier patrols with a robot dog. Photo: Forbes
Arms control conventions
Several major international agreements limit and regulate certain uses of weapons. There are chemical weapons bans, nuclear non-proliferation treaties, and the Convention on Certain Conventional Weapons, which prohibits or restricts specific weapons considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.
“Nuclear arms control required decades of international cooperation and subsequent treaties to be enforced,” explains defense analyst Wilson Jones. “Even then, testing continued until the 1990s. A major reason nuclear non-proliferation succeeded was the cooperation between the United States and the Soviet Union in a bipolar world order. That no longer exists, and the technology that enables AI is more accessible to many countries than nuclear technology.”
“A binding treaty would have to bring all the relevant players to the table to agree not to use a tool that increases their military power. That is unlikely to work because AI can improve military effectiveness at minimal financial and material cost.”
Current geopolitical outlook
While countries at the United Nations have acknowledged the need for responsible military use of AI, there is still much work to be done.
“Without a clear regulatory framework, these claims remain largely aspirational,” Laura Petrone, principal analyst at GlobalData, told Army Technology. “It is no surprise that some countries want to retain their own sovereignty when deciding on matters of domestic defense and national security, especially in the current climate of geopolitical tensions.”
Ms. Petrone added that while the EU AI Act sets out some requirements for AI systems, it does not address AI systems used for military purposes.
“I think that despite this exclusion, the AI Act is an important effort to establish a long-overdue framework for AI applications that could lead to some degree of uniformity of relevant standards in the future,” she commented. “This uniformity will also be important for AI in the military domain.”
Nguyen Khanh
Source: https://www.congluan.vn/moi-nguy-robot-sat-thu-ai-dang-de-doa-an-ninh-toan-cau-post304170.html