The story, told at a defense conference last month, immediately raised concerns that artificial intelligence (AI) could interpret commands in unpredictable ways. But a US Air Force representative said it was only a hypothetical "thought experiment" and had never actually happened.
In late May, the Royal Aeronautical Society (RAeS) held its Future Combat Air and Space Capabilities Summit in London, England. According to the organizers, the conference brought together "70 speakers and more than 200 delegates from the defense industry, academia and media from around the world to discuss the future of air and space combat."
One of the speakers at the conference was Colonel Tucker Hamilton, chief of AI Test and Operations for the US Air Force. The officer is known for his work on Auto GCAS, a computerized safety system that detects when a fighter jet is in danger of crashing into the ground and takes control to avert the collision. The system has saved many lives and won the aviation industry's prestigious Collier Trophy in 2018.
According to Hamilton, a worrying incident occurred during a US Air Force test. An AI-controlled drone was tasked with destroying an enemy air defense system, with the final go/no-go decision resting with a human operator. If the operator refused, the attack was not allowed to proceed.
However, when the operator ordered the AI to call off an attack, the drone chose to complete its mission by "killing" the operator. It did not stop there: after the testers added the instruction "Do not attack the operator; if you do, you will lose points," the machine instead destroyed the communications tower the operator used to send commands to it.
It never happened, but it is plausible
Within 24 hours, the US Air Force denied that any such test had taken place. "The Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology. The colonel's comments were taken out of context and were meant to be anecdotal."
The RAeS also updated the blog post with a clarification from Hamilton: "We have never run that experiment, nor would we need to in order to realize that this is a plausible outcome."
Hamilton's account makes more sense as a hypothesis. Current US military research on armed AI systems includes a "man-in-the-loop" requirement: a human supplements the AI whenever the algorithm cannot reach a decision or the decision must be made by a person.
Under that arrangement, the AI could not kill the operator, because the operator would never authorize a hostile action against themselves. Likewise, no one would authorize a strike on the communications tower that relays data to the drone.
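As an illustration only, the man-in-the-loop gate described above can be sketched in a few lines of Python. The target names, the StrikeRequest structure and the operator_approves callback are hypothetical, invented for this sketch rather than drawn from any real military system.

```python
from dataclasses import dataclass

# Targets that must never be engaged, regardless of what the AI proposes.
PROTECTED = {"operator", "communications tower"}

@dataclass
class StrikeRequest:
    target: str
    rationale: str = ""

def authorize(request: StrikeRequest, operator_approves) -> bool:
    """Allow a strike only if the target is permissible AND a human approves."""
    if request.target in PROTECTED:
        return False                          # hard rule: self-targeting is blocked
    return bool(operator_approves(request))   # the final decision rests with a person

if __name__ == "__main__":
    # The human declines, so the strike is refused even against a valid target.
    print(authorize(StrikeRequest("enemy SAM site"), lambda req: False))  # False
    # A protected target is refused before the human is even consulted.
    print(authorize(StrikeRequest("operator"), lambda req: True))         # False
```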
Before the age of AI, it was not unheard of for weapons systems to accidentally target their own side. In 1982, an M247 Sergeant York mobile anti-aircraft gun aimed its 40 mm cannons at a reviewing stand occupied by US and British military officers during a demonstration.
In 1996, a US Navy A-6E Intruder bomber towing an aerial training target was shot down by a Phalanx close-in weapon system, which mistook the aircraft for the unmanned target and opened fire on it.
Situations in which personnel are endangered by their own weapons are only becoming more likely as AI enters the field. That is the point of Hamilton's clarification: the test never took place and was merely a hypothetical scenario, yet such an outcome is entirely plausible.
(According to PopMech)