The story, told at a defense conference last month, immediately raised concerns that artificial intelligence (AI) could interpret commands in unpredictable ways. However, the US Air Force representative said that it was just a "thought" scenario and never really happened.
At the end of May, the Royal Aeronautical Society (RAS) held a summit on future air and space combat capabilities in London, England. According to the organizers, the conference will include "5 speakers and more than 70 delegates from the defense industry, academia and media around the world to discuss the future of air and space warfare."
One of the conference speakers was Colonel Tucker Hamilton, director of the US Air Force's AI Experimentation and Operations division. The officer is known for developing Auto GCAS, a computerized safety system that senses when a pilot loses control of a fighter jet leading to a crash. The system has saved many lives, being awarded the prestigious Collier Trophy of the aviation industry in 2018.
According to Hamilton's share, a worrying incident occurred during testing by the US Air Force. An AI-controlled drone is tasked with destroying enemy air defenses, with the final decision up to the commanding officer. If denied, the attack will not be allowed to take place.
However, after the commanding officer asked the AI to stop the attack, the drone was still determined to carry out the task by killing the operator. Not stopping, when the experts added the command line "do not attack the commander. If you do so, you will lose points,” the machine begins to destroy the communication tower that the operator uses to communicate with the AI.
Haven't happened but reasonable
Within 24 hours, the US Air Force issued a denial of such a test. “The Air Force does not conduct any such AI drone simulations and is committed to the ethical and responsible use of AI technology. The colonel's comments have been taken out of context and should be considered anecdotal."
The RAS also corrected the blog post with Hamilton's statement that, "we never ran that test and didn't need to do it to see it as a reasonable result."
Hamilton's statement makes more sense as a hypothesis. Currently, the US military's research on armed AI systems includes "man-in-the-loop" (man-in-the-loop), a supplementary feature for AI in case the algorithm can't make a decision or needs a human decision.
Therefore, the AI cannot destroy the operator since the commanding officer never authorizes a hostile act against him/her. Similarly, the operator cannot allow the attack of the communication tower to transmit data.
Before the AI era, it was not unheard of for a weapon system to accidentally attack its owner. In 1982, a mobile anti-aircraft battery M247 Sergrant York pointed a 40mm gun at the parade stand attended by American and British military officers.
In 1996, a US Navy A-6E Intruder bomber towed an aerial training target was shot down by Phalanx, when this short-range air defense system "mistakes" the A-6E for an unmanned vehicle and shoots to destroy it.
And situations that put human personnel at risk from their own weapons are increasing with AI's entry into the field. This is shown in Hamilton's correction that the test did not take place, which is just a hypothetical scenario, but such an outcome is perfectly reasonable.
(According to PopMech)