The Israel Defense Forces (IDF) says it is relying on an artificial intelligence (AI) system to help determine where to bomb in Gaza. Israel has used the technology before, but since the conflict with Hamas began, the system has helped the military increase its target count by 70 percent.
Israel intercepted rockets launched from the Gaza Strip on December 1
The system is part of Israel's "Targeting Directorate." Launched in 2019, the Targeting Directorate is a unit of officers and soldiers that uses cyber capabilities, decryption, and research to find and identify additional Hamas operating sites in the Gaza Strip.
During the ongoing conflict, the directorate has continued to work with other intelligence units in the Israeli military to rapidly locate Hamas positions.
“This is a plan to enable brigade and division levels to have full access to intelligence and to immediately localize with the help of AI systems,” a senior IDF official said in a statement on November 2.
The senior IDF official also said the AI technology is used to inflict maximum damage on the enemy while minimizing harm to uninvolved civilians.
In a report released in November, the IDF said the AI system had been operating "around the clock" and had hit more than 12,000 targets. Nearly a month later, it is unclear exactly how much that number has grown, though it is likely significantly higher.
Israeli soldiers operating in the Gaza Strip
Sources also said the Israeli military knows in advance how many civilians could be killed in attacks on residential areas and that strikes are determined based on assessments of potential collateral damage.
It's unclear exactly what data is fed into the AI system. The IDF did not respond to Business Insider's questions about the matter.
Israel is not alone in bringing AI into warfare. Nearly 50 countries have endorsed a declaration, issued earlier this year, on the responsible use of AI for military purposes.
Experts in AI and international humanitarian law told The Guardian that even when humans are involved in AI-assisted decision-making, they can come to rely heavily on the system, which could put civilians at risk.