aa.com.tr

Israel’s AI use in Gaza normalizes civilian killings, obscures blame, exposes Big Tech…

- _The sheer scale and complexity of AI models make it ‘impossible to trace their decisions that can hold any individual or military accountable,’ warns Khlaaf, a former systems safety engineer at OpenAI_

- _‘Amazon, Google and Microsoft are explicitly working with the IDF to develop or allow them to use their technologies … despite being aware of the risks of AI’s low accuracy rates … and how the IDF intends to use their systems for targeting,’ says expert_

**ISTANBUL**

Israel’s use of artificial intelligence (AI) in its ongoing assault on the Gaza Strip – aided by tech giants such as Google, Microsoft, and Amazon – is fueling concerns over the normalization of mass civilian casualties and raising serious questions about the complicity of these firms in potential war crimes, according to a leading AI expert.

Multiple reports have confirmed that Israel has deployed AI models such as Lavender, Gospel, and Where’s Daddy? to conduct mass surveillance, identify targets, and direct strikes against tens of thousands of individuals in Gaza – often in their own homes – all with minimal human oversight.

Rights groups and experts say these systems have played a critical role in Israel’s incessant and apparently indiscriminate attacks, which have laid waste to massive swaths of the besieged enclave and killed more than 50,000 Palestinians, mostly women and children.

“With the explicit use of AI models that we know lack precision accuracy, we are only going to see the normalization of mass civilian casualties, as we have kind of seen with Gaza,” Heidy Khlaaf, a former systems safety engineer at OpenAI, told Anadolu.

Khlaaf, currently chief AI scientist at the AI Now Institute, warned that this trend could establish a dangerous precedent in warfare where military forces deflect responsibility for potential war crimes onto AI systems, while benefiting from the lack of a robust international mechanism to intervene or hold actors accountable.

“This is really a dangerous combination that can lead to military entities not being held accountable for potential war crimes, where they can simply point to an AI system and say, ‘Hey, it’s this algorithm that decided this. It wasn’t me,’” she said.

She stressed that Israel is using AI systems at “almost every stage” of its military operations – from intelligence collection and planning to final target selection.

The AI models, she explained, are trained on a variety of data sources, including satellite imagery, intercepted communications, drone surveillance, and the tracking of individuals or groups.

“They develop multiple AI algorithms that use a statistical or probabilistic calculation from this historical data that they’ve been trained on to predict where future targets may be,” she elaborated.

However, she emphasized that these predictions “do not necessarily reflect reality.”

Khlaaf pointed to recent revelations that commercial large language models (LLMs) like Google’s Gemini and OpenAI’s GPT-4 were used by the Israeli military to translate and transcribe intercepted Palestinian communications, automatically adding individuals to target lists “purely based on keywords.”

She noted that various investigations have confirmed that one of the Israeli military’s operational strategies involves generating large numbers of targets through AI without verifying their accuracy.

The expert underlined that AI models are fundamentally unreliable for tasks requiring high precision, such as targeting in military operations, because they rely on statistical probabilities rather than verified intelligence.

“Unfortunately, assessments have shown that AI models used for targeting can have an accuracy rate as low as 25%,” Khlaaf said.

“So, given this track record of AI’s high error rates, with a force like the IDF (Israel Defense Forces), who is willing to accept a large amount of civilian casualties to take one target out … then this sort of inaccurate automation of target selection is really not far from indiscriminate bombing at scale.”
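To put the cited figure in concrete terms, the arithmetic below sketches what a 25% accuracy rate implies once target generation is automated at scale. The 25% figure is from Khlaaf’s remarks above; the size of the hypothetical target list is an illustrative assumption, not a number reported in this article.

```python
# Illustrative arithmetic only.
# - accuracy: the "as low as 25%" figure cited by Khlaaf
# - generated_targets: a HYPOTHETICAL list size, chosen for illustration
accuracy = 0.25
generated_targets = 10_000

correctly_identified = int(generated_targets * accuracy)
misidentified = generated_targets - correctly_identified

print(f"Correct: {correctly_identified}, misidentified: {misidentified}")
# At 25% accuracy, roughly three of every four AI-generated targets
# would be wrong - the scaling effect the expert describes.
```

The point of the sketch is that low per-target accuracy does not average out at scale; it multiplies, which is why Khlaaf likens such automation to indiscriminate bombing.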

**Automation without accountability**

Khlaaf further emphasized that the increasing use of AI in war is setting a dangerous precedent, where accountability is obscured.

“AI is setting this precedent that normalizes inaccurate targeting practices, and because of the sheer scale and complexity of these models, it then becomes impossible to trace their decisions that can hold any individual or military accountable,” she asserted.

Even the so-called “human in the loop” safeguard, often promoted as a fail-safe against AI errors, appears insufficient in the case of the IDF, she added.

Investigations revealed that the humans overseeing Israel’s AI-generated targets operated under “very loose guidance,” casting doubt on whether efforts were even made to minimize civilian casualties, according to Khlaaf.

She warned that the current trajectory could enable militaries to shield themselves from war crime allegations by blaming AI for erroneous targeting.

“If it’s hard to trace … why an AI may have contributed to civilian casualties, then you can very well imagine a case where it’s used heavily exactly to avoid accountability for killing a large amount of civilians,” she said.

**‘Amazon, Google and Microsoft explicitly working with IDF’**

Khlaaf confirmed that major US-based tech firms are directly involved in supplying AI and cloud computing capabilities to the Israeli military.

“This is not a new trend,” she noted, recalling that Google has been providing AI and cloud services to the Israeli military since 2021 through its $1.2 billion Project Nimbus, alongside Amazon.

Microsoft’s involvement also deepened after October 2023, as Israel relied more on its cloud computing services, AI models, and technical support, she said.

Other companies, including Palantir, have also been linked to Israeli military operations, although details of their roles remain sparse, she added.

Crucially, Khlaaf argued that these partnerships went beyond the sale of general-purpose AI tools.

“It’s important to point out that the IDF isn’t just using off-the-shelf cloud or AI services and taking them and just putting them in military applications,” she explained.

“Amazon, Google and Microsoft are explicitly working with the IDF to develop or allow them to use their technologies for intelligence and targeting, despite being aware of the risks of AI’s low accuracy rates, their failure modes, and how the IDF intends to use their systems for targeting.”

The implications suggest that tech companies were “complicit and directly enabling” Israeli actions, including those that “would be categorized or ruled as unlawful or that amount to war crimes,” Khlaaf said.

“If it has been determined that the IDF is committing specific war crimes, and the tech companies have guided them in committing those war crimes, then yes, that makes them very much complicit,” she added.

**‘An enormous gap’**

Khlaaf warned that the world is witnessing “the full embrace of automated targeting without due process or accountability,” a phenomenon backed by increasing investments from Israel, the US Department of Defense, and the EU.

“Our legal and technical frameworks are not prepared for this type of AI-based warfare,” she said.

Although existing international law, such as Article 36 of the 1977 Additional Protocol I to the Geneva Conventions, mandates legal reviews for new weapons, there are currently no binding international regulations specific to AI-driven military technologies.

Additionally, while the US maintains export controls on specific AI-enabling technologies such as GPUs and certain datasets, there is no “wholesale ban on AI military technology specifically,” she noted.

“There’s an enormous gap there that hasn’t really been addressed as of yet,” Khlaaf said.
