As AI targeting systems become more common in warfare, the effectiveness of human oversight is increasingly called into question. Israel’s use of the Lavender AI system in Gaza shows how operators often act as a “rubber stamp,” spending only seconds verifying AI-flagged targets. Combined with the system’s reported 10% error rate, this raises urgent concerns about the need for stronger oversight frameworks.
As autonomous weapon systems become central to military strategies worldwide, human oversight is frequently held up as a safeguard against AI-driven errors. China, the United States, and European Union member states have all introduced policies underscoring this commitment. But real-world applications, such as the Israeli military’s use of an AI system to identify potential combatants in Gaza, reveal a troubling limitation: oversight may be diminishing in quality, reduced to a mere checkbox as AI accelerates the speed of warfare.
Another AI system, “Where’s Daddy?,” reportedly used alongside Lavender to track targeted individuals to their family homes, further exemplifies these concerns and raises additional ethical and regulatory challenges. Without effective governance, the notion of human control may quickly become an illusion. And as warfare between AI-equipped states looms on the horizon, the need for enforceable standards grows more urgent.
Countries worldwide claim to prioritize human oversight in AI-driven weapon systems. China, for example, emphasized responsible AI development at the Convention on Certain Conventional Weapons (CCW) meeting in October 2024, and at an earlier CCW gathering in 2023, Ambassador Shen Jian, China’s head of delegation, stated that AI should be used responsibly in military operations and suggested a tiered, categorized approach to ensure weapon systems remain under human control.
Similarly, the U.S. Department of Defense’s Directive 3000.09 mandates that all autonomous and semi-autonomous weapon systems allow human operators to exercise judgment over force decisions. The European Commission echoes this approach, with policies requiring that human control be maintained throughout the weapon system’s lifecycle. Together, these principles reflect a global commitment to human oversight. But real-world applications highlight the need for directives that mandate not just oversight, but substantial oversight.
A recent example that underscores the need for effective oversight is Israel’s use of the AI-driven targeting system Lavender in Gaza. Reports indicate that human personnel often acted as a mere “rubber stamp” for the machine’s decisions, typically spending only about 20 seconds per target before authorizing a strike, primarily to verify that the AI-marked individual was male.
Although technically “in control,” human operators exercised minimal critical assessment, despite knowing that Lavender had an estimated 10% error rate and sometimes flagged individuals with loose or no connection to militant groups. This illustrates the risks of superficial oversight: rushed or cursory human involvement can produce flawed or unethical decisions. Psychological research supports this concern, showing that time pressure leads individuals to prioritize speed over accuracy, increasing cognitive biases and errors. So while many nations present human involvement as a safeguard in AI-enabled warfare, the speed of battlefield decision-making exposes its weaknesses and underscores the need for standardized regulations.
On top of that, the modern warfare environment is increasingly saturated with misinformation and emotional manipulation. In conflicts like the Israel-Hamas and Russia-Ukraine wars, social media has amplified manipulated information. Such emotionally charged environments distort rational decision-making and compromise oversight quality. Deepfakes and narrative warfare are becoming tools to manipulate perceptions on the battlefield, influencing both soldiers and commanders. When time is limited and operators are bombarded with misinformation, decision-making becomes vulnerable to errors, biases, and impulses. As future conflicts grow more complex and information warfare continues to rise, policymakers will need to counteract the impact of misinformation on oversight.
These cognitive and emotional challenges become even more pressing at the scale of current AI-driven military systems. The global market for military AI was valued at $9.2 billion in 2023 and is expected to grow to $38.8 billion by 2028. The autonomous weapons market, valued at $11.5 billion in 2020, is projected to reach $30 billion by 2030. In the U.S., military expenditure on AI technologies rose sharply between 2022 and 2023, with the Department of Defense accounting for nearly 90% of AI-labeled contract value within NAICS 54; widely cited figures put the increase at around 1,200%, though the actual growth rate for military-specific AI contracts may be lower. Yet as nations funnel billions into developing faster, more precise AI-driven weapons, investment in oversight mechanisms lags far behind.
While rapid action can be strategically advantageous, the lack of enforceable governance to ensure meaningful oversight could lead to disastrous miscalculations, particularly in situations where milliseconds make the difference between escalation and peace.
Governance standards are especially crucial for drone swarms and other autonomous systems operating at speeds that human cognition cannot match. In a high-stakes scenario, even a 0.1% error rate can have devastating results when scaled across thousands of drones: in a swarm of 39,000, that rate would translate into roughly 39 weapons striking unintended targets. Research suggests that a swarm of 39,000 armed drones would be capable of causing Nagasaki-level harm if deployed in a single coordinated strike.
Despite the minimal margin for error in such operations, many systems operate without substantial human oversight, and current directives fail to address this reality. If even a minor miscalculation occurs, the speed at which these systems execute decisions would render human intervention ineffective, raising the likelihood of large-scale civilian harm and conflict escalation.
The integration of AI into nuclear delivery systems compounds these challenges. Autonomous systems embedded within nuclear assets introduce risks of misidentifying threats and triggering unintended escalations. When systems respond faster than human operators can react, misinterpretations or technical glitches could inadvertently launch countermeasures, amplifying conflicts. The Defense Advanced Research Projects Agency (DARPA) is beginning to incorporate ethical, legal, and societal standards into AI development, but there is a need for global governance frameworks that enforce these principles across all military AI applications. For AI systems deployed on the battlefield, ethics and oversight cannot be external considerations—they must be hardwired into every decision-making process to reduce the risks of unintended consequences.
Ensuring data quality in AI weapon systems is another critical aspect of effective governance. Synthetic data, increasingly used to train AI, presents risks of bias and inaccuracy, which can result in misidentification and operational errors. Establishing strict standards for data quality can minimize these risks and ensure that autonomous systems operate reliably under battlefield conditions. International forums like the United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) and initiatives such as the World Economic Forum’s Global Data Commons have begun addressing data quality issues in AI, yet their guidelines lack enforceability, clear definitions, and verification mechanisms.
For instance, the UN GGE has not established legally binding obligations, and its principles on ‘meaningful human control’ remain open to interpretation, allowing militaries to deploy AI in lethal systems without uniform safeguards. Data governance problems persist as well, as seen in Project Maven, where AI models trained on incomplete datasets raised questions about battlefield reliability. Without strict and enforceable data standards, military AI risks unintended and potentially catastrophic failures.
To keep pace with AI’s rapid advancement, the international community must act urgently to develop governance standards that go beyond policy statements. China’s 2023 CCW position, the U.S. DoD’s Directive 3000.09, and the EU’s AI policies all emphasize the importance of human oversight in military AI applications, but their approaches differ in implementation and enforcement. These directives collectively advance oversight efforts, yet the lack of enforceable global standards remains a key challenge.
But without enforceable standards, these efforts may fall short in the face of warfare that operates at a scale and speed never before seen. Meaningful governance requires that ethics, legal considerations, and human judgment be woven into the very architecture of these systems.
The attacks on Gaza demonstrate that simply involving humans in AI oversight does not guarantee ethical decision-making. The pace of AI decision-making in warfare calls for a robust governance framework that prioritizes substantial human control and data integrity. This means requiring AI-driven military systems to include a minimum review period in which a trained human must validate a decision before execution, preventing AI from making immediate life-or-death choices on its own. Current guidelines, such as those from the UN and the U.S. Department of Defense, emphasize human oversight but lack enforceable time constraints or training standards for personnel.
By implementing structured international frameworks that mandate human validation within predefined time windows and comprehensive AI ethics training for military operators, the global community can ensure human oversight remains more than a symbolic safeguard.
Nimra Javed is a Research Officer at the Center for International Strategic Studies AJK. She holds an MPhil in Strategic Studies from National Defence University, Islamabad, and her research areas are Emerging Technologies & Strategies and New Trends in Warfare. She can be reached at Nimrahjaved42@gmail.com, on X at NimrahJaved_, and on LinkedIn: https://www.linkedin.com/in/nimra-javed-a6a4a91b1/.