AI-Driven Warfare: Navigating the Legal Ambiguities, Accountability Gaps, and Escalation Concerns

by Maheen Butt

Modern Artificial Intelligence (AI) is transforming the operational landscape across several domains. However, its integration into military operations raises grave concerns, and this is just the beginning. The use of Autonomous Weapon Systems (AWS), a by-product of AI, on the battlefield may prove to be a test of the credibility of the existing ethical and legal framework of international law. It raises questions about the attribution of consequences, compliance with International Humanitarian Law (IHL), and the danger of unrestrained conflict escalation. Without a comprehensive dialogue within the international community, AI and the technologies it spawns in warfare may inflict greater harm on humanity.

Reports have documented several AI technologies being used for offensive and reconnaissance operations in the Russia-Ukraine and Gaza conflicts. Ukraine has used AI-driven drone swarms for small-scale precision strikes and reconnaissance. In 2021, Israel claimed to have fought the first "AI war" when it used an AI system called Gospel to identify buildings sheltering Hamas members. Then, in April 2024, reports revealed that the IDF had used another AI system, Lavender, to identify individual Hamas members as targets. The ongoing conflicts, which have become "AI war labs" through the use of Lethal Autonomous Weapon Systems (LAWS), offer enough evidence to push for amendments to international law and a broader global debate.

There are three key areas in which LAWS pose serious humanitarian and ethical dangers. First, the enforcement and credibility of International Humanitarian Law (IHL) face a threat as the role of AI in operating these AWS grows. Is AI capable of reliably upholding the principle of distinction, which differentiates between combatants and non-combatants? Can it adhere to the principle of proportionality, which demands that the harm caused by military action not be excessive relative to the anticipated military advantage? At present, AI is not advanced enough to comply with these core principles. Yet such systems are being used in two active conflict zones, resulting in widespread and extensive damage to civilian lives and infrastructure.

Second, who stands accountable if AI makes the complex operational decisions of the battlefield? The reduction in the human role in warfare brought about by AI-operated LAWS raises serious ethical concerns. If unintended or disproportionate harm occurs on the battlefield, whom can the affected parties hold accountable? These AI-run AWS, also known as "killer robots", reduce government involvement in key combat decisions. In such cases, can the ICJ and international law retain their legitimacy and relevance?

As mentioned above, the use of the AI-based program Lavender in the first few weeks of the war led to the deaths of thousands of Palestinians who were not part of the conflict. In those weeks, military officers were permitted to rely entirely on Lavender's output, without second checks. Officers reportedly spent as little as "20 seconds" per target, and only to confirm that the target was male. All of this was allowed despite the knowledge that the system erred in roughly 10 percent of its decisions, marking targets with a loose or even no connection to Hamas. This decline in human oversight and the wholesale reliance on Lavender's data contributed to an escalation of the conflict, which, as of this writing, has not come to an end.

This brings us to the third point: the high risk of conflict escalation. The intensification of hostilities increases significantly if AI determines which areas to target and how much military force to apply. Humans can navigate the complex layers of target selection that AI systems, at least in their current form, cannot because of technological limitations. AI may reduce response time, but it also heightens the threat of indiscriminate damage. Gospel, another AI-driven system, produced around 100 targets a day in the 11-day war against Hamas in 2021, whereas previously only about 50 targets were produced in "a year", illustrating the scale of the increase in AI-based target selection.

One of the many reasons for this may be the use of synthetic data to train AI models. Synthetic data has proven to be an efficient way to train models, especially when high-quality, relevant real-world data is scarce. However, it can also be manipulated so that a model generates the results its operators want. Synthetic data should therefore not be treated as a silver bullet, as it comes with its own set of challenges and threats.

The AI companies, unfortunately, have not played a constructive role in this regard.

Major AI companies long maintained self-imposed restrictions on the use of their AI in weapons, missile systems and other offensive technologies. This is no longer the case. Alphabet, Google's parent company, revised its AI usage guidelines to lift such a restriction; the guidelines previously prohibited the use of its AI to develop technologies that inflict harm on humans in any way or proportion. Despite harsh protests from several international human rights groups, Alphabet has not reversed its decision. Instead, it has called on governments to enhance cooperation in using its technology for defense purposes. In the absence of a clear accountability framework, governments, international courts and AI companies may face serious legal complexities in determining responsibility when things go wrong on the battlefield.

While chatbots like ChatGPT may struggle even with basic questions, the implications of AI running military offensives are far graver. Certain AI systems may help save lives by enabling precise strikes, but there is another side to this. Such precision may result in more frequent wars because the damage appears controllable. It may also give states a false sense of security, potentially escalating conflicts and pushing the world into the realm of never-ending wars.

In this rapidly evolving landscape of AI-driven warfare, what measures can a developing country like Pakistan take? For starters, in 2013 Pakistan became the first state to openly call for a complete ban on Lethal Autonomous Weapon Systems (LAWS) at a meeting of the UN Human Rights Council. The country argued that AI cannot be fully trusted, irrespective of its degree of sophistication, and that these weapons lower the threshold for initiating war, thus increasing its likelihood.

Pakistan’s principled stance revolves around maintaining and increasing human involvement in operational decisions during military offensives. Delegating such critical decisions to AI can cause disproportionate damage, as seen in the use of Lavender and Gospel, and this reliance has also degraded the quality of human oversight of wartime decision-making. It is high time for an international dialogue on the integration of AI into warfare. The global community needs to define operational boundaries that ensure compliance with international law, particularly IHL. Moreover, a mechanism must be crafted to mitigate the risks and unintended consequences of unchecked AI use on the battlefield.

Author’s Bio

Maheen Butt is an Executive Content Writer with over 5 years of experience in the industry. With a particular interest in geopolitics and security studies, she holds an MPhil in International Relations from the Kinnaird College for Women, Lahore.
