Human Oversight in AI-Driven Intelligence: Rhetoric, Reality, and the Risks of Automation

by Zohaib Altaf and Nimra Javed

Abstract

Artificial Intelligence (AI) is reshaping intelligence and counterintelligence, accelerating analysis while creating new dilemmas of oversight and accountability. This paper examines the paradox of “human oversight” in AI-enabled intelligence systems. Policy documents and official statements continue to emphasize the centrality of the human role, but our analysis shows that in practice oversight is often more symbolic than substantive. Using thematic coding of international policy documents—such as NATO’s 2021 AI Strategy, the U.S. Department of Defense’s 2023 Responsible AI Directive, and the European Union’s AI Act—alongside case studies such as Israel’s AI-driven targeting in recent conflicts, the study identifies recurring themes of compressed decision cycles, automation bias, and vague definitions of human control. The research is guided by two central questions: Is human oversight of AI in intelligence operations meaningful, or does it function mainly as a rhetorical device? And how do AI policy frameworks conceptualize human oversight, and how well does that conceptualization align with operational realities? Findings suggest that while human presence is maintained procedurally, the speed of decision-making and reliance on machine outputs leave little room for genuine human judgment. This weakens accountability, raises the risk of miscalculation, and erodes analytic expertise. To close the gap, the study argues for clearer operational definitions of “meaningful human control,” stronger institutional safeguards that preserve space for human deliberation, international cooperation on standards, training that sensitizes officers to automation bias, and formal inclusion of AI in intelligence oversight debates at multilateral forums such as the UN Group of Governmental Experts. Only by restoring the substance of human oversight, the paper concludes, can AI function as an enabler of security rather than a source of fragility in intelligence practice.