OPINION – A global debate is underway over how much human involvement should be required when artificial intelligence is implemented in military operations. This is typically described on a spectrum: "human-in-the-loop," where a system can select targets and apply force only with human authorization; "human-on-the-loop," where a system selects and applies force without authorization but under human supervision with the ability to override; and "human-out-of-the-loop," where a system selects and applies force without human authorization, supervision, or intervention.
How much human control is necessary remains contested, but the debate is overwhelmingly normative rather than empirical. Ukraine, where these systems are being deployed at scale under active fire, offers a case study for testing those assumptions against battlefield reality.
What emerges is not a clean line between human control and machine autonomy but a continuum shaped by biology, budget, cognition, accountability, and ethics. The harder question — and the one this paper examines — is what happens when human-in-the-loop safeguards are preserved in name but the conditions that make them meaningful have already eroded because the volume of targets exceeds what any operator can review and the tempo of engagement outpaces human reaction time.
Biology
Proponents of autonomous weapons systems regard their use as a moral imperative: if technology can remove warfighters from danger, they argue, governments have an obligation to use it. Ukraine's leadership has arrived at the same conclusion under considerably more urgent circumstances.
Since 2024, Russia's elite drone unit, Rubicon, has wreaked havoc on Ukrainian forces well behind the frontlines. One brigade reported losing up to seventy percent of its drone operators in a single week. Another lost most of its vehicles, drone launch sites, antennas, and communications equipment. In Kursk, the pressure grew so severe that Ukrainian forces ultimately withdrew.
Compounding the problem is the time it takes to evacuate wounded soldiers. The medical "golden hour" standard has collapsed in Ukraine, where evacuation can now take twenty-four to seventy-two hours. A US veteran fighting in the war lamented that soldiers now face a "golden three days," noting that a friend hit by shrapnel, normally an easy wound to treat, required a leg amputation because of the delayed evacuation.
The situation is also a matter of numbers. Ukrainian officials have acknowledged that roughly 200,000 troops have gone absent without leave and that some two million men of military age are evading mobilization, while Russia holds a significant manpower advantage. Ukrainian frontline units now operate at fifty to sixty percent of authorized strength, with some as low as thirty percent.
This combination of relentless danger and severe manpower shortages is pushing Ukraine toward autonomous weapons systems on land, in the air, and at sea.
Mykhailo Fedorov, Ukraine's Minister of Digital Transformation, has stated that the country "needs to remove UAV operators from the battlefield." The near-term goal is enabling operators to control drones from anywhere in the country; the ultimate objective is full drone autonomy.
Ukraine has also deployed armed ground robots in place of infantry on the battlefield. In late 2025, Ukraine's robot army held frontline positions for forty-five straight days. The systems were controlled remotely from safe locations and reloaded every forty-eight hours. Ukrainian officials called it a first in modern warfare. A commander within the Third Army Corps said, "Robots do not bleed."
By the end of 2025, drones were responsible for more than eighty percent of all enemy targets destroyed in Ukraine, according to officials. "We don't have infantry. We do drones. We kill with drones. We save with drones. We liberate with drones," one commander said.
Ukraine's ambassador to the UK, Valerii Zaluzhnyi, predicts a rapid evolution for these systems. He believes that in the near future, these robots will be used "not just on their own, but as part of large, AI-powered swarms of drones" across all domains.
Budget
Those biological and manpower pressures interact directly with the economics of drone warfare. One-way attack drones can be deployed for as little as $400, and in 2025 Ukraine allocated $2.8 billion to procure millions of them. The sheer volume of cheap drones, paired with AI-driven target identification, compresses the entire kill chain, reducing sensor-to-shooter timelines from days to minutes. A similar dynamic has played out in the confrontation between the United States, Israel, and Iran, where cheap semiautonomous attack drones have been deployed by the thousands. But this shift is not only about money and hardware: vast numbers of cheap, AI-enabled drones also transform what any human can realistically perceive, decide, and authorize in time.
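The procurement math is easy to verify at the margins. A quick sanity check, using only the two figures cited above and the purely illustrative assumption that every unit is a $400 one-way drone (real procurement mixes far costlier types):

```python
# Upper-bound check on the "millions of drones" claim. Both figures come
# from the text; treating the whole budget as $400 units is an
# illustrative simplification, so this is a ceiling, not an estimate.

BUDGET_USD = 2_800_000_000        # reported 2025 drone procurement budget
CHEAPEST_UNIT_USD = 400           # reported floor price of a one-way drone

max_units = BUDGET_USD // CHEAPEST_UNIT_USD
print(f"upper bound: {max_units:,} drones")   # upper bound: 7,000,000 drones
```

Even if higher-end systems consume most of the budget, the order of magnitude explains why target volume, not platform scarcity, has become the binding constraint.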
Cognition
These compressed timelines and target volumes force a rethink of human cognition as the limiting factor in AI-enabled warfare. Ukraine's experience with systems like the Avengers AI platform and the Delta command-and-control environment illustrates how quickly human oversight can be stretched to the breaking point.
The Avengers AI platform, used for offensive targeting and integrated into Ukraine's Delta command and control system, can identify up to 12,000 enemy assets per week through automated analysis of drone and camera feeds. The system does not fire weapons by itself; humans still validate targets, allocate scarce munitions, and manage escalation. Ukrainian officials emphasize that Avengers is meant to filter, not replace, human judgment. But the volume raises a governance question: at what point does human validation become a fiction, as exhausted analysts and commanders "rubber-stamp" AI recommendations they cannot meaningfully re-evaluate? This has already been observed in other conflicts, including Gaza.
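The saturation point can be made concrete with a back-of-the-envelope calculation. Only the 12,000-per-week figure comes from the reporting above; the analyst count and per-target review time are illustrative assumptions, not reported numbers:

```python
# Rough feasibility check on human validation of AI-surfaced targets.
# TARGETS_PER_WEEK is the reported figure; staffing and review-time
# values are assumptions chosen only to illustrate the arithmetic.

TARGETS_PER_WEEK = 12_000       # reported Avengers detection volume
ANALYSTS = 20                   # assumed size of a validation cell
MINUTES_PER_REVIEW = 6          # assumed time for a meaningful review
SHIFT_MINUTES = 8 * 60          # one 8-hour shift per analyst per day

targets_per_day = TARGETS_PER_WEEK / 7                              # ~1,714
capacity_per_day = ANALYSTS * SHIFT_MINUTES / MINUTES_PER_REVIEW    # 1,600

print(f"targets/day: {targets_per_day:.0f}")
print(f"reviewable/day: {capacity_per_day:.0f}")
print(f"load factor: {targets_per_day / capacity_per_day:.0%}")  # over 100%
```

Under these assumptions the queue grows every day, so "validation" quietly degrades into triage, and the only way to clear the backlog is to spend less time per target: exactly the rubber-stamping failure mode at issue.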
By contrast, Ukraine's Octopus interceptor drone is designed to detect and destroy incoming Russian drones mid-air without requiring a human to approve each intercept. Requiring a human to approve every intercept in a saturation drone attack can result in more civilian and military casualties than allowing a supervised-autonomous system to fire within fractions of a second under pre-defined rules of engagement.
This mirrors what the 2023 update to U.S. Department of Defense Directive 3000.09 calls "operator-supervised autonomous weapon systems," which are permitted to select and engage targets under human supervision for time-critical defense — especially static defense of installations and defense of platforms against saturation attacks. Full autonomy remains rare: most systems remain operator-in-the-loop or operator-on-the-loop, with autonomy used for terminal guidance, navigation through jamming, or collision avoidance rather than independent target selection.
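In software terms, supervised autonomy of this kind amounts to an engagement gate: the system may fire on its own only when every pre-defined rule is satisfied, and a supervising human can inhibit it at any time. The sketch below is hypothetical; the thresholds, field names, and logic are illustrative assumptions, not drawn from Octopus or any fielded system:

```python
# Hypothetical engagement gate for an operator-supervised point-defense
# interceptor. Autonomy is confined to engagements too fast for human
# approval; everything else defers to a person. All values illustrative.

from dataclasses import dataclass

@dataclass
class Track:
    is_hostile: bool          # classified hostile by the detection pipeline
    confidence: float         # classifier confidence, 0..1
    seconds_to_impact: float  # time remaining before the threat arrives
    over_friendly_area: bool  # geometry that risks friendly or civilian harm

def may_engage(track: Track, operator_inhibit: bool,
               min_confidence: float = 0.95,
               max_human_decision_s: float = 2.0) -> bool:
    """Return True only if pre-defined ROE permit autonomous engagement."""
    if operator_inhibit:                # the human override always wins
        return False
    if not track.is_hostile or track.confidence < min_confidence:
        return False
    if track.over_friendly_area:        # ambiguous geometry goes to a human
        return False
    # Autonomy applies only when waiting for approval means a missed intercept.
    return track.seconds_to_impact < max_human_decision_s
```

The design point is the last line: the machine's authority exists only inside the window where human reaction time has already been exceeded; slower engagements fall back to ordinary human decision.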
Systems like Avengers and Octopus show that autonomy is already being used in different parts of the kill chain — filtering targets at scale or firing within fractions of a second under predefined rules of engagement — often at speeds no human can match. As the volume and tempo of AI-generated recommendations rise, the risk grows that operators will "rubber-stamp" system outputs they can no longer meaningfully re-evaluate.
Accountability
As battlefield realities push humans further from direct control, questions of accountability come to the fore: when is human-in-the-loop oversight meaningful, and when is it theater?
These are not purely technical choices; they are institutional and doctrinal ones. Architecture becomes policy — the way the system is wired effectively decides how tightly humans are tied into day-to-day combat decisions. Documenting intent and assigning responsibility for civilian harm cannot be an afterthought; it must be designed into the system from the start.
The harder ethical question is whether preserving human-in-the-loop safeguards is always the right thing to do — or whether, in some cases, it is more ethical to admit where humans cannot keep pace. The real governance question is not whether to keep a human in the loop in the abstract, but which loops we deliberately anchor in human cognition and institutional authority, and which we are prepared to delegate.
Conclusion
Within the broader discourse on autonomous systems, Ukraine provides empirical evidence that the erosion of human oversight is a systemic reality of modern combat, not a hypothetical risk. The compounding forces of human biological limits, from localized attrition to universal thresholds of reaction time, along with the proliferation of low-cost drones and unprecedented data velocity, push the human operator ever further from direct control. Consequently, true accountability cannot rest on an operator's final click under fire; it must be deliberately designed into the entire operational process (architectures, workflows, and governance) that leads up to that moment. The governance question is no longer whether to keep a human "in the loop" in the abstract, but which loops humans must own, how much cognitive load they can bear, and how fast wartime institutions can adapt command-and-control (C2) and oversight structures.
To help policymakers and practitioners translate these insights into practice, we offer three mutually reinforcing lines of effort.
First, decide which loops humans own. Make human placement an explicit design decision, not a slogan. For each mission type (for example, air defense; ISR; long-range strike; information operations), require a short statement of where humans sit on the continuum (in/on/out of the loop), why, and what tradeoffs you are accepting in speed, survivability, and escalation risk. Reserve true "human-in-the-loop" control for low-tempo, high-stakes decisions, and use Ukraine's experience to distinguish between high-volume, time-critical defensive engagements — better suited to supervised autonomy like Octopus-style interceptors — and lower-tempo but politically or ethically weighty decisions, where humans should remain the real bottleneck.
In parallel, reframe ethics around actual control, not formalities. Move policy language away from blanket promises that humans will "approve every shot" toward domain-specific statements about where humans truly control outcomes and where they supervise architectures that act faster than they can. Document human intent in system design, not only in rules of engagement, so accountability is anchored in what commanders ask AI systems to optimize, rather than solely in an operator's last-minute approval.
Second, design systems to manage cognitive overload. Treat human cognitive limits as a hard design constraint, not a staffing problem. Cap and structure AI output for human decision-makers by limiting how many "priority" alerts any individual can receive in a given timeframe, using tiered queues and automated de-duplication — especially in environments like Delta/Avengers, which can surface thousands of targets per week. Mandate machine-readable rationales and confidence scores so human review becomes targeted supervision rather than binary approve-or-reject decisions. Instrument "rubber-stamping" as a safety signal rather than a success metric. Treat near-100-percent approval rates under high load as a warning, require periodic audits of how often humans overrule or modify AI outputs, and adjust triage logic and escalation pathways based on those findings.
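As one illustrative sketch of what "instrumenting rubber-stamping" could look like in software, the following monitor flags review queues where near-universal approval coincides with high decision tempo. The class, thresholds, and interfaces are hypothetical assumptions for illustration, not features of any fielded system:

```python
# Hypothetical audit hook for human review of AI target recommendations.
# Near-100% approval under high load is treated as a warning signal, not
# a success metric. All thresholds are illustrative assumptions.

from collections import deque

class ApprovalAudit:
    """Track recent human decisions on AI recommendations."""

    def __init__(self, window: int = 200, approval_alarm: float = 0.98,
                 hourly_load_alarm: int = 40):
        self.window = deque(maxlen=window)  # True = approved unchanged
        self.hourly_count = 0               # decisions in the current hour
        self.approval_alarm = approval_alarm
        self.hourly_load_alarm = hourly_load_alarm

    def record(self, approved_unchanged: bool) -> None:
        self.window.append(approved_unchanged)
        self.hourly_count += 1

    def new_hour(self) -> None:
        self.hourly_count = 0

    def approval_rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0

    def rubber_stamp_warning(self) -> bool:
        """Flag near-universal approval under high decision tempo."""
        return (len(self.window) >= 50
                and self.approval_rate() >= self.approval_alarm
                and self.hourly_count >= self.hourly_load_alarm)
```

A warning from a monitor like this would not prove misconduct; it would trigger the audits and triage-logic adjustments described above, turning "meaningful human control" from an assertion into a measured quantity.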
Third, govern battlefield AI at responsible speed. Align architectures, governance, and professional education with the operational realities Ukraine is already revealing. Build CJADC2-style systems around actual operational needs: follow lessons from Ukraine's Delta by starting with a single web-based common operational picture that fuses multi-domain data, then layering AI analytics on top. Co-design compute and command, recognizing that where you place compute (cloud, theater data centers, edge) determines which forms of human oversight are realistic at different echelons. Create wartime AI-governance playbooks with predefined fast-lane processes for testing, fielding, and monitoring AI tools in combat. Encourage modular autonomy packages that can be certified, updated, and reused, and tie funding to governance metrics such as robust logging, verification and validation, red-teaming, and post-incident review. Finally, prepare people and organizations for AI-enabled campaigns by making AI literacy and "AI tradecraft" core elements of professional military education, exercising AI-failure scenarios in wargames, and embedding small AI and data teams with operational units, as Ukraine and its advisers have already begun to do.
The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals. Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.