In “Survival of the Smartest? Defense AI in Ukraine” (DAIO, 2024), the Defense AI Observatory examines the shifting role of autonomy in combat and the evolving practicality of “human in the loop” (HITL) principles. Here is a closer look at the lessons on autonomy emerging from Ukraine’s battlefield.
The “human in the loop” principle has long been regarded as a safeguard in autonomous warfare, but the war in Ukraine has challenged its practicality. By ensuring human oversight of critical decisions, especially lethal engagements, HITL aimed to uphold ethical standards and prevent unintended harm. However, in the face of high-speed engagements, electronic warfare, and complex operational environments, the assumptions underlying HITL have proven difficult, if not impossible, to uphold. The conflict has prompted a shift in the approach to autonomous systems, revealing a new set of principles that could redefine how militaries worldwide think about AI in warfare.
The Problem with “Human in the Loop”
The HITL principle was built on noble intentions: ensuring ethical accountability and reducing collateral damage by keeping a human operator involved in the decision-making process. However, practical implementation in active conflict has highlighted several key flaws:
Speed of Modern Combat: In real-world scenarios, there is often only a 2-5 second window between target identification and engagement. This short timeframe leaves little room for meaningful human intervention, making HITL a mere formality rather than a functional safeguard. Delays introduced by HITL can render systems ineffective or put them at risk of failing to act on critical opportunities.
Disrupted Connectivity: The pervasive use of electronic warfare (EW) disrupts communication channels, making remote control and human intervention impractical. Systems designed around HITL rely on constant connectivity, which can be easily compromised in contested environments. Without a resilient link to human operators, autonomous systems need the capacity to make independent decisions, especially in high-stakes scenarios.
Verification Challenges: Distinguishing between autonomous and human-controlled operations is complex, particularly after engagements. Once a mission is executed, it is nearly impossible to verify whether human oversight was truly involved, complicating accountability and compliance with ethical guidelines.
These limitations suggest that while HITL has ethical appeal, its operational viability is questionable. The Ukrainian experience illustrates that HITL may inhibit rather than enhance defensive capabilities, especially against technologically advanced adversaries.
Assumptions vs. Reality: What Was Learned
In conventional settings, HITL was assumed to provide a moral and regulatory framework for autonomous warfare. It was seen as a way to mitigate risks, ensuring that lethal force would only be used under human authorization. The assumption was that human intervention would be feasible, maintaining ethical integrity and reducing unintended harm. However, Ukraine's battlefield has demonstrated that these assumptions do not hold under the speed and complexity of real-world combat.
The HITL principle, as applied in Ukraine, often became a barrier to rapid response and tactical agility. Human intervention was reduced to a “checkbox” step, insufficient for practical decision-making in real-time engagements. Additionally, as electronic warfare escalated, the reliance on connectivity for HITL control became a liability, leaving systems vulnerable and forcing a shift toward greater autonomy to ensure operational continuity.
These lessons have shown that the assumptions behind HITL—namely, the availability of time, connectivity, and human oversight—do not align with the realities of high-intensity conflict. This mismatch has spurred a new approach to autonomous warfare, one that prioritizes adaptive autonomy and context-based human oversight rather than rigid control.
Shaping the Future: New Principles of Modern Warfare
The experiences in Ukraine suggest that militaries need to adopt a new set of principles to effectively integrate AI in combat:
Contextual Autonomy: Rather than enforcing HITL at all times, AI systems should operate with adaptable levels of autonomy based on the operational context. In high-speed or signal-compromised scenarios, systems should be able to function autonomously. In other conditions, human oversight can be reinstated. This flexible approach allows AI to adapt to the battlefield’s demands without compromising speed or effectiveness.
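A minimal sketch of what context-dependent autonomy might look like in software is given below. The mode names, thresholds, and the fields of the hypothetical OperationalContext are illustrative assumptions, not drawn from any fielded system; the point is only that the autonomy level is computed from the tactical situation rather than fixed in advance.

```python
from dataclasses import dataclass
from enum import Enum


class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = "operator confirms every engagement"
    HUMAN_ON_THE_LOOP = "operator can veto within a time window"
    FULL_AUTONOMY = "system acts on pre-authorised criteria"


@dataclass
class OperationalContext:
    # Hypothetical inputs a mission computer might track.
    link_quality: float          # 0.0 (fully jammed) .. 1.0 (clear)
    seconds_to_engage: float     # window between detection and engagement
    operator_available: bool     # is a human currently on station?


def select_mode(ctx: OperationalContext) -> AutonomyMode:
    """Pick an autonomy level from the tactical context.

    Illustrative thresholds: a sub-5-second engagement window or a
    degraded link pushes the system toward more autonomy, mirroring
    the battlefield pressures described above.
    """
    if ctx.link_quality < 0.2 or not ctx.operator_available:
        return AutonomyMode.FULL_AUTONOMY        # no reliable human channel
    if ctx.seconds_to_engage < 5.0:
        return AutonomyMode.HUMAN_ON_THE_LOOP    # veto-only: too fast to confirm
    return AutonomyMode.HUMAN_IN_THE_LOOP        # time and link permit oversight


if __name__ == "__main__":
    jammed = OperationalContext(link_quality=0.1, seconds_to_engage=3.0,
                                operator_available=True)
    print(select_mode(jammed))   # AutonomyMode.FULL_AUTONOMY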
Resilience to Electronic Warfare: Defence AI must be designed to operate in environments with limited connectivity, where human control may not be feasible. This includes developing offline capabilities, pre-programmed decision protocols, and fail-safe measures that enable autonomous systems to execute critical tasks without real-time human input.
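One way to make the resilience requirement concrete is a link watchdog that falls back to pre-programmed behaviour when the command channel goes silent. The timeout, fallback options, and class names in this sketch are assumptions chosen for illustration.

```python
import time
from enum import Enum, auto


class Fallback(Enum):
    # Pre-programmed behaviours, loaded before launch; names are illustrative.
    CONTINUE_MISSION = auto()   # proceed using onboard targeting only
    LOITER = auto()             # hold position and keep retrying the link
    RETURN_TO_BASE = auto()     # abort and come home


class LinkWatchdog:
    """Track command-link heartbeats and decide when to go autonomous."""

    def __init__(self, timeout_s: float, fallback: Fallback):
        self.timeout_s = timeout_s
        self.fallback = fallback
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Call whenever a valid packet arrives from the operator."""
        self.last_heartbeat = time.monotonic()

    def link_lost(self) -> bool:
        return time.monotonic() - self.last_heartbeat > self.timeout_s

    def current_directive(self) -> Fallback | None:
        # None means the link is healthy and the operator stays in control.
        return self.fallback if self.link_lost() else None


if __name__ == "__main__":
    watchdog = LinkWatchdog(timeout_s=2.0, fallback=Fallback.CONTINUE_MISSION)
    time.sleep(2.1)                      # simulate jamming: no heartbeats arrive
    print(watchdog.current_directive())  # Fallback.CONTINUE_MISSION
```

The design choice worth noting is that the fallback is decided before launch, so losing the link never leaves the system without a directive.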
Ethical Constraints in Code: Embedding ethical parameters directly into AI algorithms can serve as an alternative to HITL, ensuring adherence to international norms even when humans cannot directly intervene. These constraints can include automatic de-escalation protocols, target prioritization based on ethical considerations, and avoidance of non-combatant engagement.
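The idea of hard-coding ethical parameters can be sketched as a gate that every engagement decision must pass before weapons release. The rule set and the attributes of the hypothetical Track below are deliberate simplifications of what real rules of engagement would demand.

```python
from dataclasses import dataclass


@dataclass
class Track:
    # Hypothetical attributes an onboard classifier might produce.
    classification: str        # e.g. "armor", "artillery", "unknown", "civilian"
    confidence: float          # classifier confidence, 0.0 .. 1.0
    civilians_nearby: bool     # collateral-risk flag from onboard sensing
    in_authorized_zone: bool   # inside the pre-approved engagement area?


# Illustrative constraint set: each rule returns True if engagement is permitted.
ETHICAL_CONSTRAINTS = [
    lambda t: t.classification not in ("unknown", "civilian"),  # never strike unknowns
    lambda t: t.confidence >= 0.95,                             # high-confidence ID only
    lambda t: not t.civilians_nearby,                           # avoid collateral harm
    lambda t: t.in_authorized_zone,                             # geographic limits
]


def engagement_permitted(track: Track) -> bool:
    """All constraints must pass; any failure de-escalates to 'do not engage'."""
    return all(rule(track) for rule in ETHICAL_CONSTRAINTS)


if __name__ == "__main__":
    ambiguous = Track("unknown", confidence=0.99,
                      civilians_nearby=False, in_authorized_zone=True)
    print(engagement_permitted(ambiguous))  # False: unknowns are never engaged
```

Because every rule must pass, the default outcome of any ambiguity is de-escalation, which is the behaviour the principle calls for.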
Iterative Combat Testing and Adaptation: Ukraine’s rapid-cycle testing of AI systems on the battlefield has shown the value of real-time feedback loops. This process allows for immediate refinement of autonomous capabilities based on practical challenges and evolving threat landscapes. Continuous adaptation ensures that AI remains responsive to new conditions and optimizes its operational effectiveness.
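The shape of such a feedback loop can be sketched as deploy, observe, refine. The simulated outcome model, metric, and update rule below are placeholders invented to make the loop runnable end to end; they stand in for real after-action data.

```python
import random


def field_trial(threshold: float, n_engagements: int = 100) -> float:
    """Stand-in for a deployment cycle: returns an observed success rate.

    Hypothetical model: stricter confidence thresholds trade raw hit
    rate for safety. A real loop would ingest after-action reports.
    """
    random.seed(42)
    hits = sum(random.random() < (0.9 - 0.3 * threshold)
               for _ in range(n_engagements))
    return hits / n_engagements


def iterate(threshold: float, target_rate: float, rounds: int) -> float:
    """Adjust one tunable parameter after each deployment cycle."""
    for cycle in range(rounds):
        rate = field_trial(threshold)
        print(f"cycle {cycle}: threshold={threshold:.2f} success={rate:.2f}")
        # Simple proportional update: tighten if overperforming, loosen otherwise.
        threshold += 0.05 if rate > target_rate else -0.05
    return threshold


if __name__ == "__main__":
    iterate(threshold=0.8, target_rate=0.7, rounds=3)
```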
Dynamic Delegation of Human Oversight: Instead of a fixed HITL approach, command hierarchies should have the flexibility to dynamically assign levels of human oversight. This "Dynamic Delegation of Control" enables commanders to adjust control based on the mission’s criticality and tactical environment, making oversight a fluid rather than rigid requirement.
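Dynamic delegation could be expressed as a per-mission oversight setting that a commander issues and the platform enforces. The levels, criticality labels, and MissionOrder fields below are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import IntEnum


class OversightLevel(IntEnum):
    # Ordered: higher values mean more human control. Names are illustrative.
    AUTONOMOUS = 0          # pre-authorised engagements, no confirmation
    VETO_WINDOW = 1         # engage unless an operator vetoes in time
    CONFIRM_EACH = 2        # every engagement needs explicit approval


@dataclass
class MissionOrder:
    mission_id: str
    criticality: str              # e.g. "routine", "sensitive", "strategic"
    oversight: OversightLevel     # assigned per mission, not fixed in policy


def delegate(criticality: str) -> OversightLevel:
    """Map mission criticality to a default oversight level.

    A commander could override this default; the point is that the
    level is assigned per mission rather than hard-wired system-wide.
    """
    defaults = {
        "routine": OversightLevel.AUTONOMOUS,
        "sensitive": OversightLevel.VETO_WINDOW,
        "strategic": OversightLevel.CONFIRM_EACH,
    }
    return defaults.get(criticality, OversightLevel.CONFIRM_EACH)  # fail safe

if __name__ == "__main__":
    order = MissionOrder("M-017", "sensitive", delegate("sensitive"))
    print(order.oversight.name)  # VETO_WINDOW
```

Unrecognised criticality levels fall back to the most restrictive setting, so the fluidity the principle calls for never comes at the cost of an unreviewed default.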
Lessons for Other Militaries
Ukraine’s experience with defence AI offers valuable lessons for militaries around the world. Many countries continue to emphasize HITL in their autonomous warfare policies, assuming that human intervention is always feasible and beneficial. However, the Ukrainian model demonstrates that flexibility, adaptability, and autonomy are crucial to modern warfare. The challenges faced in Ukraine suggest that:
Rigid HITL policies may hinder effectiveness in fast-paced combat environments, creating vulnerabilities that adversaries can exploit.
A shift toward contextual autonomy allows for greater resilience and operational continuity, especially in EW-dominated scenarios where connectivity is unreliable.
Embedding ethical constraints in AI algorithms can provide a safeguard when direct human intervention is not possible, maintaining adherence to international norms.
For militaries invested in AI, these insights underscore the importance of re-evaluating traditional assumptions and adapting strategies to the demands of modern warfare. By adopting adaptive and resilient AI principles, militaries can enhance their responsiveness and effectiveness on the battlefield while upholding ethical standards.
Conclusion
The Ukraine conflict has reshaped the understanding of autonomous systems in warfare, revealing the limitations of the HITL principle. By prioritizing contextual autonomy, EW resilience, and embedded ethics, Ukraine has pioneered a new approach to defence AI, one that aligns with the realities of high-speed, high-intensity conflict. For other militaries, these insights offer a blueprint for building more effective and ethically responsible autonomous systems. As the nature of warfare continues to evolve, so too must the principles that guide the deployment of AI on the battlefield. This shift represents not just an operational necessity but a fundamental rethinking of how we integrate human oversight and machine autonomy in the defence of nations.