Foresight Report, February 2025

AI in Defense: Future Scenarios and Governance Options

Sophia Park
Dr. Robert Miller
Julian Martinez

Executive Summary

This foresight report examines potential futures for artificial intelligence in defense and security contexts, analyzing governance challenges and opportunities across multiple scenarios. Drawing on expert consultations, technical analysis, and international security frameworks, we present four distinct scenarios for AI in defense over the next decade and propose governance approaches for each.

The report identifies critical uncertainties around the pace of AI capability development, international cooperation on governance, deployment contexts, and human-machine teaming models. For each scenario, we assess implications for international stability, humanitarian protection, accountability, and strategic decision-making, offering tailored governance recommendations for policymakers, military organizations, and civil society.

Future Scenarios

Scenario 1: Regulated Integration

AI systems are integrated into defense capabilities under robust international governance frameworks, with strong verification mechanisms and human oversight requirements. Military AI development proceeds cautiously with significant transparency and accountability measures.

Key Governance Challenges

  • Developing effective verification protocols for AI compliance
  • Balancing transparency with legitimate security concerns
  • Ensuring equitable access to defensive AI capabilities

Scenario 2: AI Arms Race

Geopolitical tensions drive competitive AI weapons development with minimal international coordination. Nations prioritize military advantage over safety and ethical considerations, leading to rapid deployment of increasingly autonomous systems with limited testing.

Key Governance Challenges

  • Preventing escalation dynamics and strategic instability
  • Establishing minimum safety standards despite competition
  • Protecting humanitarian principles in conflict

Scenario 3: Fragmented Governance

Regional blocs develop divergent approaches to military AI governance, creating a patchwork of standards and practices. Some regions implement strict limitations while others pursue more permissive approaches, complicating interoperability and international operations.

Key Governance Challenges

  • Managing interoperability between different governance regimes
  • Preventing regulatory arbitrage and race-to-the-bottom dynamics
  • Building bridges between divergent ethical frameworks

Scenario 4: Civilian-Led Restraint

Strong civil society movements and private sector initiatives drive restrictive norms on military AI applications, even in the absence of formal treaties. Public pressure and employee activism constrain government and corporate behavior in military AI development.

Key Governance Challenges

  • Translating informal norms into durable governance mechanisms
  • Addressing clandestine development programs
  • Balancing legitimate security needs with ethical constraints

Governance Recommendations

For International Organizations

  • Develop a dedicated UN framework convention on military AI applications with flexible protocols that can adapt to technological developments
  • Establish international technical standards for verification, testing, and certification of military AI systems
  • Create confidence-building measures to reduce risks of misperception and unintended escalation involving AI systems

For National Governments

  • Implement robust testing and validation protocols for AI systems in defense contexts, with particular attention to edge cases and adversarial scenarios (an illustrative sketch follows this list)
  • Develop clear doctrines for human-machine teaming that maintain meaningful human judgment in use-of-force decisions
  • Establish cross-departmental governance bodies to coordinate military AI development with broader national AI strategies and international commitments
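To make the testing and validation recommendation concrete, the sketch below shows one way an evaluation team might probe an AI decision-support component against edge cases and bounded input perturbations. It is a minimal illustration under stated assumptions, not a reference implementation: the model, field names, and thresholds (EngagementClassifier, SensorReading, the stability criterion) are hypothetical, and real evaluation protocols would cover far richer adversarial behavior and operational conditions.

```python
# Minimal sketch of an edge-case / perturbation evaluation harness for a
# defense AI decision-support component. All names and thresholds here are
# hypothetical illustrations, not references to any real system or standard.
import random
from dataclasses import dataclass


@dataclass
class SensorReading:
    range_km: float
    speed_mps: float
    iff_response: bool  # identification-friend-or-foe transponder reply


class EngagementClassifier:
    """Stand-in for a fielded model; real systems would be far more complex."""

    def classify(self, reading: SensorReading) -> str:
        if reading.iff_response:
            return "friendly"
        if reading.speed_mps > 300 and reading.range_km < 50:
            return "hostile"
        return "unknown"


def perturb(reading: SensorReading, noise: float) -> SensorReading:
    """Apply bounded sensor noise to simulate degraded or manipulated input."""
    return SensorReading(
        range_km=max(0.0, reading.range_km + random.uniform(-noise, noise) * 10),
        speed_mps=max(0.0, reading.speed_mps + random.uniform(-noise, noise) * 100),
        iff_response=reading.iff_response,
    )


def evaluate_stability(model: EngagementClassifier,
                       cases: list[SensorReading],
                       noise: float = 0.5,
                       trials: int = 100) -> float:
    """Fraction of edge cases whose classification stays stable under noise."""
    stable = 0
    for case in cases:
        baseline = model.classify(case)
        if all(model.classify(perturb(case, noise)) == baseline
               for _ in range(trials)):
            stable += 1
    return stable / len(cases)


if __name__ == "__main__":
    edge_cases = [
        SensorReading(range_km=49.9, speed_mps=301, iff_response=False),  # near decision boundary
        SensorReading(range_km=200.0, speed_mps=250, iff_response=True),
        SensorReading(range_km=5.0, speed_mps=0.0, iff_response=False),   # stationary, close
    ]
    score = evaluate_stability(EngagementClassifier(), edge_cases)
    print(f"Stability under perturbation: {score:.0%}")
```

The design point of interest is that the harness reports stability of the model's output under perturbation rather than raw accuracy, since oversight reviews are often concerned with whether recommendations change unpredictably under degraded or manipulated inputs near decision boundaries.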

For Civil Society and Industry

  • Develop and promote technical standards for explainability, reliability, and safety in defense AI applications
  • Establish industry codes of conduct for responsible development of dual-use AI technologies with security applications
  • Create independent monitoring mechanisms to track military AI developments and assess compliance with ethical principles and legal obligations