AI • Deterrence • Research Design
01 / 21
Abstract / Overview
This presentation outlines a research design that examines how AI alters the way deterrence works.
Rather than focusing on policy recommendations, it explores the causal mechanisms through which AI reshapes decision-making, system survivability, and crisis stability.
Goal: update deterrence theory for the era of military AI.

02 / 21
Research Background
Deterrence theory was developed during the Cold War to explain how war can be avoided between powerful states.
Today, AI is increasingly used in intelligence, surveillance, and reconnaissance (ISR), early warning, and decision support.
These changes raise new questions about whether traditional deterrence logic still applies.

03 / 21
Research Motivation
Taiwan faces a structural disadvantage in traditional military balance.
Deterrence based mainly on firepower offers limited leverage for an island polity.
AI introduces new strategic variables—such as speed and system resilience—that may reshape deterrence in asymmetric contexts.

04 / 21
Research Question
Main Research Question:
How does artificial intelligence alter the causal mechanisms through which deterrence operates?
Sub-questions:
• Does AI strengthen or weaken crisis stability?
• How does AI affect deterrence dynamics in U.S.–China–Taiwan relations?
This study focuses on theory development rather than prediction.

05 / 21
Literature Review I: Traditional Deterrence Theory
Traditional deterrence theory highlights two forms: deterrence by punishment and deterrence by denial.
It is built on three core elements:
• Capability
• Credibility
• Communication

06 / 21
Literature Review II: Core Assumptions
Traditional deterrence assumes:
• Decision-making is relatively slow and deliberate
• Signals are interpreted by human decision-makers
• Escalation follows predictable and controllable steps
These assumptions support crisis stability—neither side wants to strike first.

07 / 21
Literature Gap I: AI and Decision Speed
AI systems process information faster than humans.
They compress decision time during crises and reduce opportunities for deliberation.
As speed rises, deterrence becomes more sensitive to timing and system performance.

08 / 21
Literature Gap II: AI and Misperception
AI systems classify events even when real-world situations are ambiguous.
Military exercises or defensive actions may be labeled as high-risk anomalies.
This raises the risk of misperception and unintended escalation.
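The misperception mechanism on this slide can be illustrated with a toy Bayes calculation (all numbers hypothetical): when genuine attack preparations are rare, even a fairly accurate classifier produces mostly false alarms.

```python
# Toy illustration with hypothetical parameters: base-rate effects
# mean most "high-risk" alerts are false alarms when true events are rare.
def posterior_attack(prior, sensitivity, false_positive_rate):
    """P(attack | alert) via Bayes' rule."""
    true_alert = sensitivity * prior               # alerts on real preparations
    false_alert = false_positive_rate * (1 - prior)  # alerts on exercises, etc.
    return true_alert / (true_alert + false_alert)

# Assumed: 0.1% base rate, 95% sensitivity, 5% false-positive rate.
p = posterior_attack(prior=0.001, sensitivity=0.95, false_positive_rate=0.05)
print(f"P(attack | alert) = {p:.3f}")  # well under 5%: most alerts are false
```

The specific numbers are placeholders; the structural point is that ambiguous, rare events make automated classification alerts unreliable as signals, which is the escalation risk the slide describes.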

09 / 21
Theoretical Framework: Three Core Variables
This study proposes three key variables introduced by AI:
• Speed — how quickly decisions are made
• Survivability — resilience of AI-dependent systems
• Crisis Stability — incentive to avoid striking first
Together, these variables reshape deterrence mechanisms.

10 / 21
Variable One: Speed
AI accelerates threat detection and analysis.
What once took hours or days may now take minutes or seconds.
Speed can improve responsiveness, but it can also increase pressure to act quickly—reducing restraint.

11 / 21
Variable Two: Survivability
In the AI era, survivability extends beyond weapons platforms.
It includes:
• Data centers
• Cloud infrastructure
• Command-and-control (C2)
• Data pipelines
If these systems fail, deterrence credibility can collapse even if forces remain intact.

12 / 21
Variable Three: Crisis Stability
Crisis stability exists when neither side believes striking first is safer than waiting.
High speed combined with fragile systems can increase the fear of losing response capability, undermining stability.
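The stability condition above can be sketched as a toy payoff comparison (all values hypothetical): crisis stability holds when waiting is at least as attractive as striking first, and survivability of AI-dependent systems is what keeps waiting attractive.

```python
# Illustrative sketch, not a calibrated model. All parameters hypothetical.
def first_strike_incentive(v_strike_first, v_wait, survivability):
    """Payoff gap between striking first and riding out a first strike.

    survivability: probability that AI-dependent C2 and data systems
    survive a first strike and still allow an effective response.
    A positive return value means pressure to strike first (instability).
    """
    expected_wait = survivability * v_wait  # waiting pays off only if systems survive
    return v_strike_first - expected_wait

# Robust systems: waiting dominates, so the crisis is stable.
print(first_strike_incentive(v_strike_first=0.4, v_wait=0.8, survivability=0.9))
# Fragile systems: striking first looks safer, so stability erodes.
print(first_strike_incentive(v_strike_first=0.4, v_wait=0.8, survivability=0.3))
```

The design choice here mirrors the slide's claim: holding the payoffs fixed, lowering survivability alone flips the sign of the first-strike incentive.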

13 / 21
Theoretical Implication
AI does not simply add new tools to deterrence.
It reshapes the conditions under which deterrence can succeed.
Deterrence becomes more fragile and more dependent on speed management and system survivability.

14 / 21
Research Approach
This study adopts a theory-driven research approach.
It does not aim to provide immediate policy advice or quantitative prediction.
Instead, it seeks to refine deterrence theory by identifying key causal mechanisms introduced by AI.

15 / 21
Methodology I: Scientific Realism
Scientific realism assumes that real causal mechanisms exist even if they cannot be directly observed.
AI-driven decision speed, perception shifts, and infrastructure dependence are hard to observe directly, but they can produce real strategic outcomes.
This approach fits the research question.

16 / 21
Methodology II: Pragmatism
Pragmatism allows methodological flexibility when studying complex and emerging technologies.
It supports combining theory, qualitative reasoning, and scenario-based illustration to address AI-related security issues.

17 / 21
Research Design and Methods
The study uses:
• Mechanism-based analysis
• Process tracing
• Comparative reasoning (United States and China)
These methods connect theory to real-world dynamics without overclaiming certainty.

18 / 21
Expected Findings
The study expects to find that deterrence remains possible under AI conditions, but becomes more fragile.
Speed and system survivability are likely to become critical conditions for maintaining crisis stability.

19 / 21
Expected Contributions
This research contributes to deterrence theory by:
• Identifying AI-driven causal mechanisms
• Refining the concept of crisis stability
• Providing a theoretical lens for U.S.–China–Taiwan security dynamics

20 / 21
Research Limitations and Future Research
Key limitations include data availability and rapid technological change.
Future research may expand cases, add quantitative measures, or compare additional regions as AI systems mature.

21 / 21
References (Selected)
Flinders, M. (2018). The future of political science? European Political Science, 17(4), 587–600.
Levy, J. S. (Power Transition Theory & the Rise of China) [uploaded reading].
Geist (2023); Sazhin et al. (2024); George (2025);
Karamchand & Aramide (2025); Kazim (2025).
Course readings: 8 uploaded PDFs used for this presentation.

