
Topic 4: Geopolitical Competition

This topic examines AI development as a geopolitical race with national security implications. Aschenbrenner argues that superintelligence will confer decisive military advantage comparable to nuclear weapons, that China has a credible path to competitiveness, and that democracies must "win" this race to preserve freedom. This is among the most hawkish mainstream perspectives on AI governance.

Why this matters for Danish AI policy: As a small NATO member, Denmark must navigate US-China AI competition. Should Denmark align fully with US approaches, pursue European strategic autonomy, or seek a balanced position? What are the security implications of AI dependence?

Required Reading

The Free World Must Prevail

Author: Leopold Aschenbrenner

Publication: Situational Awareness Chapter IIId, June 2024

Length: ~6,500–7,500 words

URL: situational-awareness.ai/the-free-world-must-prevail

Full PDF: available at situational-awareness.ai

Key Sections

Author Credentials

Leopold Aschenbrenner was a member of OpenAI's Superalignment team, working with Ilya Sutskever. He graduated from Columbia University as valedictorian at age 19. He was fired from OpenAI in April 2024, a dismissal he attributes to security concerns he had raised internally. He subsequently founded Situational Awareness LP, an AI-focused investment fund managing over $1.5 billion.

Note: His departure from OpenAI was controversial. Some view his analysis as prescient; others see it as alarmist and self-serving.

Supplementary Materials

Podcasts & Videos

Additional Context

January 2026 Update: DeepSeek

Contrasting Perspectives

Guiding Questions

  1. The race framing: Aschenbrenner frames AI development as a race democracies must win. Is this framing accurate? What are the risks of accepting this framing? What are the risks of rejecting it?
  2. Military advantage: The essay argues superintelligence confers decisive military advantage. How credible is this claim? How should Denmark, as a NATO member, weigh AI military considerations?
  3. China assessment: Aschenbrenner argues China can be competitive through compute buildout and algorithm theft. How should Danish policymakers assess Chinese AI capabilities? What are Denmark's specific vulnerabilities?
  4. Small state strategy: Denmark cannot independently compete in an AI "race." What strategies are available to small democracies? Alignment with the US? European coordination? Neutrality? Niche specialization?
  5. Critique: Aschenbrenner was fired from OpenAI and now runs an AI investment fund. How should policymakers weigh his arguments given potential conflicts of interest? Does his insider knowledge add or detract from credibility?

Presentation Angle Ideas

  1. "Denmark's AI Security Strategy": Accept the geopolitical competition framing and propose how Denmark should position itself: deepening NATO AI cooperation, hosting allied AI infrastructure, or developing specific defensive capabilities.
  2. "Critique of the Race Framing": Challenge the premise that AI development is a zero-sum race. Argue for international cooperation, arms control analogies, or alternative framings that better serve Danish interests.
  3. "Balancing Alliance and Autonomy": Explore tension between US alignment and European strategic autonomy. How can Denmark maintain transatlantic alliance while supporting EU digital sovereignty (Topic 5)?
  4. "The Small State Playbook": Focus specifically on what strategies small democracies have in great power AI competition. Draw on Denmark's historical experience navigating Cold War dynamics, EU integration, and Nordic cooperation.

Note: This reading presents the opposite worldview from Topic 3 (Narayanan & Kapoor). Together they represent the key debate: Is AI development an urgent national security race, or a gradual technological transition that existing institutions can manage?