Topic 3: AI as Normal Technology

This topic challenges the dominant narratives of AI exceptionalism. Narayanan and Kapoor, authors of the influential book AI Snake Oil, argue that AI should be understood as a "normal" general-purpose technology like electricity or the internet: transformative, yes, but not an autonomous superintelligent entity requiring radically new governance approaches. They contend that AI's impacts will unfold gradually over decades and that existing institutions are sufficient to maintain human control.

Why this matters for Danish AI policy: If AI is "normal," Denmark can rely more heavily on adapting existing regulatory frameworks. If AI is exceptional, entirely new institutions may be needed. This framing question underlies all AI policy decisions.

Required Reading

AI as Normal Technology

Authors: Arvind Narayanan & Sayash Kapoor

Publication: Knight First Amendment Institute at Columbia University, April 2025

Length: Full essay ~15,000–18,000 words; assign Parts I and IV only (~6,500 words)

URL: knightcolumbia.org/content/ai-as-normal-technology

Alternative: normaltech.ai (Substack newsletter)

Author Credentials

Arvind Narayanan is Professor of Computer Science at Princeton and Director of the Center for Information Technology Policy. He was named to TIME's 100 Most Influential People in AI and received the Presidential Early Career Award for Scientists and Engineers.

Sayash Kapoor is a PhD candidate at Princeton and a Senior Fellow at Mozilla.

Both co-authored AI Snake Oil (2024), a widely cited critique of AI hype.

Guiding Questions

  1. The normality claim: Narayanan and Kapoor argue AI is "normal." What exactly do they mean by this? Is it a description of current AI, a prediction about future AI, or a prescription for how we should think about AI?
  2. Capability-reliability gap: The authors argue that AI capabilities do not automatically translate into reliable real-world deployment. What are examples of this gap? How should Danish policy account for it?
  3. Institutional sufficiency: They claim existing institutions are sufficient to govern AI. Is this true for Denmark? What existing Danish/EU institutions are most relevant? What gaps, if any, exist?
  4. The timeline question: If AI impacts unfold over decades rather than years, how should this change Danish policy priorities? What's the cost of preparing for rapid change that doesn't materialize?
  5. Critique: Some argue this view underestimates discontinuous progress. How should policymakers weigh "normal technology" arguments against more alarmist perspectives like Aschenbrenner's (Topic 4)?

Presentation Angle Ideas

  1. "Adapt, Don't Reinvent": Argue that Denmark should focus on adapting existing regulatory frameworks (labor law, data protection, sector-specific regulation) rather than creating new AI-specific institutions.
  2. "The Capability-Reliability Gap in Danish Context": Use the authors' framework to analyze specific Danish use cases. Where is AI being deployed despite reliability concerns? What policies address this?
  3. "Hedging Between Worldviews": Acknowledge deep uncertainty about AI trajectories. Propose a Danish policy approach that's robust whether AI proves "normal" or "exceptional."
  4. "Lessons from Past Technologies": Use historical analogies (electricity, internet, previous automation waves) to inform Danish AI policy. What did Denmark get right and wrong in past technology transitions?

Note: This reading presents the worldview opposite to that of Topic 4 (Aschenbrenner). Together, the two readings frame the central debate in AI governance: Is AI a normal technology requiring incremental policy adaptation, or an exceptional technology requiring urgent, transformative governance?