This topic challenges the dominant narratives of AI exceptionalism. Narayanan and Kapoor, authors of the influential book AI Snake Oil, argue that AI should be understood as a "normal" general-purpose technology like electricity or the internet: transformative, yes, but not an autonomous superintelligent entity requiring radically new governance approaches. They argue that AI's impacts will unfold gradually over decades and that existing institutions are sufficient to maintain human control.
Why this matters for Danish AI policy: If AI is "normal," Denmark can rely more heavily on adapting existing regulatory frameworks. If AI is exceptional, entirely new institutions may be needed. This framing question underlies all AI policy decisions.
Required Reading
AI as Normal Technology
Authors: Arvind Narayanan & Sayash Kapoor
Publication: Knight First Amendment Institute at Columbia University, April 2025
Length: Full essay ~15,000–18,000 words; assign Parts I and IV only (~6,500 words)
URL: knightcolumbia.org/content/ai-as-normal-technology
Alternative: normaltech.ai (the authors' Substack newsletter)
Sections to Focus On
- Part I: "AI is normal" thesis. The core argument framed as description, prediction, and prescription.
- "Ladder of Generality" (Figure 2). Key conceptual framework.
- "Capability-reliability gap." Why AI capabilities don't automatically translate to reliable deployment.
- Part IV: Policy implications. What "normal technology" means for governance.
Author Credentials
Arvind Narayanan is Professor of Computer Science at Princeton and Director of the Center for Information Technology Policy. Named to TIME's list of the 100 Most Influential People in AI, he received the Presidential Early Career Award for Scientists and Engineers (PECASE).
Sayash Kapoor is a PhD candidate at Princeton and Senior Fellow at Mozilla.
Both co-authored AI Snake Oil (2024), a widely cited critique of AI hype.
Supplementary Materials
Podcasts & Videos
Debate Context
- Both authors debated Daniel Kokotajlo of the AI 2027 project, representing opposing views on AI timelines.
- This reading pairs with Topic 4 (Aschenbrenner) as opposing worldviews.
Additional Context
- AI Snake Oil (book). Deeper background on their arguments.
- EU AI Act. Example of treating AI as "normal" (sector-specific, risk-based).
- Historical technology transitions. Electricity, the internet, and prior general-purpose technologies (GPTs).
Guiding Questions
- The normality claim: Narayanan and Kapoor argue AI is "normal." What exactly do they mean by this? Is it a description of current AI, a prediction about future AI, or a prescription for how we should think about AI?
- Capability-reliability gap: The authors argue AI capabilities don't automatically translate to reliable real-world deployment. What are examples of this gap? How should Danish policy account for it?
- Institutional sufficiency: They claim existing institutions are sufficient to govern AI. Is this true for Denmark? What existing Danish/EU institutions are most relevant? What gaps, if any, exist?
- The timeline question: If AI impacts unfold over decades rather than years, how should this change Danish policy priorities? What's the cost of preparing for rapid change that doesn't materialize?
- Critique: Some argue this view underestimates discontinuous progress. How should policymakers weigh "normal technology" arguments against more alarmist perspectives like Aschenbrenner's (Topic 4)?
Presentation Angle Ideas
- "Adapt, Don't Reinvent": Argue that Denmark should focus on adapting existing regulatory frameworks (labor law, data protection, sector-specific regulation) rather than creating new AI-specific institutions.
- "The Capability-Reliability Gap in Danish Context": Use the authors' framework to analyze specific Danish use cases. Where is AI being deployed despite reliability concerns? What policies address this?
- "Hedging Between Worldviews": Acknowledge deep uncertainty about AI trajectories. Propose a Danish policy approach that's robust whether AI proves "normal" or "exceptional."
- "Lessons from Past Technologies": Use historical analogies (electricity, internet, previous automation waves) to inform Danish AI policy. What did Denmark get right and wrong in past technology transitions?
Note: This reading presents the opposite worldview from Topic 4 (Aschenbrenner). Together they represent the key debate in AI governance: Is AI a normal technology requiring incremental policy adaptation, or an exceptional technology requiring urgent, transformative governance?