This topic critically examines claims that "open source" AI democratizes access and distributes power. Widder, Myers West, and Whittaker argue that the terms "open" and "open source" are used inconsistently in AI, functioning more as marketing than as technical descriptors. Even maximally open AI systems, they contend, do not ensure democratic access, because of three bottlenecks: concentration of compute, the cost of data curation, and corporate capture of "openness."
Why this matters for Danish AI policy: Denmark's public sector is increasingly adopting AI systems. Should procurement favor "open source" AI? Does openness actually deliver the benefits claimed? How should Denmark evaluate DeepSeek and similar "open" models from geopolitical competitors?
Required Reading
Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI
Authors: David Gray Widder, Sarah Myers West, and Meredith Whittaker
Publication: SSRN working paper, August 2023 (a revised version was published in Nature, November 2024)
Length: ~8,000 words
URL: papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807
Author Credentials
Meredith Whittaker is President of Signal Foundation. She co-founded the AI Now Institute at NYU and served as Senior Advisor to FTC Chair Lina Khan. She is one of the most prominent critics of Big Tech AI concentration.
David Gray Widder is a researcher at Carnegie Mellon University focusing on responsible AI.
Sarah Myers West is a researcher at the AI Now Institute specializing in AI governance.
Supplementary Materials
Podcasts & Videos
DeepSeek and Current Debates (Updated Feb 2026)
- Government bans: Australia and the Czech Republic banned DeepSeek from government devices (Jan/Feb 2026), citing data security concerns; DeepSeek stores personal data on servers in China.
- US Congressional concerns: A letter to the Commerce Department (Jan 2026) raised concerns that NVIDIA support enabled DeepSeek capabilities now reportedly integrated into People's Liberation Army (PLA) systems.
- Lawfare: What DeepSeek R1 Means—and What It Doesn't. Analysis of export control implications.
- Stanford HAI: "How Disruptive is DeepSeek?" (February 2025). Faculty roundtable on R1's implications.
Additional Context
- Carnegie Endowment: "Beyond Open vs. Closed" (July 2024, ~7,500 words). Multi-stakeholder consensus identifying 7 areas of agreement and 17 open questions.
- Open Source Initiative: "Meta's LLaMa License is Still Not Open Source" (October 2024). Documents Llama's failure to meet the Open Source Definition.
Danish Context
- Danish public sector AI procurement guidelines
- EU AI Act requirements for foundation models
- Danish Data Protection Agency guidance on AI systems
Guiding Questions
- Defining "open": The authors argue "open" is used inconsistently in AI. What are the different meanings of openness (open weights, open data, open training code, open governance)? Which matter most for the benefits claimed?
- The three bottlenecks: Compute concentration, data curation costs, and corporate capture are identified as barriers. Which is most significant? Can any be addressed through policy?
- DeepSeek puzzle: China's DeepSeek released powerful "open" models. How does this complicate the analysis? Should Denmark treat DeepSeek differently than Meta's Llama or Mistral's models?
- Procurement policy: Should Danish public sector procurement favor "open source" AI? What criteria should determine this? How should procurement weigh openness against other factors (security, performance, support)? (An illustrative checklist sketch follows this list.)
- Democratization claims: The authors are skeptical that openness democratizes AI. Under what conditions, if any, could openness genuinely distribute power? What complementary policies would be needed?
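To make the procurement question concrete, the openness dimensions named above (weights, data, training code, governance) can be treated as a structured checklist. The sketch below is purely illustrative: the dimensions come from the reading, but the field names and red-flag rules are our own assumptions, not an established evaluation standard; the license check simply reflects the Open Source Definition discussion in the supplementary materials.

```python
from dataclasses import dataclass

@dataclass
class OpennessProfile:
    """Illustrative checklist for one AI system under procurement review.

    Dimensions follow the taxonomy in the guiding questions; the
    red-flag logic is a hypothetical example, not official criteria.
    """
    name: str
    open_weights: bool = False          # model weights are downloadable
    open_data: bool = False             # training data is documented and available
    open_training_code: bool = False    # training/fine-tuning code is released
    open_governance: bool = False       # decision-making is open to outside parties
    osi_approved_license: bool = False  # license meets the Open Source Definition

    def red_flags(self) -> list[str]:
        """Flag 'open' branding that the release itself does not support."""
        flags = []
        if self.open_weights and not self.osi_approved_license:
            flags.append("'open' weights under a restrictive, non-OSI license")
        if self.open_weights and not self.open_data:
            flags.append("weights released without training-data documentation")
        if not self.open_governance:
            flags.append("a single vendor controls the release roadmap")
        return flags

# Example: a weights-only release marketed as "open source"
weights_only = OpennessProfile(name="weights-only model", open_weights=True)
for flag in weights_only.red_flags():
    print("RED FLAG:", flag)
```

A fuller rubric would replace the booleans with graded scores (for example, distinguishing a full training-data release from a datasheet-only disclosure), but even the binary version shows how "open" branding can be checked against what is actually released.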
Presentation Angle Ideas
- "Procurement Guidelines for 'Open' AI": Develop specific criteria Denmark should use when evaluating "open source" AI for public sector use. What counts as genuinely open? What red flags should trigger skepticism?
- "The DeepSeek Dilemma": Focus on the specific challenge posed by capable open models from geopolitical competitors. How should Denmark balance the benefits of open access against security and sovereignty concerns?
- "Beyond Open vs. Closed": Argue that the open/closed binary is unhelpful. Propose a more nuanced framework for evaluating AI systems that accounts for governance, accountability, and actual accessibility.
- "Making Openness Work": Accept the authors' critique but propose complementary policies that could make openness deliver on its promises. What public investments, governance structures, or regulations would help?