This topic examines how AI affects democratic processes and information ecosystems. Csernatoni argues that AI-generated content enables information manipulation and the disruption of electoral processes, and that the collision between rapid AI advancement and eroding democratic safeguards demands a comprehensive response combining technical solutions (watermarking, content provenance), governance tools, and digital literacy.
Why this matters for Danish AI policy: Denmark has strong democratic institutions but is not immune to AI-enabled disinformation. The Digital Democracy Initiative (2023–2026) and proposed deepfake legislation show Denmark is actively addressing these issues. What more should be done?
Required Reading
Author Credentials
Raluca Csernatoni is a Fellow at Carnegie Europe specializing in European security and emerging technologies. She is a Professor at the Brussels School of Governance (VUB) and a Senior Research Expert on the EU Cyber Direct project. She holds a PhD in International Relations from Central European University and has published in Minds and Machines, European Security, and Geopolitics.
Supplementary Materials
Podcasts & Videos
Alternative Perspectives
- Munich Security Conference: "AI-pocalypse Now?" (September 2024, ~5,500 words). Evidence-based finding that AI disinformation had negligible actual impact in 2024's "super election year"; covers EU, UK, France, Slovakia, Taiwan, US, India.
- AlgorithmWatch: "10 Questions About AI and Elections" (May 2024, ~2,500 words). Contrarian perspective arguing deepfake hype is overblown; real threat is recommendation algorithms and platform consolidation.
- Alan Turing Institute/CETaS: "AI-Enabled Influence Operations" (September 2024, ~6,500 words). Empirical study finding only 16 viral AI disinformation cases during UK election.
Danish Context
- Digital Democracy Initiative (2023–2026, 200m DKK). Supports civil society work on disinformation.
- Proposed deepfake legislation. Would give citizens copyright over their likeness.
- Act on Supplementary Provisions (May 2025). Denmark is among the first EU member states to implement AI Act provisions.
- Danish media landscape. High trust, strong public broadcasting (DR).
Guiding Questions
- Threat assessment: How serious is the AI disinformation threat to Danish democracy? The Munich Security Conference found "negligible impact" in the 2024 elections. Is this reassuring, or are we underestimating future risks?
- Technical solutions: Csernatoni discusses watermarking and content provenance. How effective are these technical approaches? What are their limitations? Should Denmark mandate them? (A simplified sketch of how cryptographic provenance works follows this list.)
- Platform governance: The AlgorithmWatch piece argues recommendation algorithms are a bigger threat than deepfakes. How should Denmark regulate platforms' algorithmic amplification of content?
- Danish resilience: Denmark has high media trust and strong public broadcasting. Does this make Denmark more resilient to AI disinformation, or could these strengths be undermined?
- Freedom vs. safety: Regulating AI-generated content raises free expression concerns. How should Denmark balance protecting democratic discourse with preserving speech freedoms?
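To ground the "technical solutions" question above, the sketch below illustrates, in deliberately simplified form, the core idea behind cryptographic content provenance: a creator signs a claim about a piece of media, and anyone with the matching public key can later check that the media is unmodified and the claim is genuine. This is a minimal illustration only, loosely inspired by C2PA-style signed manifests; it is not the C2PA format itself or any tool discussed by Csernatoni, and the field names and use of Python's third-party `cryptography` package are assumptions made for the example.

```python
# Minimal illustration of signed content provenance (loosely C2PA-inspired).
# Assumes the third-party "cryptography" package; all field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media_bytes: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Hash the media, record who created it and when, and sign that claim."""
    claim = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_manifest(media_bytes: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Check the media is unmodified and the claim was signed by the key holder."""
    claim = manifest["claim"]
    if hashlib.sha256(media_bytes).hexdigest() != claim["sha256"]:
        return False  # media was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # manifest tampered with, or signed by a different key


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"...raw image bytes..."
    manifest = make_manifest(image, creator="DR Nyheder", key=key)
    print(verify_manifest(image, manifest, key.public_key()))         # True
    print(verify_manifest(image + b"x", manifest, key.public_key()))  # False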
Presentation Angle Ideas
- "Denmark's AI Democracy Defense": Propose a comprehensive strategy combining technical standards (watermarking/provenance), platform regulation, and digital literacy initiatives tailored to Danish context.
- "Beyond Deepfake Panic": Challenge the focus on AI-generated content. Argue that Denmark should prioritize platform governance and algorithmic transparency over content authenticity verification.
- "The Digital Democracy Initiative 2.0": Evaluate the current initiative (2023–2026) and propose what should come next. What has worked? What gaps remain? What should the next phase prioritize?
- "Nordic Cooperation on AI and Democracy": Propose coordinated Nordic approaches to AI disinformation. How could Denmark, Sweden, Norway, and Finland share resources and best practices?