TED webinar Democracy and Elections in the Age of AI

On 1 October 2025, Team Europe Democracy (TED)’s Working Groups 2 and 3 hosted a webinar on “Safeguarding Democracy and Elections in the Age of AI.” Experts from the International Institute for Democracy and Electoral Assistance (International IDEA), ARTICLE 19, Safer Internet Lab (SAIL) and PurpleCode Collective, in a session moderated by International IDEA, explored AI’s dual role - as a driver of democratic innovation and a source of risk - and the governance frameworks needed to respond.

Opening the discussion, the German Federal Ministry for Economic Cooperation and Development (BMZ) stressed the urgency of updating “the guardrails of democracy” as AI reshapes politics and fundamental rights. BMZ advocated a rights-based, people-centred approach aligned with international standards, emphasising the protection of civic space and media freedom, the prevention of technological misuse for surveillance, and the strengthening of democratic resilience through inclusive, context-responsive, multi-stakeholder processes that ensure access to remedies and accountability.

ARTICLE 19, drawing on its BMZ-commissioned paper “Safeguarding Democracy in the Age of Artificial Intelligence”, cautioned that “panic is not a plan.” With democratic backsliding and deregulation shaping the global landscape, innovation is rapidly outpacing governance. While AI can support citizen engagement, policymaking and accountability - illustrated by civic tech initiatives such as Open Knowledge Brazil (which tracks parliamentary bills), Kenya’s corruption tracker and participatory governance efforts in Taiwan - it also carries risks, including algorithmic discrimination (e.g. the childcare benefits case in the Netherlands), lock-in effects, opacity and deepening inequalities.

They pointed to the 2025 Paris AI Action Summit as a turning point, noting geopolitical competition (e.g. the AI race between the US and China) and corporate influence. ARTICLE 19 warned that the EU’s AI models are not universally applicable and require context-sensitive adaptation. Recommendations included strengthening transparency and governance, engaging citizens in shaping AI strategies, ensuring inclusive access and adopting ethical procurement practices. The key message: human rights and inclusion are non-negotiable in the age of AI.

International IDEA highlighted how AI both amplifies risks and creates new opportunities for democracy, drawing on case studies from Bangladesh, Ghana, Indonesia, Mexico, Mongolia, Pakistan and South Africa. Their recently launched paper “Safeguarding Democracy: EU Development at the Nexus of Elections, Information Integrity and Artificial Intelligence” examines threats such as information pollution, erosion of trust, polarisation and the targeting of minorities. “Coordinated networks of influence” use cross-posting, automated bots and rage-bait tactics to overwhelm the information space and public debate, blur legitimacy and demotivate participation. Examples ranged from fake experts in Bangladesh, AI-driven political re-branding in Indonesia, polarisation targeting LGBTQIA+ communities in Ghana and gendered attacks in Mexico to manipulation and deepfakes in Pakistan.

At the same time, AI can expand opportunities through translation, accessibility, censorship evasion and oversight tools. Positive countermeasures are emerging: fact-checking hubs and media literacy programmes in Mexico, and, in South Africa, a multi-stakeholder privacy framework involving Google, Meta and TikTok, together with an independent review commission and the Real411 complaints platform for flagging disinformation. Practical steps forward include locally owned rules and oversight, partnerships between electoral institutions, improved civic education, inclusive participation of women, youth and minorities, and long-term trust-building. Citizens, as active participants rather than passive recipients of information, are central to restoring societal trust.

Safer Internet Lab (SAIL) and PurpleCode Collective provided a country-specific perspective from Indonesia, where domestic manipulation dominates. They introduced the role of “buzzers” - digital operatives who amplify narratives. Some are ideological, motivated by politics; others are pragmatic, treating it as a business and benefiting from the “liar’s dividend” by discrediting information or sowing uncertainty about its authenticity. AI supercharges these activities, from generating content and translations to creating fake accounts and realistic fake personas at unprecedented speed and scale, reaching even rural communities.

Indonesia faces regulatory gaps: outdated electoral rules (e.g. limits on broadcast campaign ads but not on online ones), misuse-detection tools that miss local languages (being largely international and English-dependent) and disproportionate government measures (such as the 2019 Papua internet shutdown or TikTok Live bans during protests over police brutality) that raise risks of automated censorship. Reform priorities include stronger political advertising rules, binding AI guidelines for elections and campaigns, AI and digital literacy training for Electoral Management Bodies (EMBs), law enforcement and courts, and accountability frameworks for both platforms and states. Echoing ARTICLE 19, the speakers cautioned against replicating EU models such as the Digital Services Act (DSA) and Digital Markets Act (DMA), since earlier frameworks like the Right to be Forgotten and the GDPR did not translate well locally.

Looking ahead, SAIL and PurpleCode Collective recommended building local AI literacy and fact-checking capacity, strengthening media integrity and promoting South-South collaboration on AI’s gender, environmental and linguistic dimensions.

The discussion concluded that AI is a double-edged sword for democracy and elections. While it can expand civic engagement, improve accessibility and support oversight, it also fuels disinformation, online gender-based violence and public distrust. Safeguarding democratic processes requires governance frameworks grounded in context, human rights, inclusion and literacy, combined with long-term investment in information integrity, ensuring that innovation never comes at the cost of democratic values.
