AI-Generated Disinformation in Europe and Africa: Use Cases, Solutions and Transnational Learning
Artificial intelligence is rapidly changing the information landscape, not only by enhancing communication but also by enabling increasingly sophisticated forms of disinformation. Generative AI technologies are being exploited to manipulate political narratives, disrupt electoral processes, and deepen societal divisions, posing a growing threat to democratic resilience in both Europe and Africa.
This comparative study sheds light on how AI-generated disinformation is taking shape across two very different regional contexts. It draws on over 100 real-world examples to illustrate how malign actors—ranging from state-affiliated entities to private content farms and individual influencers—use synthetic content to influence public perception and policy debates. The cases range from deepfake videos of political candidates in European democracies to AI-driven propaganda in conflict-affected states such as Sudan and Burkina Faso.
Rather than framing disinformation as a purely technological challenge, the study explores how it intersects with broader political and institutional vulnerabilities. In Europe, AI is increasingly used to amplify polarisation and erode trust in democratic institutions, especially in the context of upcoming elections. In parts of Africa, the weaponisation of AI-generated content is closely tied to fragile governance structures, low media literacy, and limited regulatory oversight, making communities particularly susceptible to manipulation.
The study also looks at how societies are responding to these threats. It identifies a range of promising practices and outlines conditions under which countermeasures are most effective. These include strong legal and regulatory frameworks, investment in detection technologies, the promotion of digital literacy, and the strengthening of independent media. It also stresses that solutions cannot be driven by governments alone. Instead, coordinated approaches involving civil society, media, academia, and the tech sector are crucial to protect information integrity and democratic norms.
One of the study’s contributions lies in its focus on transnational learning. By comparing experiences and responses across continents, it highlights the importance of cross-regional dialogue and collaboration. Whether it is adapting fact-checking methods, developing joint training programmes, or sharing legal approaches to platform regulation, there is considerable potential for mutual learning between African and European actors facing similar challenges.
At its core, the study makes a compelling case that addressing AI-generated disinformation is not only about responding to immediate threats, but about building long-term societal resilience. This means putting inclusive, rights-based approaches at the centre of strategies and ensuring that the fight against disinformation reinforces, rather than undermines, democratic values.
This publication provides critical insights for anyone working at the intersection of technology, governance, and civic space, especially those looking to understand the evolving landscape of digital threats and the tools available to confront them.