In Kenya, as in many parts of the world, AI is playing a pivotal role in the information ecosystem. However, its influence is double-edged: while it is being used to combat misinformation and aid fact-checking efforts, it is simultaneously being weaponized to spread disinformation, particularly in politically charged environments.
Over the last few months, for instance, Kenya has witnessed a surge in the use of AI-generated content for both beneficial and malicious purposes.
This article explores how AI tools are shaping the information landscape in Kenya, drawing on recent case studies and highlighting both the risks and the opportunities.
AI Tools Fueling Misinformation in Kenya
- Political Disinformation Through Deepfakes In early 2025, the Kenyan government flagged a coordinated disinformation campaign involving AI-generated deepfakes. These videos and images were crafted to mimic political figures and allege government corruption, causing confusion and undermining public trust. The goal appeared to be diplomatic sabotage, with fabricated content targeting foreign policy initiatives and misrepresenting the intentions of visiting dignitaries.
- Smearing Protest Movements During the mid-2024 anti-tax protests, AI-generated visuals were deployed by suspected pro-government actors to discredit demonstrators. Some images depicted young Kenyan protesters bearing Russian flags or engaging in acts of vandalism, scenes that were later debunked. These were designed to manipulate public perception and diminish the legitimacy of the movement.
- Targeted Harassment and Suppression of Dissent AI-generated images and false narratives have also been used to target individual activists and journalists. In several reported cases, individuals were depicted in compromising scenarios through manipulated images, which were circulated widely to shame or intimidate them. Some who engaged in sharing or reacting to this content reported incidents of abduction or harassment, raising alarms about the weaponization of AI to silence criticism.
AI as a Tool for Fighting Misinformation
- Monitoring Hate Speech and Toxic Content Organizations like Code for Africa and Shujaaz Inc., through the MAPEMA consortium, developed AI-powered tools that scan social media platforms for hate speech. Reportedly, during the 2022 general elections, these tools helped identify over 550,000 toxic posts on Facebook alone. This early detection allowed for timely interventions that prevented potential violence or digital harm.
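The internals of the MAPEMA tools are not public; real systems of this kind rely on trained multilingual classifiers. Purely as an illustration of the scanning idea, a minimal keyword-based flagger (all terms and names below are hypothetical placeholders, not the consortium's actual watchlist) might look like:

```python
# Illustrative sketch only: flag posts containing watchlisted terms.
# Production hate-speech detection uses ML classifiers and context,
# not a bare keyword list; TOXIC_TERMS here is a made-up placeholder.

TOXIC_TERMS = {"eliminate", "vermin", "exterminate"}  # hypothetical watchlist

def flag_toxic(posts):
    """Return the subset of posts containing any watchlisted term
    (case-insensitive, whole-word match)."""
    flagged = []
    for post in posts:
        words = set(post.lower().split())
        if words & TOXIC_TERMS:
            flagged.append(post)
    return flagged

posts = [
    "Join the peaceful march tomorrow",
    "We must eliminate these vermin from our county",
]
print(flag_toxic(posts))  # only the second post is flagged
```

Even this naive filter shows why early detection scales: scanning is cheap, so human moderators only review the flagged subset.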
- Augmenting Fact-Checking Processes Fact-checking organizations in Kenya are increasingly using AI algorithms to automate parts of their verification workflows. These tools can cross-reference quotes, analyze images, detect repetition of false narratives, and even scan metadata to assess the credibility of shared content. By speeding up the verification process, AI enables faster rebuttals of false claims, improving the overall response to misinformation.
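Kenyan fact-checkers have not published their pipelines, so as a hedged sketch of just one step above (detecting repetition of false narratives), a stdlib-only token-overlap comparison could work as follows; real systems would more likely use text embeddings and a claims database:

```python
# Illustrative sketch: detect near-duplicates of known false claims
# using token-set (Jaccard) similarity. The threshold and example
# claims are hypothetical; production tools use ML-based matching.

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two texts (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_repeats(new_claim, known_false_claims, threshold=0.6):
    """Return previously debunked claims that closely match a new claim."""
    return [c for c in known_false_claims if jaccard(new_claim, c) >= threshold]

known = ["protesters were paid by foreign agents to riot"]
claim = "the protesters were paid by foreign agents to riot in nairobi"
print(find_repeats(claim, known))  # matches the known false claim
```

Matching an incoming claim against a library of already-debunked ones is what enables the "faster rebuttals" described above: the verification work is done once and reused.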
- Governmental and Regulatory Responses In light of AI’s dual impact, the Kenyan government announced the National AI Strategy to govern the development and use of AI technologies in Kenya. It includes measures aimed at ensuring transparency, accountability, and ethical use, with a particular focus on curbing AI-generated disinformation. The regulatory push is part of Kenya’s broader digital governance strategy, which also touches on cybersecurity, data protection, and media ethics.
Conclusion
Ultimately, AI holds great potential if it is put to good use. AI tools are already revolutionizing how misinformation is detected and debunked, offering much-needed support to fact-checkers and civil society organizations in particular. A collaborative approach that blends innovation with regulation, civic engagement, and public education will be crucial in ensuring that AI becomes more of a solution than a problem in the fight against misinformation.