How Kenya Can Strike a Balance Between Safety and Free Expression in Content Moderation

Kenya's digital landscape is a vibrant, chaotic, and powerful space for fierce political debate, groundbreaking entrepreneurship, and vital social connection. But this dynamism has a dark side: it is also fertile ground for hate speech, misinformation, and coordinated harassment, creating a monumental challenge for the nation.

So, how can Kenya strike a balance between protecting citizens from online harms and safeguarding the freedom of expression that underpins our democracy?

The explosive growth of social media platforms like TikTok, X (formerly Twitter), and Facebook has fundamentally reshaped Kenyan society. It has democratized voice, allowing ordinary citizens to hold power to account, organize social movements, and build businesses with little more than a smartphone. However, the same power to connect and mobilize can be weaponized: ethnically charged hate speech and coordinated misinformation campaigns, often run by shadowy actors, manipulate public opinion and erode trust in institutions. For many, especially women, activists, and journalists, the online world can be a hostile environment filled with targeted harassment designed to silence their voices.

How, then, can we harness a free and open internet while mitigating its profound risks?

It can be done, but there are no easy solutions. An overly aggressive approach to moderation risks sanitizing the public square, removing legitimate expression, and creating a “chilling effect” on speech. A hands-off approach allows harm to fester, threatening social cohesion and individual safety. Striking a sustainable balance requires a multi-faceted strategy that moves beyond a simple “take it down” mentality.

First, a shift towards co-regulation: the government sets clear, democratically legitimized safety standards, while platforms are responsible for implementing them with radical transparency and independent oversight.

Second, massive investment in technology that understands Kenya, such as training AI models on local languages and contexts so they can detect nuanced hate speech without flagging innocent content.

Third, and perhaps most crucial, is empowering users through digital literacy, so that Kenyans can recognize misinformation and harassment for themselves rather than relying on platforms alone.

Another important step is to empower the thousands of Kenyan content moderators. Their deep understanding of local languages, culture, and context is something no algorithm can replicate, so they should be able to directly inform platform rules and help train AI systems to grasp nuances like Sheng and sarcasm, making moderation smarter, more effective, and more accurate.

Lastly, the current legal landscape is a “tug-of-war” in which vague laws create uncertainty and invite misuse. Kenya should move away from this adversarial relationship and towards a clear, collaborative, and rights-respecting system of governance. This could involve a fundamental redesign of the legal and regulatory environment, or the creation of an independent multi-stakeholder body that brings all the key players to one table: representatives from government (the CA and NCIC), the judiciary, tech platforms, civil society, academia, and the newly empowered moderators. Its mandate would be to drive legislative reform, implement co-regulation, and establish an appeals body where users can challenge significant content decisions, creating a fair and transparent process that builds public trust and reduces the burden on the courts.