As Kenya’s digital landscape becomes more complex, so do the tactics used to spread harmful content. One of the newest and most elusive trends gaining traction is “algospeak”: coded language, slang, or emojis used strategically to evade online moderation systems. While this linguistic creativity might seem harmless on the surface, it is increasingly being used to spread hate speech, incitement, and harmful narratives under the radar.
The term “algospeak” was originally coined to describe words content creators across social media platforms use to avoid demonetization or censorship, such as writing “unalive” instead of “suicide.” The practice, however, has evolved into a powerful tool for evading content moderation. In Kenya, these coded terms are being adapted for far more serious purposes: fueling political propaganda, stoking ethnic tension, and coordinating online abuse.
A recent example is the widespread use of the term “majamaa ya sirikali.” While it loosely translates to “the government guys,” the phrase is increasingly used online to refer indirectly to police brutality. Users have employed it to describe violent crackdowns by law enforcement without ever naming the police. This linguistic camouflage lets them express anger, mobilize outrage, and share graphic content, all while staying below the moderation radar on platforms like Facebook and X.
But the trend gets darker. In encrypted channels and closed online groups, users are now deploying animal emojis, food references, and tribal metaphors to target specific communities. A watermelon emoji, a goat, or other coded animal terms are embedded in memes, videos, and hate-filled rants to dehumanize particular ethnic groups, often in politically volatile conversations. These symbols, while seemingly harmless, are widely understood within certain circles and are weaponized in ways that make detection and moderation incredibly difficult.
The danger lies in how algospeak masks malicious intent. Most social media platforms rely on automated moderation systems that scan for keywords and known slurs. When hate is wrapped in code or sarcasm, it becomes almost invisible to these systems. Moreover, Kenya’s complex linguistic environment, where hate speech often appears in Sheng, Kiswahili, or indigenous languages, makes algorithmic detection even less effective.
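To see why keyword scanning fails here, consider a deliberately minimal sketch of a blocklist-style filter. This is a simplification, not how any real platform actually works, and every term in it is illustrative; but the failure mode it shows is the same one described above: the literal keyword is caught, while the coded substitution sails straight through.

```python
import re

# A deliberately naive blocklist filter. The terms are illustrative,
# not drawn from any real platform's moderation rules.
BLOCKLIST = {"suicide", "kill"}

def flags_post(text: str) -> bool:
    """Return True if any blocklisted keyword appears as a whole word."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKLIST for word in words)

# The literal keyword is caught...
print(flags_post("a post that mentions suicide"))            # True
# ...but coded substitutions pass through unflagged.
print(flags_post("they told him to just unalive himself"))   # False
print(flags_post("majamaa ya sirikali were here again"))     # False
```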
Is this a new form of digital hate? Absolutely. But more critically, it is a form that evades the very systems put in place to protect users. Disinformation networks, online trolls, and even political operatives are exploiting this gap, using coded speech to stoke divisions without facing platform sanctions.
The question now is: how do we respond?
Tackling algospeak will require more than keyword filters. Platforms must invest in localized content moderation, train AI models on regional slang and shifting speech patterns, and work closely with fact-checkers, journalists, and civil society actors who understand the nuance behind these codes.
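One building block of such localized moderation could be a community-maintained lexicon of known coded terms, used to normalize posts before any classifier sees them. The sketch below is purely illustrative: the entries and the decode_algospeak function are hypothetical, not a real dataset or API.

```python
# Hypothetical lexicon mapping coded phrases to plain equivalents.
# Entries are illustrative examples from this article, not a real dataset.
CODED_LEXICON = {
    "unalive": "suicide",
    "majamaa ya sirikali": "the police",  # loosely, "the government guys"
}

def decode_algospeak(text: str) -> str:
    """Replace known coded phrases so downstream filters see plain terms."""
    decoded = text.lower()
    # Substitute longer phrases first so multi-word codes stay intact.
    for coded in sorted(CODED_LEXICON, key=len, reverse=True):
        decoded = decoded.replace(coded, CODED_LEXICON[coded])
    return decoded

print(decode_algospeak("Majamaa ya sirikali want us to unalive ourselves"))
# -> "the police want us to suicide ourselves"
```

The obvious weakness of any such list, and the reason local human expertise remains essential, is that it is always one step behind the communities inventing new codes.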
At the same time, Kenyan internet users need digital literacy tools to decode harmful speech disguised as banter, memes, or slang. Just because hate no longer looks like a blatant slur doesn’t mean it has disappeared; it has simply evolved.
In the end, algospeak is more than just a linguistic workaround. It’s a digital smokescreen. And as long as it’s allowed to thrive unchecked, Kenya’s online spaces will continue to serve as breeding grounds for bigotry, disinformation, and division.