If you’ve spent any time on TikTok, Instagram, or YouTube lately, you’ve probably noticed people using some very strange words. Creators are talking about “seggs” instead of sex, “le dollar bean” instead of lesbian, and “unalive” instead of dead or suicide. This isn’t just Gen Z being quirky; it’s a survival tactic called Algospeak.
By the last quarter of 2025, Algospeak had become the unofficial "second language" of the internet. It exists mainly because the Artificial Intelligence (AI) that moderates our social media is, by design, a blunt instrument. Moderation bots are built to flag or "shadowban" (hide) posts containing certain keywords in order to keep a platform safe. However, the AI often can't tell the difference between someone promoting violence and a person sharing a story about their mental health or identity. So, to keep their content visible and avoid being silenced, users have invented a coded way of speaking that humans understand but bots do not.
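To see why keyword matching misfires, here is a minimal, purely illustrative sketch of a blocklist filter (the keyword list and function are hypothetical; real platform systems are far more sophisticated, but the failure mode is the same):

```python
# Hypothetical blocklist filter: flags any post containing a banned
# keyword, with no understanding of context or intent.
BANNED_KEYWORDS = {"suicide", "dead", "sex"}  # illustrative list only

def is_flagged(post: str) -> bool:
    """Return True if any word in the post matches the blocklist."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BANNED_KEYWORDS for word in words)

# A supportive post and a harmful one look identical to the filter:
print(is_flagged("A suicide prevention hotline saved my life"))  # True (false positive)
# ...while a trivial respelling slips right past it:
print(is_flagged("A su!cide prevention hotline saved my life"))  # False (algospeak evades it)
```

The filter suppresses a mental-health support post while a one-character substitution sails through, which is exactly the dynamic that drives users toward coded language.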
This coded language has evolved from a quirky internet trend into a vital survival strategy for marginalized communities. As platforms have ramped up automated moderation to combat disinformation and hate speech, the collateral damage has been immense. Vulnerable groups often find their legitimate discussions about health, identity, and activism suppressed by over-eager bots that can't distinguish between a slur and a term a community has reclaimed.
This phenomenon, often called “automated shadowbanning,” has turned the internet into a game of cat-and-mouse. For a space to be inclusive, its users must be able to speak their truth without fear of their content being “de-ranked” or hidden. When a trans creator has to use emojis to spell out their identity just to avoid being flagged as “adult content,” the platform has failed its inclusivity mission. The “Algospeak” movement is a protest against the “black box” of moderation. It highlights the desperate need for transparency in how algorithms are trained and the biases they carry.
Activists are pushing for a “Right to Remedy,” where users can quickly appeal a shadowban and get a human explanation for why their content was suppressed.
The goal is to move away from a “clean” internet that achieves safety through silence, and toward a “vibrant” internet that achieves safety through context. Until the algorithms can understand the nuance of human identity, the coded language of Algospeak will remain a necessary, albeit frustrating, tool for those who refuse to be erased from the digital narrative.
The rise of Algospeak is a clear sign that our current “AI-first” approach to digital safety is failing. It proves that human communication is too nuanced, emotional, and complex to be managed by a list of banned keywords. In an ideal 2026, we wouldn’t need to speak in riddles just to exist online.
True inclusivity means building platforms where the “Right to be Seen” isn’t something you have to trick an algorithm into granting you. Until tech companies prioritize human context over automated speed, we will continue to see the internet rewrite the dictionary—one emoji at a time.
