Is It Time to Shift the Blame for Misinformation to the Machines?

The conversation around misinformation is slowly shifting from what people say to how platforms work. For years, social media companies operated like digital landlords, claiming they weren’t responsible for the “trash” their tenants (users) left on the lawn. But as AI-generated lies and deepfakes began to flood our feeds at an uncontrollable scale, governments and experts introduced a Safety by Design framework that stops looking at misinformation as a series of individual mistakes and starts treating it as a product flaw.

Think of it like a car: if a car’s brakes fail, we don’t blame the driver for not stopping; we blame the manufacturer for a bad design. Under the new 2026 frameworks, like the fully implemented UK Online Safety Act and the EU’s Digital Services Act, platforms are no longer just passive hosts. They are now being held liable for the amplification of harmful content. It’s not about the fact that a lie exists; it’s about the fact that the platform’s algorithm saw that lie, noticed it was getting angry clicks, and chose to show it to five million more people to keep them on the app longer.

In the Kenyan context, Safety by Design has moved from a theoretical concept to a critical regulatory and legal battleground. While the global trend focuses on holding big tech platforms liable, Kenya’s approach is uniquely shaped by a push-pull between national security, data privacy, and a very active judiciary.

How “Safety by Design” Works in Practice

In 2026, a platform that is “safe by design” doesn’t just react to reports, it builds friction into the system to prevent chaos before it starts.

  • Algorithmic Speed Bumps: If a piece of unverified, AI-generated news starts going viral too fast, the system automatically slows its reach until it can be checked.

  • Transparency by Default: Platforms must now show you why an algorithm chose a specific post for your feed. If it’s pushing a deepfake because it knows it will make you angry, the platform can be fined billions.

  • The End of “Safe Harbor”: In places like India, new 2026 amendments to the IT rules mean that if a platform doesn’t clearly label AI-generated content (“synthetically generated information,” or SGI), it loses its legal “safe harbor” protection. This means it can finally be sued for the damage that content causes.
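The “speed bump” idea above can be sketched as simple throttling logic. The structure, names, and thresholds below are hypothetical illustrations, not any platform’s actual system:

```python
# A minimal sketch of an "algorithmic speed bump": if an unverified post's
# share velocity crosses a threshold, its reach is throttled until it can
# be reviewed. All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    verified: bool = False       # has a fact-checker cleared it?
    shares_last_hour: int = 0    # rolling share count

VIRAL_VELOCITY = 10_000   # hypothetical shares-per-hour threshold
THROTTLE_FACTOR = 0.1     # show to only 10% of the usual audience

def distribution_weight(post: Post) -> float:
    """Multiplier applied to a post's normal algorithmic reach."""
    if not post.verified and post.shares_last_hour >= VIRAL_VELOCITY:
        return THROTTLE_FACTOR  # slow it down pending review
    return 1.0

# An unverified post going viral gets throttled; once verified,
# it recovers full reach.
hot = Post("p1", verified=False, shares_last_hour=50_000)
print(distribution_weight(hot))   # 0.1
hot.verified = True
print(distribution_weight(hot))   # 1.0
```

The design choice worth noting is that the friction is automatic and content-neutral: the system reacts to unverified velocity, not to what the post says, which is what distinguishes a design-level safeguard from after-the-fact moderation.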

Locally, the Office of the Data Protection Commissioner (ODPC) has been the primary driver of this shift. In 2026, Kenya entered a “mature enforcement phase” of the Data Protection Act that includes:

  • Proactive Audits: The ODPC now uses the Data Protection Compliance Audit Regulations to ensure organizations aren’t just reacting to breaches but are building systems that protect data from the start.

  • Youth & Children’s Data: There is a specific focus on the education and private security sectors. “Safety by Design” here means that any app or platform targeting Kenyan youth must have parental consent and identity verification built into the core code, rather than added as an afterthought.

  • Regulatory Focus: Increased scrutiny of data controller registration, cross-border transfers, and public education on data rights.

The goal of “Safety by Design” is to move us toward a “cleaner” internet. We are moving away from a world where tech companies can say, “Oops, our algorithm did that,” and toward a world where they have to prove their systems were built to protect the truth, not just the bottom line. By holding the architects of our digital world accountable, we are finally treating the information we consume with the same safety standards we expect from the food we eat and the cars we drive.
