What Britain's Online Safety Act Actually Requires
"Papers, please" for the internet
This post was originally published in The Telegraph. The headline and links there were added by the editors.
I didn’t go looking for a fight with Britain’s Online Safety Act. But as the CEO of Substack, an online publishing platform for writers, journalists, and creators of all kinds, figuring out how our company would adhere to the new regulatory measures became part of my job.
Substack has a strong and growing presence in the UK, with independent journalists and major cultural and political figures publishing their work and communicating with their audiences through our platform. So when the OSA came into effect, we set out to comply.
What I’ve discovered as we’ve implemented these rules shocks me.
In a climate of genuine anxiety about children’s exposure to harmful content, the Online Safety Act can sound like a careful, commonsense response. But what I’ve learned is that in practice, it pushes toward something much darker: a system of mass political censorship unlike anywhere else in the western world.
What does it actually mean to “comply” with the Online Safety Act? It does not mean hiring a few extra moderators or adding a warning label. It means platforms must build systems that continuously classify and censor speech at scale, deciding—often in advance of any complaint—what a regulator might deem unsuitable for children. Armies of human moderators or AI systems must be employed to scan essays, journalism, satire, photography, and every kind of comment and discussion thread for potential triggers. Notably, these systems are not only seeking out illegal material; they are trying to predict the regulatory risk of lawful, mainstream content in the face of stiff penalties.
Once something is classified as potentially sensitive, the next step is age gating. Readers—who in our case are overwhelmingly adults reading lawful material—often must be asked to prove their age through third-party checks that may involve facial scanning, providing identification documents, or financial verification. These measures don’t technically block the content, but they gate it behind steps that prove a hassle at best, and an invasion of privacy at worst. Readers who hoped to engage with the material are deterred from doing so; writers and creators, some of whose livelihoods depend upon getting their work in front of potential subscribers, bear the penalties. The result is that vast swathes of legitimate cultural discourse are swept up, bogged down, and discouraged.
Substack is building an economic engine that supports authorial independence, and as such, we are strong defenders of the freedom of the press, which we believe is essential to a free society.
For me, this belief began early. Growing up, my parents allowed an exception to bedtime if you were reading a book, and I took full advantage. Those books gave me a window into a world far larger than my own Canadian suburb. As a teenager, I explored the wild internet of the 90s, encountering both bad ideas and transformative ones. I found perspectives and cultures that eventually led to my career as a technologist and my life in San Francisco. The opportunity to read and explore widely was essential to fostering curiosity and independent thought, and it’s something I encourage in my own children today.
None of this is to say that there aren’t serious problems that manifest online, or that the government has no role to play in protecting children on the internet. Fighting those dangers with effective state action is important. When it comes to crimes against children, governments have a responsibility to respond with every tool at their disposal, especially vigorous investigations and prosecution of perpetrators.
What the OSA does is something different.
It focuses on speech on online platforms, forcing companies like Substack to build systems that decide when an essay crosses the line from satire to threat, or when photojournalism is too graphic—or tries to predict when a regulator might deem it so.
This would be concerning anywhere, but the UK already has expansive policing of speech. Recent reports suggest that police make over ten thousand arrests each year for online communications offenses, while only a small fraction result in conviction. Now, with the OSA, there will be pervasive classification and identity verification, too.
This is how you end up with “papers, please” for the internet.
Substack is bound by the law and will continue to comply with it. We have taken care to implement these measures in the most transparent way possible, and we continually refine our approach to maximize freedom of expression within the bounds of the law. Smaller publishers and platforms, however, may not have the resources to do the same. In those cases, writers, publishers, and readers all suffer.
The Online Safety Act does something different from what it says on the tin. It is not the most effective way to keep kids safe, and it hurts a free society.
If this model spreads, it won’t just block content for children. It will determine whether adults can read, write, and argue freely without first submitting to surveillance. Companies like ours will respect the laws you choose, so please be sure you choose the future you want.

Every writer and reader on Substack needs to repost this and forward it to other platforms.
Chris, it might be a good place to let non-subscribers comment on this.
Chris, I appreciate the sentiment but, as CEO of Substack, you are in an extremely powerful position and if you really feel this way, how are you fighting back (or planning to)?
To me, it seems as though Substack has just rolled over and, I would argue, even gone beyond what the UK Online Safety Act actually requires.
You do not need to use facial-recognition software to provide age assurance but you have chosen this most dystopian method.
All you are legally REQUIRED to do is risk assess and moderate illegal content, and stop kids seeing porn and harmful content.
Instead YOU HAVE DECIDED to add age-verification to content which is not 18+, blurred/blocked content for UK users who refuse to video their faces and implemented a rather aggressive auto-flagging system.
Even the UK government has said you are likely over-interpreting the Act.
So, once again, I appreciate the sentiment, but do something about it: fight it. If you can’t, then who can? Don’t roll over and incorporate more draconian features than necessary, because you know this is just giving authoritarians ideas which they WILL legalise next time around.
Substack grew exponentially due to people flocking towards a free speech arena. Don’t let it die because your legal team thought that same free speech was too risky.
P.S. Unbelievably, I had to pay to even comment on your post. I’m happy to do that with any other author, but the CEO of Substack - come on!