No Ban, No Rules: Understanding Substack’s Hands-Off Policy on AI-Generated Writing

If you have spent any time on Substack lately, you may have found yourself scrolling through a post and feeling a strange sense of familiarity, as if the words were strung together by something that has never felt a single emotion. You might have wondered whether the platform has any rules about this, any guardrails to ensure that the writers asking for your monthly five dollars are actually the ones doing the writing. The answer, it turns out, is surprisingly simple and surprisingly absent. Substack does not have an official policy prohibiting the use of artificial intelligence to generate blog posts, and it has no plans to create one.

This lack of a formal stance is not an oversight. It is a deliberate choice. When asked about the prevalence of AI-generated content on the platform, Helen Tobin, Substack’s head of communications, made the company’s position clear: it does not proactively monitor or remove content solely because that content was generated by artificial intelligence. The reasoning is that there are numerous valid and constructive applications for AI-assisted content creation, from grammar checking to generating social media summaries, that do not warrant a blanket ban. For Substack, the tool is not the problem. The problem is how it is used.

This approach places the platform in an interesting middle ground. On one hand, it is not the wild west of completely unmoderated content. Substack does have mechanisms in place to detect and mitigate inauthentic or coordinated spam activity, things like copypasta, duplicate content, SEO spam, and outright bot activity. Many of these prohibited behaviors can certainly involve AI-generated text. But the key distinction is that the platform is policing the behavior, the deception, and the spam, rather than the technology itself. If a writer uses AI to help brainstorm ideas or clean up their grammar, that is perfectly acceptable. If a writer uses AI to churn out dozens of low-quality, keyword-stuffed posts designed to game search engines, that runs afoul of the platform’s existing rules.
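
Substack has not published how that detection works, but the underlying idea is easy to illustrate. The Python sketch below shows one naive approach a platform could take to flagging copypasta and duplicate content, hashing normalized post bodies so that trivially reworded copies collapse to the same fingerprint. It is a toy illustration under assumed data, not Substack’s actual system.

```python
# Illustrative sketch only: Substack has not published how its spam
# detection works. This shows one naive way a platform could flag
# copypasta: hash normalized post bodies and look for repeats.
import hashlib


def fingerprint(text: str) -> str:
    """Collapse whitespace and case, then hash, so trivially
    reformatted copies of the same post map to the same digest."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def find_duplicates(posts: dict[str, str]) -> dict[str, list[str]]:
    """Group post IDs whose bodies share an identical fingerprint."""
    seen: dict[str, list[str]] = {}
    for post_id, body in posts.items():
        seen.setdefault(fingerprint(body), []).append(post_id)
    return {digest: ids for digest, ids in seen.items() if len(ids) > 1}


if __name__ == "__main__":
    posts = {
        "a1": "Ten ways AI will change writing forever.",
        "b2": "Ten  ways AI will change  writing FOREVER.",  # copypasta
        "c3": "A genuinely original essay about football.",
    }
    print(find_duplicates(posts))  # the a1/b2 pair shares a digest
```

A production system would use fuzzier techniques such as shingling or MinHash to catch lightly paraphrased copies, but the behavioral principle is the same: the check targets duplication, not whether a model wrote the words.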

The data suggests that this is not a hypothetical concern. An analysis conducted by the AI-detection startup GPTZero found that among the one hundred most popular newsletters on Substack, approximately ten percent likely use AI-generated content in some capacity. Seven percent of those top newsletters were found to significantly rely on it. When WIRED reached out to the publications flagged in the study, most of those who responded confirmed that artificial intelligence tools are indeed part of their process, though they emphasized using them for tasks like creating images, checking grammar, and aggregating information rather than for fully automating their writing. One writer, David Skilling of the Original Football Substack, defended the practice by drawing a clear line, arguing that there is a huge difference between something being AI-generated and AI-assisted.
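
GPTZero has not disclosed its exact cutoffs, so the sketch below is purely hypothetical: it renders the study’s two-tier distinction, “likely uses AI” versus “significantly relies on it,” as a pair of assumed thresholds applied to an invented detector score.

```python
# Hypothetical illustration of the study's two tiers ("likely uses AI"
# vs. "significantly relies on it"). The thresholds, scores, and names
# below are invented; GPTZero's actual methodology is not public in
# this detail.
LIKELY_THRESHOLD = 0.5       # assumed cutoff for "likely uses AI"
SIGNIFICANT_THRESHOLD = 0.8  # assumed cutoff for "significantly relies"


def classify(ai_score: float) -> str:
    """Map a detector's 0..1 AI-likelihood score to the study's tiers."""
    if ai_score >= SIGNIFICANT_THRESHOLD:
        return "significantly relies on AI"
    if ai_score >= LIKELY_THRESHOLD:
        return "likely uses AI"
    return "no strong AI signal"


newsletters = {"Newsletter A": 0.91, "Newsletter B": 0.62, "Newsletter C": 0.12}
for name, score in newsletters.items():
    print(f"{name}: {classify(score)}")
```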

The burden of transparency, therefore, falls entirely on the individual creator. Because Substack has no platform-wide mandate, it is up to each writer to decide whether, and how, to disclose their use of these tools. Some in the Substack community are pushing for greater accountability, arguing that subscribers deserve to know whether the human they are paying actually wrote the words published under that human’s name. There is a growing conversation about the ethics of it all, with some writers pointing out that asking AI to write your entire post without citing the sources the AI used is essentially plagiarism: passing off stolen, synthesized ideas as your own.

Others are taking matters into their own hands by publishing personal AI usage policies on their About pages. These policies often distinguish between using AI for brainstorming or research and using it for drafting entire posts. Some creators even go so far as to block AI training on their content through their dashboard settings, ensuring that their work is not fed back into the very models that could someday replace them; that opt-out leaves an externally visible trace, as the sketch below shows. For now, though, this is all voluntary. The platform itself remains neutral, a reflection of the broader industry’s struggle to decide whether artificial intelligence is a helpful assistant or a threat to the very idea of authentic human expression.
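
As a practical footnote: opt-outs of this kind are typically expressed in a site’s public robots.txt file, and Substack’s dashboard setting is widely reported to work the same way, by adding disallow rules for known AI crawlers. The short Python sketch below checks for that trace; the publication URL and the crawler names are illustrative assumptions, not an official list.

```python
# Reader-side check: Substack's "block AI training" setting is widely
# reported to add disallow rules for AI crawlers to a publication's
# robots.txt. The bot names and the URL below are assumptions for
# illustration, not an official list.
from urllib.request import urlopen

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]  # assumed examples


def blocks_ai_training(publication_url: str) -> dict[str, bool]:
    """Fetch a site's robots.txt and report which known AI-crawler
    user-agents it mentions (a rough proxy for an opt-out)."""
    with urlopen(f"{publication_url.rstrip('/')}/robots.txt") as resp:
        robots = resp.read().decode("utf-8", errors="replace")
    return {bot: bot in robots for bot in AI_CRAWLERS}


if __name__ == "__main__":
    # Hypothetical publication URL for illustration.
    print(blocks_ai_training("https://example.substack.com"))
```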