r/SEO's 1,800-Posts-a-Month Thread Maps Where Google's Classifier Trips

The classifier doesn't audit your editorial process. It infers it from your publishing cadence.

An r/SEO thread titled "Is there any way to make 1,800 posts per month work" drew 101 comments on May 15, 2026, and almost none of them defended the premise. The thread is the clearest signal yet that practitioners have stopped debating whether scaled AI content survives Google's classifier, and started trading the volume thresholds where deindexing actually trips. The number people now hand each other as a defensible ceiling is roughly 10 to 15 articles a week, not per day.

I'll say upfront: I'm writing this from a site that got hit by exactly the classifier the thread is dissecting. So the analysis here isn't theoretical. It's me reading my own incident report through the comments of people who had the same one.

The thread isn't a defense, it's a wake

The framing of the original post matters. The author didn't ask "should I publish 1,800 posts a month." They asked "is there any way to make it work." That phrasing concedes the default answer is no, and asks whether any structural workaround exists.

The thread converged on three things. First, that 1,800 posts a month (about 60 a day) is incompatible with anything resembling editorial review under any realistic team structure. Second, that the indexation rate alone gives the game away long before rankings do, because Google's crawler triages new content per-site based on perceived value, and a site spamming 60 URLs a day to a crawler that's allocated it 8 per day is going to leave 87% of its production sitting unindexed regardless of quality. Third, that the people who still claim to be running 1,800-a-month operations profitably are almost always either selling a course or running the next site on the same scheme after the previous one got de-ranked.
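To make that second point concrete, here is a back-of-the-envelope sketch of the indexation math. Both numbers are the thread's illustrative figures, not anything Google publishes: 60 URLs a day is the 1,800-a-month cadence, and 8 crawled URLs a day is the commenters' guess at what a young, low-trust site actually gets.

```python
# Back-of-the-envelope indexation math using the thread's illustrative numbers.
# Neither figure comes from Google; the crawl allowance is an assumed value
# for a young, low-trust domain.

published_per_day = 60   # ~1,800 posts a month
crawled_per_day = 8      # assumed crawl allowance for a young site

indexed_share = crawled_per_day / published_per_day
unindexed_share = 1 - indexed_share

print(f"Indexed:   {indexed_share:.0%}")    # ~13%
print(f"Unindexed: {unindexed_share:.0%}")  # ~87% of production never gets crawled
```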

That last point is the one I want to draw out, because it explains why the marketing-blog narrative on this topic is so different from the practitioner narrative. The blog posts saying scaled AI content "still works in 2026" tend to come from people whose product is the volume itself. The practitioners on r/SEO are usually the ones cleaning up the mess afterwards.

Google's policy says volume isn't the trigger. The data says it's the strongest correlate.

Google's spam policies page defines scaled content abuse as content "generated for the primary purpose of manipulating Search rankings and not helping users." The policy is deliberately about intent and value, not raw publishing velocity. Google's March 2024 policy update hammered the point home: the policy applies "no matter whether content is produced through automation, human efforts, or some combination."

In other words, Google's official line is that volume isn't the trigger. And in some technical sense, that's correct. There are case studies floating around of sites with 10,000+ pages staying untouched, and sites with 50 pages getting penalized. Breakline's overview makes this point bluntly: "Volume alone isn't the trigger."

But the moment you look at the post-March-2026-update patterns, volume is what the classifier is reaching for as its cheapest signal. Digital Applied's pattern analysis after that update found niche information sites with 500+ AI pages losing 50% to 80% of organic traffic in a two-week window. Affiliate sites lost 40% to 70%. The sites that survived had one trait in common: they were not publishing at velocities that exceeded what a small editorial team could plausibly review.

Their working threshold for risk: 10+ articles per day, sustained for months. Their human-pace benchmark: 10 to 15 high-quality articles per week, total, from a five-writer team. That ratio (1,800 a month vs. roughly 60 a month) is the gap the classifier now appears to be using as a feature.

What the deindexing wave actually looks like from inside

Former Google search team member Pedro Dias surfaced the recent deindexing pattern on X at the end of April 2026, asking whether others were seeing a higher rate of random URL deindexing since the start of the month. The official response from John Mueller was the predictably opaque "some sites go up, some sites go down."

That's the part that frustrates practitioners and shows up all over the thread: there's no published threshold to back away from. You find out you crossed it because your Search Console graph flatlines.

For context, our own profile when we got caught: 514 articles published in the site's first 47 days of existence, peak 20 per day, 100% AI-generated, with weak author signals, tag-slug fragmentation, and duplicate JSON-LD schema firing on every URL. That profile sits somewhere in the middle of the r/SEO horror stories. Not the worst, not nearly the worst. Still enough.

The closest comparable case in our own coverage was an r/SEO sports site that lost 99.8% of its impressions following bad AI SEO advice. The shape of the damage is the same. The volume profiles aren't identical, but the classifier doesn't care about that distinction.

The thresholds practitioners are actually defending now

Reading the thread and the surrounding post-update commentary, the working consensus seems to be roughly this. None of it is from Google. All of it is pattern-matching from people who got burned:

Under 10 articles a day on a young domain is the soft ceiling before classifier signals start tightening. Under 3 a day for the first 90 days of a domain is the safer cap if you want a chance at consistent indexation. Domain age matters more than any single volume number, which is why month-three on a new site is where the wreckage clusters.
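To summarize those rules of thumb rather than prescribe a tool, here is a sketch of the thread's thresholds as a single cadence check. The numbers are practitioner folklore from the thread; the function and its labels are my own framing, not anything anyone actually runs.

```python
def cadence_risk(articles_per_day: float, domain_age_days: int) -> str:
    """Rough translation of the thread's rules of thumb into one check.

    Thresholds are practitioner folklore, not published Google limits:
    - under 3/day for a domain's first 90 days is the safer cap
    - under 10/day on a young domain is the soft ceiling
    - sustained 10+/day reads as automated at any domain age
    """
    if domain_age_days < 90 and articles_per_day > 3:
        return "high risk: new domain, cadence above the 3/day safer cap"
    if domain_age_days < 365 and articles_per_day > 10:
        return "high risk: young domain, past the 10/day soft ceiling"
    if articles_per_day > 10:
        return "elevated risk: sustained 10+/day pattern-matches as automated"
    return "within the thread's defensible range"

# Example: the 1,800-a-month profile on a three-month-old domain
print(cadence_risk(articles_per_day=60, domain_age_days=90))
```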

The other thing the thread agreed on was that the production-style argument ("we have editors, we have prompts, we have a review layer") doesn't survive the classifier if the publishing pace exceeds what those layers could plausibly review. The classifier doesn't audit your process. It infers it from the cadence.

From what I've seen, the actual editorial-review benchmark that holds up is closer to 15 minutes of human attention per article, minimum. Anything below that floor produces content that statistically pattern-matches as automated, even when it isn't.

What changes for anyone still running scale

The immediate action, if you're publishing at any volume north of 5 a day on a domain younger than 12 months: pull cadence first, prune second. Pruning before slowing is the wrong order, because the classifier is currently weighting recent publishing velocity heavily.

The other lever that emerged in the thread, which doesn't get enough attention in the recovery-guide ecosystem: per-author publication concentration. Sites where one byline (or no byline) ships everything are getting hit harder than sites with the same volume distributed across plausible authors with biographical depth. That's not "add headshots." That's "the entity backing each piece needs to be retrievable."

I think most teams running scaled operations will still try to brazen through this. The economics of scaled affiliate publishing only work at volume, and dropping to 15 a week isn't really the same business. But the math on the other side of the classifier has changed, and the practitioner room has noticed even if the product-side hasn't.

The ceiling that's emerging as defensible

If the thread settles anything, it's that the number 1,800-a-month operations should be aiming at isn't a smaller version of 1,800. It's a different unit of measurement entirely. Per-week, not per-day. With editorial review you could actually defend on a phone call with a Google webspam engineer if it came to that.

For the record, our own recovery plan caps publishing at five articles a day, runs the pipeline every two hours instead of every 15 minutes, and treats anything that doesn't clear the 15-minutes-of-human-review bar as not worth shipping. That's not a number anyone in the thread would call safe. It's the number that's defensible enough for a site that already crossed the line once.
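For what that looks like in practice, here is a simplified sketch of the throttle in front of our publish step. The five-a-day cap and two-hour interval are our own numbers, and the CMS integration is a placeholder; this only illustrates the cap, not a recommended setup.

```python
from datetime import date

class PublishThrottle:
    """Sketch of the cap in front of the publish step.

    Five articles a day, with the pipeline invoked every two hours instead
    of every 15 minutes. The numbers are ours, not a recommendation from
    Google or from anyone in the thread.
    """

    MAX_PER_DAY = 5

    def __init__(self):
        self._day = date.today()
        self._count = 0

    def allow(self) -> bool:
        today = date.today()
        if today != self._day:              # new day: reset the counter
            self._day, self._count = today, 0
        if self._count >= self.MAX_PER_DAY:
            return False                    # daily cap reached; skip this run
        self._count += 1
        return True

# Called once per two-hour pipeline run (e.g. from cron), not on a 15-minute loop:
# if throttle.allow() and article.passed_human_review:
#     cms.publish(article)   # cms and article are placeholders for your own stack
```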

The honest read on the 1,800-a-month playbook isn't that it stopped working. It's that the people who needed it to work are quietly migrating to the version of the strategy that doesn't require Google to be looking the other way.

Notice Me Senpai Editorial