Reddit Removes 100,000 Bot Accounts Daily. They Will Not Say How Many Are Left.

Reddit ad products promise authentic community engagement, but the bot numbers suggest advertisers should verify that promise themselves.

Reddit is removing 100,000 bot accounts every day. That sounds like a company taking the problem seriously until you ask the obvious follow-up: how many bots are still on the platform right now? Reddit won't say. They also won't say what percentage of historical ad impressions were served to non-human traffic. And they won't commit to retroactive auditing of campaigns that already ran. For a platform pulling in $726 million in Q4 2025 ad revenue, that's a lot of questions left unanswered.

I think advertisers are underreacting to this. Probably because the disclosure was buried inside a broader "look how hard we're fighting bots" narrative, which is exactly how you'd frame it if the underlying numbers were uncomfortable.

The incentive problem nobody wants to name

Here's what makes the Reddit bot situation structurally different from, say, Meta or Google dealing with click fraud. Reddit's entire ad pitch is built on authentic community conversations. Their newer products, the Community Intelligence suite specifically, sell brands on access to real purchase intent signals and genuine sentiment data. The premise is that Reddit users are uniquely honest because they're pseudonymous and talking to peers, not performing for followers.

Bots break that premise completely. And not in an abstract way. If 5% of the accounts active on a subreddit are automated (a number I'm pulling from thin air because Reddit refuses to provide a real one), every sentiment analysis report, every "conversation trend" dashboard, every behavioral signal feeding their ad targeting is degraded by at least that percentage. Probably more, because bots tend to be disproportionately active.
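To see why "probably more" follows, here's a back-of-the-envelope sketch with made-up numbers (the 5% bot share and the 3x activity multiplier are assumptions for illustration, not Reddit figures): when bots are a small fraction of accounts but each one is several times as active as a typical human, their share of raw engagement is a multiple of their share of accounts.

```python
# Illustrative arithmetic only -- the inputs are assumptions, not Reddit data.
bot_share_of_accounts = 0.05     # assumed fraction of accounts that are bots
bot_activity_multiplier = 3.0    # assumed: each bot is 3x as active as a typical human

bot_engagement = bot_share_of_accounts * bot_activity_multiplier
human_engagement = (1 - bot_share_of_accounts) * 1.0
bot_share_of_engagement = bot_engagement / (bot_engagement + human_engagement)

print(f"{bot_share_of_engagement:.1%} of engagement")  # roughly 13.6%, not 5%
```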

The structural tension is straightforward: Reddit's advertising revenue depends on engagement metrics. Bots inflate engagement metrics. There is zero financial incentive for Reddit to be fully transparent about the scale of contamination. I'm not saying they're being deliberately deceptive. I'm saying the incentives don't point toward radical honesty, and the gaps in their disclosure are consistent with that.

Max campaigns and the signal contamination question

Reddit launched Max campaigns in January 2026, their answer to Meta's Advantage+ and Google's Performance Max. The pitch: AI-optimized campaigns that deliver 17% lower cost-per-action compared to standard Reddit ads. On paper, that sounds like an upgrade. And sometimes it probably is.

But the 17% CPA improvement claim depends entirely on the quality of the behavioral signals the AI is optimizing against. If bot accounts are clicking, upvoting, and engaging with content in ways that look human-ish but aren't, the optimization engine is partially learning from contaminated data. It's training on noise and calling it signal.

In one account we manage (roughly $40k/quarter on Reddit, mostly B2B SaaS targeting specific subreddits), we noticed something odd about eight months ago. Engagement rates on certain campaigns were suspiciously high relative to downstream conversions. The click-through numbers looked great in Reddit's dashboard, but pipeline attribution told a different story. We couldn't prove it was bot traffic. We still can't. But the gap between Reddit-reported engagement and actual business outcomes has been wider than what we see on comparable Meta campaigns, and it's been consistent enough that I stopped trusting Reddit's engagement metrics at face value.

That's anecdotal, obviously. One account. But when I've mentioned this to other media buyers running Reddit spend, the nods are pretty universal.

Dynamic Product Ads have the same vulnerability

Reddit's Dynamic Product Ads reportedly deliver 2x higher ROAS versus standard campaigns. That's a strong number. It's also a number that gets less impressive if a meaningful slice of the "engagement" driving the optimization was automated. DPA targeting relies on behavioral patterns: users who browsed certain products, engaged with related content, or showed purchase intent through their activity. If bot accounts are generating synthetic versions of those signals, the targeting model is partially optimized against fake behavior.

I don't think this means Reddit ads are worthless. From what I've seen, the platform still reaches genuine communities that are hard to find elsewhere, especially in niche B2B verticals and hobbyist categories. But I do think the performance numbers Reddit publishes should come with an asterisk until they're willing to disclose how much of the engagement underlying those numbers is verified human.

The verification rollout is vague on purpose

Reddit announced a tiered human verification system. Passkeys as the lightest option, World ID biometric verification in the middle, government ID only where legally required. That sounds reasonable until you notice there's no timeline for rollout. No target percentage of verified users. No commitment to making verification mandatory for accounts that can engage with ads.

The [App] Label System for authorized bots is a step, sure. Accounts using permitted automation get a visible tag. But permitted automation isn't really the problem here. The problem is the unpermitted kind, the accounts that look human and engage like humans and inflate metrics like humans, except they're not. Reddit CEO Steve Huffman acknowledged that AI-generated content is increasingly prevalent on the platform and called it "part of how people will communicate in the future (albeit annoying)." Which is a remarkably relaxed stance for someone whose ad products depend on the authenticity of exactly that content.

And to be fair, this isn't entirely a Reddit-specific problem. Every platform deals with bots. Meta has armies of fake accounts. X has, well, you know. But most of those platforms don't build their advertising pitch around the unique authenticity of their user base. Reddit does. That's what makes the bot problem existentially more threatening to their ad business specifically, and it's probably why the brands shifting experimental budgets toward Reddit should be demanding more transparency before scaling spend.

A 2024 lawsuit already tested this

LevelFields, an investment platform, sued Reddit in 2024 alleging the platform lacked adequate security to prevent automated clicks from inflating advertiser costs. The case highlighted something advertisers had been whispering about for a while: that Reddit's fraud detection wasn't keeping pace with the sophistication of bot operations on the platform.

The lawsuit didn't end in a massive settlement or a regulatory crackdown. But it did put the click fraud question on the record in a way Reddit couldn't completely ignore. The current wave of countermeasures, the verification testing, the enhanced reporting tools, the daily removal numbers, reads at least partially like a response to legal pressure rather than proactive transparency. That distinction matters. Companies that disclose because they have to tend to behave differently from companies that disclose because they want to.

If you're spending on Reddit right now, do this

Pull your last 90 days of Reddit campaign data. Compare in-platform engagement metrics (clicks, upvotes, engagement rate) against your own first-party conversion data: CRM entries, demo requests, actual purchases, whatever your real KPI is. If the gap between Reddit-reported engagement and your downstream outcomes is more than 30% wider than the same gap on your Meta or Google campaigns, you probably have a bot exposure problem. Not definitely. But probably.
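Here's a minimal sketch of that comparison, assuming you can export a per-campaign CSV with platform-reported clicks alongside your own first-party conversion counts. The file name, column names, and the 30% cutoff are placeholders, not anything Reddit's reporting actually gives you.

```python
import pandas as pd

# Expected columns: platform ("reddit", "meta", ...), clicks, conversions.
# All names here are assumptions; swap in whatever your own exports contain.
df = pd.read_csv("last_90_days_campaigns.csv")

df["conv_per_click"] = df["conversions"] / df["clicks"]
by_platform = df.groupby("platform")["conv_per_click"].mean()

# How much worse does a Reddit-reported click convert than a Meta-reported click?
gap = 1 - by_platform["reddit"] / by_platform["meta"]
print(f"Reddit clicks convert {gap:.0%} worse than Meta clicks")
if gap > 0.30:
    print("Gap exceeds 30% -- worth investigating for non-human engagement")
```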

Second thing. If you're running Max campaigns, pull the audience composition reports and look at geographic and temporal patterns. Bot farms tend to cluster in specific regions and operate on predictable schedules. Sudden engagement spikes from unusual geos or at odd hours are worth flagging to your Reddit rep. Not because they'll necessarily fix it, but because having the complaint on record matters if this eventually becomes an industry-wide accountability issue the way Meta's legal exposure has.
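If your export includes per-event geography and timestamps, a quick pass like the sketch below is enough to surface the obvious anomalies before you take them to your rep. The file name and column names are assumptions, not Reddit's actual schema.

```python
import pandas as pd

# Assumed export: one row per engagement event with a timestamp and a country code.
events = pd.read_csv("reddit_engagement_events.csv", parse_dates=["timestamp"])

# Geographic concentration: countries you aren't targeting that still show up heavily.
geo_share = events["country"].value_counts(normalize=True)
print(geo_share.head(10))

# Temporal pattern: bot farms tend to run on schedules, so look for hours of the day
# with engagement far above the campaign's own baseline.
hourly = events["timestamp"].dt.hour.value_counts().sort_index()
spikes = hourly[hourly > hourly.mean() + 2 * hourly.std()]
print("Hours more than two standard deviations above the mean:")
print(spikes)
```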

Third. Don't kill your Reddit spend entirely. The platform genuinely reaches communities you can't access elsewhere. But cap it. I wouldn't allocate more than 10-15% of a paid social budget to Reddit until they commit to independent third-party verification of their engagement metrics. That's not an unreasonable ask. It's the standard we hold every other major platform to.

From what I've seen, the advertisers who do best on Reddit right now are the ones treating it as a high-potential, low-trust channel. They verify everything independently. They don't optimize against Reddit's own metrics. They use it for reach into specific communities and measure success entirely on their own terms. It's more work. It's also the only honest way to run the numbers until Reddit decides to run theirs honestly too.

By Notice Me Senpai Editorial