Agent Traffic Grew 7,851%. Most Analytics Still Counts It as Buyer Intent.
HUMAN Security's 2026 State of AI Traffic report measured automated traffic growth at 23.51% in 2025 against 3.10% for humans, with agentic browsers up 7,851% year over year. Most marketing analytics still credits those sessions as buyer intent, which silently inflates CPA on every campaign that depends on conversion signal. The fix is filtering by user-agent and ASN at the edge before the traffic ever reaches your attribution model.
The 8x growth gap is built on real telemetry
HUMAN Security runs one of the largest behavioral signal networks in ad tech and processes more than a quadrillion digital interactions a year, so when the company's 2026 State of AI Traffic report says automated traffic grew 23.51% year over year while human traffic grew 3.10%, the source is network telemetry, not a vendor survey panel.
The agentic-browser line is the one I keep coming back to. According to Implicator.ai's breakdown, traffic from agentic browsers, the kind OpenAI's Operator and Anthropic's computer-use mode generate, jumped 7,851% year over year. AI scrapers grew 597%. Training crawlers now account for 67.5% of all AI-driven traffic, which means roughly two-thirds of the AI traffic touching your site isn't even pretending to convert. It's just learning.
Concentration matters too: 95% of AI-driven automation lands on retail, streaming, and travel sites, and according to PPC Land's coverage of the report, AI traffic to U.S. retail sites surged 269% year over year in March 2026 alone. If you sell something online, you are the dataset.
Why agent intent isn't buyer intent
This is where it gets uncomfortable for measurement teams. Agentic browsers render JavaScript, execute page events, hit your conversion pixels in some flows, and move through funnels in patterns that look superficially human. They will sit on a product page for thirty seconds. They will scroll. Some of them will fill in forms with placeholder data and stop short of submit. From the outside, that looks engaged.
It isn't. The PPC Land piece flags a stat that should be on every analytics review deck this quarter: OpenAI's crawl-to-visit ratio is roughly one human visit per 198 crawls, against Google's one visit per 6. The vast majority of what those agents are doing on your site is consumption with no human at the other end of the loop.
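A quick back-of-envelope on what those ratios imply for session-level contamination, assuming (as a simplification) that each crawl and each human visit register as one session in the analytics stack:

```python
# What a crawl-to-visit ratio implies about how many "sessions"
# actually have a human behind them. Simplified: treats each crawl
# and each human visit as one session.

def human_share(crawls_per_visit: float) -> float:
    """Fraction of total sessions that have a human at the other end."""
    return 1 / (crawls_per_visit + 1)

openai_share = human_share(198)  # OpenAI: 1 human visit per 198 crawls
google_share = human_share(6)    # Google: 1 human visit per 6 crawls

print(f"OpenAI-origin sessions with a human: {openai_share:.1%}")  # ~0.5%
print(f"Google-origin sessions with a human: {google_share:.1%}")  # ~14.3%
```

On those numbers, a stack that counts OpenAI-origin sessions as engagement is measuring a population that is more than 99% machine.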
For paid media, the consequence is a quiet attribution leak. Smart Bidding, Meta's Andromeda, anything that scores audiences off behavioral signal, all of those are training partly on sessions where the buyer was a model. From what I've seen, this is the same failure mode that tripped up Trade Desk's Koa agents earlier this month, except in reverse. There the stack was filtering legitimate AI shoppers as bots. Here it's the opposite. Most stacks are letting non-buying agents pass straight into the intent layer.
The flip side worth naming: if your e-commerce funnel is built around an AI agent doing the shopping for a real human (the Operator-style flow), then filtering out all agent traffic will kick legitimate purchases off the conversion list too. That's the harder problem, and it's why HUMAN's framing of "behavioral validation" exists. For most marketing teams, though, the 67.5% of AI traffic that's training crawlers and the chunk of agentic browsers doing pure research are pure contamination, not buyers.
HUMAN Security's CEO Stu Solomon put it in measured terms in the press release: "AI-driven traffic is no longer experimental. It's becoming embedded in core digital customer experiences, particularly in industries like e-commerce, media, and travel." That's the polite version. The blunter version is that the audiences your bidding algorithms are learning from now include a meaningful chunk of code.
The filter most marketing teams haven't shipped yet
The good news is the action is small. The bad news is almost no one has done it.
At the edge (Cloudflare, Fastly, Akamai, your CDN of choice), build a user-agent rule that flags or strips known agent traffic before it reaches GA4 or your conversion pixel. The list to start with: ChatGPT-User, OAI-SearchBot, OAI-AdsBot, Anthropic-Computer-Use, ClaudeBot, PerplexityBot, Perplexity-User, GoogleAgent, GPTBot. These are public, documented, and rarely change. None of them requires behavioral analysis to identify, just a deny-list match on the User-Agent header.
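As a sketch, that deny-list check is a few lines of Python. The token list mirrors the one above; substring matching is an assumption based on how these agents embed their names inside longer User-Agent strings:

```python
import re

# Agent user-agent tokens from the list above. Substring matching is
# deliberate: real headers embed these tokens in longer strings, e.g.
# "Mozilla/5.0 ... compatible; GPTBot/1.0; +https://openai.com/gptbot".
AGENT_UA_TOKENS = [
    "ChatGPT-User", "OAI-SearchBot", "OAI-AdsBot",
    "Anthropic-Computer-Use", "ClaudeBot",
    "PerplexityBot", "Perplexity-User",
    "GoogleAgent", "GPTBot",
]
_AGENT_RE = re.compile(
    "|".join(re.escape(tok) for tok in AGENT_UA_TOKENS), re.IGNORECASE
)

def is_agent_ua(user_agent: str) -> bool:
    """True if the User-Agent header matches a known agent token."""
    return bool(_AGENT_RE.search(user_agent or ""))
```

At the edge, the safer move is to tag matching requests (a header, a query parameter) and suppress the analytics tag downstream, rather than blocking outright, given the Operator-style flows mentioned earlier where an agent shops for a real human.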
Layer two is ASN-level filtering. Most agentic browser sessions originate from a small set of cloud datacenter ranges, AWS, GCP, Azure, Hetzner, OVH. You won't catch every agent that way (some run on residential proxies), but you will catch most of them, and the false-positive rate against actual buyers is low because consumers don't typically browse Shopify checkouts from EC2.
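A minimal sketch of the range check, with the caveat that the CIDR blocks below are illustrative placeholders only. In practice you would load the providers' published ranges (AWS's ip-ranges.json, GCP's cloud.json, Azure's service tags) or resolve the client IP to an ASN via your CDN or a GeoIP database:

```python
import ipaddress

# Placeholder ranges for illustration. Replace with the providers'
# published lists or an ASN lookup; these are NOT complete or current.
DATACENTER_CIDRS = [
    ipaddress.ip_network("3.0.0.0/8"),     # example AWS-style range
    ipaddress.ip_network("34.64.0.0/10"),  # example GCP-style range
    ipaddress.ip_network("20.0.0.0/8"),    # example Azure-style range
]

def is_datacenter_ip(client_ip: str) -> bool:
    """True if the client IP falls inside a known datacenter range."""
    try:
        ip = ipaddress.ip_address(client_ip)
    except ValueError:
        return False  # malformed IP: don't classify, don't crash
    return any(ip in net for net in DATACENTER_CIDRS)
```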
For attribution itself: don't fire conversion events when the user-agent matches an agent string. That alone will keep the worst contamination out of Smart Bidding and Andromeda's training data. If you want to keep the data for analysis, set a custom dimension in GA4 (something like traffic_origin = agent) so you can segment without losing the records.
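Both halves of that advice, the conversion gate and the traffic_origin dimension, can be sketched as a server-side helper that builds the payload for GA4's Measurement Protocol. The `agent_session` event name is an assumption (any non-conversion name works), and `is_agent` stands in for whatever user-agent check runs at the edge:

```python
from typing import Callable

def build_ga4_event(
    client_id: str, user_agent: str, is_agent: Callable[[str], bool]
) -> dict:
    """Build a GA4 Measurement Protocol payload.

    Agent sessions get a non-conversion event tagged traffic_origin=agent,
    so the record is kept and segmentable but never feeds bidding signal.
    Human sessions get the real purchase event.
    """
    agent = is_agent(user_agent)
    return {
        "client_id": client_id,
        "events": [{
            "name": "agent_session" if agent else "purchase",
            "params": {"traffic_origin": "agent" if agent else "human"},
        }],
    }

# POST the payload as JSON to:
# https://www.google-analytics.com/mp/collect?measurement_id=G-XXXX&api_secret=...
```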
HUMAN Security's argument in the report is that static identity-based controls aren't enough anymore, and that teams should move toward continuous behavioral validation. That's true at the security layer. At the marketing layer, the static controls still get you 80% of the way for a fraction of the effort. Start there.
One specific GA4 step worth doing this week: open Admin, Data Streams, the web stream, and check the "List unwanted referrals" and "Define internal traffic" panels. Neither panel filters agent user-agents natively. The closest workaround is sending a custom event parameter from the page (set via a server-side check on the User-Agent header) and creating an audience exclusion. Clunky, but it works, and it's better than waiting for Google to ship a feature.
Why this matters more in two months than it does today
HUMAN's Adobe integration announcement on April 21 is the part that should pull this story onto your roadmap. The company is now embedding its Agentic Visibility capability natively into Adobe Experience Platform, which means within the next quarter, marketers using Adobe will get an out-of-the-box read on how much of their inbound traffic is human and how much is agent. That's going to surface uncomfortable numbers across the industry.
Anyone running their own analytics stack should not wait for a vendor dashboard to tell them what their server logs already know. Pull a week of access logs, count the requests by user-agent, and segment by ASN. You'll have an honest read on your own contamination rate before lunch. Most teams I've talked to are surprised by what they find, and a few of them are quietly embarrassed.
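That log pull is a short script. A rough sketch in Python, assuming Combined Log Format with the user-agent as the final quoted field (adjust the regex to your server's format; the token list is a starting point, not exhaustive):

```python
import re
from collections import Counter

# Combined Log Format: IP first, User-Agent is the last quoted field.
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

AGENT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot", "ChatGPT-User",
                "OAI-SearchBot", "Anthropic-Computer-Use")

def contamination_report(log_lines):
    """Count requests per user-agent and the share matching agent tokens."""
    by_ua, agent_hits, total = Counter(), 0, 0
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that don't parse
        ua = m.group("ua")
        by_ua[ua] += 1
        total += 1
        if any(tok in ua for tok in AGENT_TOKENS):
            agent_hits += 1
    rate = agent_hits / total if total else 0.0
    return by_ua, rate
```

Feed it a week of access logs and the second return value is your raw contamination rate, before any behavioral analysis at all.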
Gartner's projection in the report, that AI agents could drive 80% of internet traffic by 2035, sounds far away. The 7,851% growth rate suggests the timeline is more like eighteen months to a serious measurement crisis if attribution stacks don't adapt. The teams that segment agent traffic out of their bidding signal this quarter will spend Q3 with cleaner audiences while everyone else wonders why their CPA is drifting up for no obvious reason.
Notice Me Senpai Editorial