Trade Desk's Koa Agents Hit a Stack That Treats AI Traffic as Bots

The Trade Desk's Koa Agents launched into a measurement stack where most AI traffic still lands as 'Direct' or gets filtered out entirely.

The Trade Desk launched Koa Agents on April 21, 2026, with Stagwell as the first partner to pilot agentic media buying through the new Open Agentic Kit framework. The catch sits one layer below the announcement: 70.6% of AI-referred sessions already land as Direct in GA4, and in controlled tests the platform's IAB/ABC bot filter missed every bot presenting a standard browser user-agent. Marketers will see noisier ROAS this quarter before anyone admits the cause.

What actually shipped this week

Koa Agents are in alpha. Buyers describe a campaign goal in natural language and an AI agent handles the planning, buying, optimization, and measurement steps that used to take days of manual setup. Stagwell is the first global network on board and plans to bring the capability to select clients in closed beta this summer.

The Trade Desk is not alone, and that part matters more than the launch itself. Within a single week, the IAB Tech Lab published an Agentic Real-Time Framework, PubMatic introduced AgenticOS, Criteo extended its Model Context Protocol, and the broader Ad Context Protocol effort started circulating among DSPs and publishers, according to PPC Land's roundup. The Open Agentic Kit is The Trade Desk's bid to be the integration layer everyone defaults to before any of those competing standards lock in.

There is also a reason the buy side is moving faster than the auction itself. Real-time bidding is brutal on agents. A hallucinated decimal on a CPM bid is a real money problem in milliseconds, so most early agentic work sits one layer up at planning, optimization, and measurement instead of inside the live auction. Personally, I think we are at least 18 months away from anyone trusting a frontier model to set bids autonomously without a human-defined floor.

The attribution stack has not moved an inch

This is where the optimism cracks. Clickport's recent attribution audit found that 70.6% of AI-referred traffic lands in the GA4 Direct bucket because the AI client strips or sanitizes the referrer header before the request ever reaches your tag manager. The same audit found AI-referred sessions converting at 14.2% versus 2.8% for organic search, roughly a 5x lift, with 10 to 32% higher revenue per session. None of that lift shows up in your acquisition reports as anything other than Direct.

Then there is the bot filter problem, which is uglier. Ingest Labs ran controlled tests against GA4's IAB/ABC bot list and found it caught zero sessions when the agent used a standard browser user-agent string. Most modern agents do exactly that on purpose. They run automation libraries, simulate mouse movements, and use sandboxed browser environments specifically so they look like organic human users to the analytics layer.
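The flip side of that evasion is that you can hunt for it yourself in server-side logs. A minimal sketch of the pattern, assuming hypothetical session fields (`user_agent`, `referrer`, `duration_s`, `viewport`) and illustrative thresholds, none of which come from GA4 or any vendor API:

```python
# Heuristic classifier for sessions that present a standard browser
# user-agent but carry agent-like signals. Field names, referrer hints,
# and thresholds are illustrative assumptions, not a documented schema.

AI_REFERRER_HINTS = ("chatgpt.com", "perplexity.ai", "claude.ai", "gemini.google.com")

def classify_session(session: dict) -> str:
    """Return 'ai_referred', 'likely_agent', or 'human' for one session."""
    ref = (session.get("referrer") or "").lower()
    # When the AI client does pass a referrer, attribute it directly.
    if any(hint in ref for hint in AI_REFERRER_HINTS):
        return "ai_referred"

    ua = session.get("user_agent", "")
    looks_standard = "Mozilla" in ua and "Chrome" in ua

    # Standard UA plus an empty referrer, a sub-8-second visit, and a
    # headless-typical viewport is the combination worth flagging.
    agent_signals = 0
    if not ref:
        agent_signals += 1
    if session.get("duration_s", 999) < 8:
        agent_signals += 1
    if session.get("viewport") in ("800x600", "1280x720", None):
        agent_signals += 1

    if looks_standard and agent_signals >= 2:
        return "likely_agent"
    return "human"
```

No single signal is conclusive; the point is that two or more together on a "normal" UA string is exactly what the IAB/ABC filter is not looking at.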

For context, automated traffic now accounts for 72.4% of all internet activity in 2026, with AI scraper traffic up 300% year over year. Akamai opened a new Attack Insight specifically for AI bot traffic spikes on April 1 because volume started breaking their existing thresholds. Whatever your stack is, the inputs have changed shape and most analytics platforms have not caught up.

Why ROAS gets noisier before it gets cleaner

Take Koa Agents at face value for a second. A buy-side agent allocates spend across DSPs at machine speed, optimizing toward conversions it can see. A measurement agent on the same stack feeds it back what worked. Both rely on attribution data that, today, is unreliable in two specific ways.

First, AI-referred conversions look like Direct. From the buy-side agent's view, a paid placement that drove a session through a ChatGPT browse-then-buy flow looks like a free Direct conversion. Spend gets pulled from the campaign that actually drove the visit, and the agent quietly optimizes against itself. We covered the related case where ChatGPT started serving ads to logged-out users despite OpenAI's own help page, which adds another fuzzy referrer category to the same bucket.
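The arithmetic of that self-defeating loop is simple enough to sketch. All figures below are invented for illustration; nothing here comes from Clickport or The Trade Desk:

```python
# Toy model: some paid conversions route through an AI browse step,
# lose their referrer, and report as Direct. The buying agent sees
# only measured_roas and pulls spend from a campaign that is working.

def measured_vs_true_roas(spend: float, aov: float,
                          paid_conversions: int, via_ai_browse: int) -> tuple:
    """Return (true_roas, measured_roas) when via_ai_browse conversions
    collapse into Direct instead of crediting the paid campaign."""
    true_roas = paid_conversions * aov / spend
    measured_roas = (paid_conversions - via_ai_browse) * aov / spend
    return true_roas, measured_roas

# $10k spend, $120 average order value, 150 real conversions,
# 60 of which arrived via an AI browse-then-buy flow.
true_r, measured_r = measured_vs_true_roas(10_000, 120, 150, 60)
print(f"true ROAS {true_r:.2f} vs measured {measured_r:.2f}")  # 1.80 vs 1.08
```

A campaign genuinely returning 1.8x reads as 1.08x, and an agent optimizing at machine speed acts on the wrong number far faster than a human planner would.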

Second, agent-driven sessions on the buyer's side often get quietly filtered. If a brand's competitor uses an AI agent to research the brand's site, that visit might be scrubbed before it shows up anywhere. From a security view, that is fine. From a brand view, it is invisible competitive intelligence and, more importantly, it is missing fingerprints on a session that may convert through another channel later.

Neither of these problems is theoretical. The Clickport data shows the attribution gap is already live, and the Akamai changelog confirms the volume side. On paper, the agentic launches sound like an upgrade to programmatic. And in some accounts they probably are. But measurement is where this gets messy.

The 30-minute audit that exposes the gap

There is a short, ugly version of this audit. It costs nothing and pays for itself the first time it surfaces a misallocated dollar.

  1. Open GA4 and segment Direct traffic by user-agent string. Look for elevated session counts on UA strings that contain Mozilla and Chrome but show anomalous viewport sizes, language headers, or session durations under 8 seconds. If you have server-side analytics, do this there. The data is cleaner.
  2. Cross-reference any UA classes you find against the Akamai April 1 changelog and your own ad platform's invalid-traffic taxonomy. The Akamai list is publicly maintained and gets updated more often than GA4's filter does.
  3. In your ad platform, check whether the platform's invalid-traffic classification is excluding sessions you would consider valid. The Trade Desk's UI does not yet expose Koa Agents' own assumptions on this. Until it does, treat any week-over-week ROAS shift larger than 15% with extra scrutiny instead of letting the buying agent learn from it.
  4. Add a referrer-fallback rule in your tag manager. If the referrer is empty but the request UA matches a known AI client, set a synthetic source like ai_agent so it does not collapse into Direct. This is not perfect attribution. It is at least a category to argue with.
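Step 4 can be sketched as a small routine in a server-side tagging endpoint. The AI-client user-agent substrings below are published crawler identifiers, but the list is a starting assumption to maintain against vendor documentation, not an authoritative registry:

```python
# Referrer-fallback rule: empty referrer plus a known AI-client UA
# gets a synthetic source instead of collapsing into Direct.

from typing import Optional

AI_UA_HINTS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot",
               "PerplexityBot", "ClaudeBot")

def resolve_source(referrer: Optional[str], user_agent: str) -> str:
    """Return the traffic-source label to forward to analytics."""
    if referrer:
        return referrer                    # normal attribution path
    if any(hint in user_agent for hint in AI_UA_HINTS):
        return "ai_agent"                  # synthetic source, arguable but visible
    return "direct"                        # genuinely unattributable
```

This only catches agents that identify themselves; the spoofed-UA sessions from step 1 still need the heuristic treatment. But it converts the cooperative fraction of AI traffic from invisible to at least debatable.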

If your stack is built on GA4 plus a single ad platform's reported conversions, this audit will probably find at least one signal you have been treating as noise. From what I have seen, the gap is bigger for brands with strong organic AI referrals than for pure paid social shops, but nobody is fully clean.

Where this lands by Q3

I think the agentic ad story breaks roughly along the same lines GDPR did. Platforms with first-party data and server-side measurement absorb the change and look smarter. The rest watch their ROAS get noisier, and somewhere around late Q3 a major brand publishes a postmortem about why their measured ROAS dropped 20% on flat sales.

Trade Desk knows this is the gap. The Open Agentic Kit is partly a positioning move and partly an attempt to wire measurement into the same agent that did the buying, which would close the loop on at least the campaigns Koa runs. That helps the buyers using Koa. It does not help anyone whose attribution still leans on GA4's Direct bucket and a bot list whose shape has not changed since 2022.

If you are running paid social or programmatic and you have not segmented Direct traffic in the last month, that is the cheapest win sitting on your desk this week. The agents are already in your funnel. Whether you can see them is a tooling decision now, not a strategic one.

Notice Me Senpai Editorial