Adobe Lifted Firefly Citations 5x in a Week. Nobody Knows If It Sold Anything.

AI visibility tools sell brands a measurable proxy for a market they can no longer see directly.

Adobe ran its own LLM Optimizer on its own pages last quarter and watched Firefly citations climb fivefold in seven days. Acrobat's LLM visibility jumped 200%. Adobe.com saw a 41% lift in LLM-referred traffic. These are real numbers from Adobe's own blog post, not vendor marketing.

There is one number Adobe will not publish: how many of those citations turned into a Creative Cloud subscription, an Acrobat upgrade, or a single dollar of incremental revenue. The metric being measured is not the metric that signs the checks.

That gap is the entire problem the AI visibility category is walking into. And two senior ExchangeWire executives just spelled out, on a podcast, exactly why this happens.

The quiet part ExchangeWire said out loud

On a recent MadTech episode summarized by PPC Land, ExchangeWire COO Lindsay Rowntree casually buried fifteen years of programmatic orthodoxy. Her actual quote: "Effectiveness was conflated with the word precision...we just used precision and effectiveness together."

Read that line twice. It is an admission, from inside the industry, that the entire pitch the programmatic supply chain ran on ("you can target the right person at the right moment, therefore your ads work") was sleight of hand. Precision and effectiveness were two different things, and the industry quietly let buyers assume one delivered the other.

CEO Rachel Smith piled on, saying the industry is "absolutely moving away from this kind of hyper targeted" approach toward contextual signals and attention metrics. Translation: cookies are gone, mobile IDs are gone, walled gardens have eaten what's left, and the people who built the precision pitch are quietly getting back to attention buying. The same attention buying their fathers were doing in the 1980s.

The vendors that grew on the rubble

The AI visibility category appeared in the gap. Profound, LLM Refs, Semrush Sensor, Adobe LLM Optimizer, Ahrefs Brand Radar. Pricing runs from $99 a month for the budget end to $699 a month for the enterprise tier, and the pitch is the same across all of them: we will tell you how often your brand shows up in ChatGPT, Claude, Gemini, and Perplexity responses, plus what share of voice you have versus competitors.

Semrush published its own case study showing it grew its AI share of voice from 13% to 32% in a month after running its product on itself. That is a real lift on a real metric. It is also impossible to translate into anything you can put in front of a CFO.

"Share of voice" inside a chatbot output is not what it is inside Google's SERP. PPC Land has reported that all four major LLMs produced completely different calculations on identical prompts, and that 40-60% of cited sources change month to month. The number that went up by 200% might not be there next Tuesday. And the prompts you are scoring against may not be the prompts your actual customers type.

What you end up with, honestly, is a category that looks a lot like SEO tooling from 2007. Real signals, real movement, and a dotted line to revenue that nobody draws because they cannot.

Meanwhile the supply side is locking in the new plumbing

While buyers debate whether $499 a month is worth it for the visibility data, the supply side is rebuilding the rails underneath them.

On March 12, PubMatic announced a partnership with Optable that plugs Optable's Audience Agent directly into PubMatic's AgenticOS. The pitch: publishers' first-party audiences become activatable through AI agents without the data ever leaving the publisher's environment. Buyers send an agent to discover and bid against audiences they can no longer see directly.

This sounds like a privacy upgrade. From a buyer's seat, it is also a control downgrade. Your agent talks to their agent. You see the result, not the path. The optimization knobs you used to turn now belong to a system you do not own and cannot inspect.

Ari Paparo, who tried to build something close to this at Google back in 2010, is one of the few industry voices willing to flag the obvious problem. He called the Ad Context Protocol "brilliant for creative automation, deeply problematic for media buying", and pointed out that the value tends to accrue to the largest publishers and cross-publisher networks. Long-tail buyers, niche advertisers, and anyone whose edge came from ad ops craft just lost their seat at the table.

The bargain marketers are being asked to accept

The industry is quietly asking marketers to swap one form of tracking they could verify, badly, for two forms they cannot verify at all: AI visibility scores measured against a non-deterministic system whose answers shift week to week, and agent-to-agent negotiations executed inside infrastructure marketers neither own nor audit.

This is not necessarily worse than the status quo. The status quo was already cracked. IAS and Mastercard put a recent number on it: most programmatic impressions do not drive incremental sales at all. Targeting precision was a story everyone wanted to believe, and the receipts have been thin for a while.

It is a different bargain, though. The old system sold precision. The new one sells presence. Presence is harder to fake but also harder to bill against. From what I have seen on the buy side this year, most teams have not noticed the swap.

What to do with this in the next thirty days

One. If you are paying for an LLM visibility tool, answer a single question before the next renewal: which line item on the P&L does this number affect? If you cannot answer it in two sentences, the tool is selling you confidence, not outcomes. You can build a workable version of the tracker yourself for around $100, a process we walked through previously. Use that as the test of whether the paid tool is worth the markup.
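For reference, the core of a DIY tracker is small. Below is a minimal sketch: the prompt set, brand list, and the stubbed `ask_llm` are all hypothetical, and in practice the stub would be swapped for a real chat-completion API call (the API spend is most of that rough $100).

```python
import re
from collections import Counter

# Hypothetical prompts. In practice, source these from your own
# search-query and support-ticket data, not from guesses.
PROMPTS = [
    "best PDF editor for contracts",
    "alternatives to Photoshop for social media graphics",
]

BRANDS = ["Adobe", "Canva", "Figma"]

def ask_llm(prompt):
    """Stub standing in for a real chat-completion API call. Swap in
    a real client (OpenAI, Anthropic, etc.); everything else stays."""
    canned = {
        "best PDF editor for contracts":
            "Adobe Acrobat is still the default for contract workflows.",
        "alternatives to Photoshop for social media graphics":
            "Canva and Figma both cover most social-graphics needs.",
    }
    return canned[prompt]

def count_mentions(brands, prompts, ask=ask_llm):
    """Count answers mentioning each brand (case-insensitive, whole word)."""
    counts = Counter()
    for p in prompts:
        answer = ask(p).lower()
        for b in brands:
            if re.search(rf"\b{re.escape(b.lower())}\b", answer):
                counts[b] += 1
    return counts

print(count_mentions(BRANDS, PROMPTS))
```

Run it weekly on the same prompts and log the counts; that log is the benchmark a $499-a-month tool has to beat.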

Two. If you keep the tool, measure delta, not absolute. Score yourself once. Make one structural change. Score again. The absolute number is noise. The change inside your control is the only signal that means anything.
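One way to operationalize "delta, not absolute" is to score repeatedly and gate the observed change on the pre-change noise. A sketch with invented numbers, and a deliberately crude threshold rather than a proper significance test:

```python
from statistics import mean, stdev

# Hypothetical weekly visibility scores from the same prompt set:
# five runs before a structural change, five runs after.
before = [0.13, 0.17, 0.11, 0.15, 0.14]
after  = [0.24, 0.28, 0.22, 0.26, 0.25]

def delta_is_signal(before, after, k=2.0):
    """Crude noise gate: call the shift real only if the move in means
    exceeds k times the pre-change spread. A sketch, not a proper
    significance test."""
    return abs(mean(after) - mean(before)) > k * stdev(before)

print(delta_is_signal(before, after))  # True: ~0.11 shift vs ~0.02 noise
```

Comparing the post-change mean against the pre-change spread is what separates "the model rephrased its answers this week" from "our change moved the number".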

Three. Pull your DSP contract out and search it for "agent" and "agentic." If your DSP is rolling agentic integrations into your account and you do not understand what data is being routed through which agent on whose behalf, you are about to be the smaller party in a negotiation between two systems you did not design.

Four. Run one campaign on pure contextual targeting for thirty days. Same creative, same budget, same landing page, no audiences. You are going to need a baseline for how attention buying actually performs, because the contextual era is coming back whether your team is ready for it or not.

Rowntree's quieter line

One of Rowntree's lower-key quotes stuck with me more than the precision admission. Talking about putting the open web into the hands of agents, she said: "If we put the current version of the open web into...hands [of agents], we're handing over all the problems."

That is the real warning, and it is the part most coverage of the agentic shift skipped right past. The infrastructure being built right now does not fix the broken signal. It hands the broken signal to a different operator at a different price. Most marketers I talk to are going to discover this when their first agentic invoice lands and the line items do not match the strategy they thought they bought.

I think a lot of marketing teams will figure this out about a year after their renewal date. Not because they are slow, but because the invoices land before the diagnostics do.

Notice Me Senpai Editorial