Kevin Indig Found Only 2.37% of AI Citations Survive All Three Engines
Kevin Indig analyzed 3.7 million URL citations across ChatGPT, Perplexity, and Google AI Overviews from Q3 2025 to Q1 2026, drawing from a 20,000-prompt random sample. Only 2.37% of cited URLs appeared in all three engines for the same prompt, and 91.07% appeared in only one. The implication: there is no single "AEO rank," and one optimization play cannot serve all three surfaces.
The 91% number is the headline, not the 2.37%
Most readers anchor on the 2.37% overlap figure because it sounds shocking. The 91.07% single-engine number is the one that should change budgets. It means nine out of every ten citations Indig measured were exclusive to a single engine. If your visibility lives on Perplexity, almost none of it transfers to AI Overviews. If your wins are inside ChatGPT, they probably do not show up when a different engine answers the same question.
The pattern held across four separate prompt cohorts: three 5,000-prompt samples pulled in January 2025, July 2025, and January 2026, plus the larger 20,000-prompt random sample. The variance between samples was small. He also broke it down by content type: 2.3% overlap for guides and tutorials, 1.8% for blog articles, 1.1% for homepages. By query intent: 2.4% for commercial prompts, 2.0% for informational. The skew was consistent rather than noisy.
I think most teams will read this and assume it just means "we need three strategies now." That is the obvious read. The harder read is that two of those three strategies are probably not worth running, and the data tells you which two.
Engines pull from different pools, not the same pool ranked differently
Indig's framing in the Growth Memo piece is that engines draw from largely disjoint citation libraries. The retrieval logic, the trust signals, the format weights are all engine-specific. That is the part most AEO vendor decks gloss over. The standard pitch is "we optimize your content for AI search." But there is no shared internet being ranked. There are three small internets.
The platform skew is dramatic. ChatGPT leans encyclopedic. Wikipedia accounts for roughly 47.9% of ChatGPT's top-10 citations, per Profound's earlier analysis. Perplexity overweights Reddit, which sits at 46.7% of its top-10 share. Google AI Overviews push toward YouTube and multimodal content. NMS covered the Wikipedia weighting problem for ChatGPT brand audits earlier this quarter, and the same logic applies in reverse for Perplexity practitioners staring at a citation graph that looks nothing like the ChatGPT one.
From what I have seen, the agencies that have already internalized this are quietly switching their reporting layer first, before their tactics. The dashboard reframe is cheap. The strategy reframe takes a quarter.
Indig's three-metric replacement
Instead of "AEO visibility," Indig proposes three:
- Presence. The percentage of tracked prompts where your domain appears in any engine. Are you visible at all?
- Portability. The percentage of your cited URLs that appear in all three engines. Which pages survive cross-engine ranking?
- Concentration. The share of your total citations coming from a single engine. How exposed are you to one platform changing its retrieval logic?
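Here is a minimal sketch of all three, assuming a flat citation log with one record per (prompt, engine, cited URL). The field names and engine labels are hypothetical; adapt them to however your tracker exports data.

```python
from collections import defaultdict

# The three engines in Indig's dataset. Labels are placeholders.
ENGINES = {"chatgpt", "perplexity", "ai_overviews"}

def aeo_metrics(citations, domain):
    """citations: iterable of dicts, e.g.
    {"prompt": "...", "engine": "chatgpt", "url": "...", "domain": "..."}
    Returns Presence, Portability, and Concentration for one domain."""
    prompts_seen, prompts_with_domain = set(), set()
    url_engines = defaultdict(set)    # cited URL -> engines that cited it
    engine_counts = defaultdict(int)  # engine -> citation count for this domain

    for c in citations:
        prompts_seen.add(c["prompt"])
        if c["domain"] == domain:
            prompts_with_domain.add(c["prompt"])
            url_engines[c["url"]].add(c["engine"])
            engine_counts[c["engine"]] += 1

    total = sum(engine_counts.values())
    return {
        # Presence: share of tracked prompts where the domain appears at all
        "presence": len(prompts_with_domain) / len(prompts_seen) if prompts_seen else 0.0,
        # Portability: share of cited URLs that survive all three engines
        "portability": (sum(1 for e in url_engines.values() if e >= ENGINES)
                        / len(url_engines)) if url_engines else 0.0,
        # Concentration: share of total citations owned by the biggest single engine
        "concentration": max(engine_counts.values()) / total if total else 0.0,
    }
```

A Concentration above 0.70 is the platform-risk flag this piece closes on.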
Portability is the metric most worth chasing because it identifies the small set of pages that earn citations across the board. A page that gets cited by all three is doing something the others are not. Those pages are the templates worth cloning, and there will be far fewer of them than your CMO expects.
Concentration is the warning metric. If 80% of your AI citations come from Perplexity, you are one platform pivot away from losing most of your AI search surface. That is the variant of platform risk nobody on the marketing org chart owns yet. We saw a softer version of the same problem when Google quietly killed FAQ rich results 33 months after strangling them, and most teams found out from a traffic drop, not a release note.
Presence is the vanity metric, which is exactly why it dominates most current dashboards. Indig's earlier framing in Search Engine Land hinted at this distinction without naming it, but the new dataset is what makes the case unambiguous. A blended visibility score lets a vendor show "you're up 14% this quarter" while you are invisible in two of three engines. The math hides the exposure.
The audit to run this week
If you have any AI visibility tracking in place (Profound, Otterly, the manual prompt logs people stitched together in a spreadsheet), pull the last 60 days of citations and bucket each cited URL by how many engines cited it; a short script for the bucketing follows the list. You will end up with three piles:
- Universal (3 engines): rare, your most valuable assets. Treat them like templates.
- Pair overlap (2 engines): tactical opportunity. These are the pages closest to becoming Universal with editorial work.
- Single-engine only: 91% of what you have. Audit the concentration risk this represents.
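A hedged sketch of that bucketing, using the same assumed log shape as the metrics snippet above; filter the log to your own domain's citations before passing it in.

```python
from collections import defaultdict

def bucket_urls(citations):
    """Bucket each cited URL by how many distinct engines cited it."""
    engines_per_url = defaultdict(set)
    for c in citations:
        engines_per_url[c["url"]].add(c["engine"])

    piles = {"universal": [], "pair": [], "single": []}
    for url, engines in engines_per_url.items():
        key = {3: "universal", 2: "pair"}.get(len(engines), "single")
        piles[key].append(url)
    return piles
```

If Indig's distribution holds for your domain, expect the single pile to hold roughly nine in ten of your cited URLs.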
Then look at what your Universal pages have in common. From what I have seen in a couple of audits this quarter, they usually share three traits: a non-trivial standalone definition or framework near the top, structured comparison content (tables, ordered comparisons), and at least one third-party citation that has been picked up elsewhere on the open web. None of those are formatting tricks. They are editorial choices, which is harder to template and harder to sell as a service.
A separate Ahrefs study covering 1,885 pages that added schema markup found essentially zero lift in AI citations after the rollout. Schema is not the missing variable. The missing variable is whether the page actually defines something and disagrees with consensus on a specific axis, which is harder to bolt onto an existing content workflow than a JSON-LD block.
Why the "be the consensus answer" advice is quietly broken
The dominant AEO advice through 2025 was "be the consensus answer." Indig's data quietly contradicts this. Because the engines are not converging on a consensus, being the cleanest restatement of common knowledge gets you outranked by Wikipedia (inside ChatGPT) or Reddit (inside Perplexity), since those properties already own the encyclopedic and conversational slots respectively. You cannot out-Wikipedia Wikipedia. You also cannot out-Reddit Reddit.
I would not call this a contrarian-content thesis exactly. It is closer to: write a page that is the only credible source for a specific subclaim, structure it so retrieval can lift it cleanly, and accept that 91% of your AI citations will still belong to a single engine no matter what you do. That last part is the bit most teams refuse to internalize, which is why they keep funding "universal AEO" strategies that the data does not support.
On paper, "different engines, different strategies" sounds like a license to do more work. In practice, it is permission to do less. Drop the surfaces you are losing on, double down on the page templates already earning Universal citations, and stop measuring AEO as one blended number.
What I would watch from here
The data was clean through Q1 2026, with cohorts pulled at three distinct points. The interesting question is whether the overlap gap narrows. If Google starts surfacing Reddit and Wikipedia more aggressively in AI Overviews, which it has been edging toward, the 2.37% overlap could shift fast. The number that matters next quarter is whether Portability moves more than two points. Anything above that suggests engines are starting to converge on a shared citation graph. Anything less and the fragmentation thesis holds.
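If you want that watch item as a dashboard check rather than a mental note, here is a toy version. The two-point threshold is from the paragraph above; the function name is mine.

```python
def convergence_signal(portability_prev_pts, portability_curr_pts, threshold_pts=2.0):
    """Portability expressed in percentage points, e.g. 2.37 for 2.37%."""
    return ("converging" if portability_curr_pts - portability_prev_pts > threshold_pts
            else "fragmentation holds")
```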
Personally, I would not bet on convergence in the next two quarters. The retrieval architectures are too different, each platform has a commercial incentive to keep its citation graph distinct, and the trust signals each engine weighs are getting more proprietary, not less. NMS already noted that Ahrefs's AEO frameworks were calibrated for ChatGPT in ways that hurt Perplexity performance, and that gap has not closed.
If you do one thing this week from this, audit your Concentration. If a single engine is producing more than 70% of your AI citations, that is a platform-risk number worth bringing into the next planning meeting. The pages do not need to change yet. The dashboard does.
Notice Me Senpai Editorial