ChatGPT 5.4 Cut Per-Answer Citations From 19 Domains to 15

OpenAI's March 4 swap from GPT-5.3 to 5.4 reshaped the web.run tool surface and quietly compressed citations by four domains per answer.

OpenAI switched ChatGPT Search to GPT-5.4 on March 4, 2026, replacing the text-command web.run tool with a JSON-based interface. RESONEO co-founder Olivier de Segonzac tracked 400 daily prompts for 14 weeks and found that the average number of unique domains cited per response dropped from 19 to 15, a 21% decline. Answer engine optimization (AEO) now hinges on fan-out branch coverage, not single-query optimization.

The JSON tool format that nobody documented

Before March 4, ChatGPT Search shipped queries to its internal web.run tool as compact text strings: fast|query|recency. After the GPT-5.3 to 5.4 swap, those same calls became structured JSON objects with typed parameters, and the tool surface area grew to 12 operations. The new set includes search_query, open, find, click, screenshot, and a product_query action that handles shopping prompts on its own track.
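To make the shape change concrete, here's a minimal sketch of the before-and-after. The `fast|query|recency` string and the operation names come from the article; the JSON envelope, key names, and parameter labels are my assumptions, since OpenAI never documented either format.

```python
import json

# Pre-March compact text command: mode|query|recency (format per the article;
# the example query and "30d" recency value are invented for illustration).
OLD_CALL = "fast|best crm for law firms|30d"

def parse_old_call(raw: str) -> dict:
    """Split the pre-March text command into its three positional parts."""
    mode, query, recency = raw.split("|")
    return {"mode": mode, "query": query, "recency": recency}

def to_json_call(parsed: dict) -> str:
    """Re-express the same call as a typed JSON tool invocation.
    Operation names like search_query are from the article; the envelope
    structure (operation/params keys) is a hypothetical reconstruction."""
    return json.dumps({
        "operation": "search_query",
        "params": {
            "q": parsed["query"],
            "recency": parsed["recency"],
            "mode": parsed["mode"],
        },
    })

print(to_json_call(parse_old_call(OLD_CALL)))
```

The practical difference for anyone scraping: a pipe-delimited string can be matched with a one-line regex, while a typed JSON object with 12 possible operations needs a real parser, which is why the old console-scraping tools broke overnight.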

OpenAI shipped none of this in the public docs. Segonzac reconstructed it from network packet captures, honeypot pages designed to log bot behavior, and access logs from Oncrawl's Jérôme Salomon. The picture is consistent across his five evidence streams, which is more validation than most AEO claims survive.

The practical implication: tools that scraped the old text-command format from the browser console stopped working overnight. Nectiv published a Python script that pulls fan-out data via the OpenAI API instead, since GPT-5.4 hides those queries in the web interface. If your AEO vendor still claims to "see" ChatGPT's sub-queries through the browser, ask them what they switched to in March. The good ones already swapped to API-side capture and forgot to publish about it.

Why GPT-5.4 chains 5 to 10 search rounds while GPT-5.3 Instant chained 2 or 3

Fan-out chaining is the part that breaks the old SEO mental model. A user types one question. ChatGPT internally generates anywhere from 5 to more than 10 sub-queries on GPT-5.4, hits the search index for each one, opens different pages from different result sets, and stitches the citations into one answer. GPT-5.3 Instant only ran 2 or 3 rounds. The extended-reasoning paths run far more, depending on the prompt.
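The loop above can be sketched as a toy model. The qualifier list stands in for whatever rewriting the model actually does internally, which is not public; the point is the stitch-and-dedupe step at the end, where many result sets collapse into one citation list.

```python
def fan_out(prompt: str, n_subqueries: int, qualifiers: list[str]) -> list[str]:
    """Toy fan-out: one prompt becomes several more specific sub-queries.
    The qualifier strings are placeholders for the model's real rewrites."""
    return [f"{prompt} {q}" for q in qualifiers[:n_subqueries]]

def cited_domains(subquery_results: dict[str, list[str]]) -> list[str]:
    """Stitch each sub-query's result domains into one deduped citation
    set, preserving first-seen order, as the article describes."""
    seen, out = set(), []
    for domains in subquery_results.values():
        for d in domains:
            if d not in seen:
                seen.add(d)
                out.append(d)
    return out
```

Note what this implies for optimization: a page only enters the answer through the specific sub-query branch it ranks for, so a domain absent from every branch's result set never reaches the dedupe step at all.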

Peec AI's analysis of 20 million fan-out queries between October 2025 and January 2026 also showed the average sub-query doubled in length, from roughly 6 words to roughly 12. Around week 49 the peak hit 16 words. Translation: ChatGPT is asking longer, more specific questions of the index now, which means the page that ranks for "best CRM" is no longer the page that gets cited for "best CRM for boutique law firms under 20 seats with Outlook calendar sync."

We covered something adjacent last month in Omniscient's finding that 48% of AI brand citations live off your own site. Pair that with longer, more specific fan-outs and the math gets uncomfortable. Most brand sites optimize for a clean entity page. Fan-out coverage rewards the long tail of comparison and qualifier pages that brands historically deprioritized because they didn't ladder neatly to a head term.

browse_rewritten_queries: the product fan-out OpenAI didn't ship in docs

Segonzac flagged an undocumented fan-out type that only fires on product prompts in GPT-5.4 Instant, labeled browse_rewritten_queries. It runs separate retrieval commands per product candidate, not per user intent. This is consistent with Search Engine Land's earlier finding that ChatGPT sources 83% of carousel products from Google Shopping via shopping-specific fan-outs, but the new wrinkle is that the per-product rewrite happens before the search hits, and it's invisible from the browser.
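The per-candidate-versus-per-intent distinction is easier to see side by side. This is a purely illustrative sketch: the rewrite wording and qualifier templates are invented, since the actual browse_rewritten_queries payloads were only observed, not published.

```python
def intent_fan_out(prompt: str) -> list[str]:
    """Standard fan-out: sub-queries slice the *user's* intent.
    Qualifier templates are invented for illustration."""
    return [f"{prompt} comparison", f"{prompt} pricing", f"{prompt} reviews"]

def product_fan_out(candidates: list[str]) -> dict[str, str]:
    """browse_rewritten_queries, as the article describes it: one retrieval
    command per *product candidate*. The rewrite wording is an assumption."""
    return {c: f"{c} specs price review" for c in candidates}

# With intent fan-out, your page competes on slices of the user's question.
# With product fan-out, the model has already picked candidates, and your
# page competes on queries *about each candidate* instead.
print(intent_fan_out("best running shoes"))
print(product_fan_out(["Acme Glide 3", "Bolt Runner X"]))
```

That second dict is why the original prompt can stop mattering: once the rewrite step fires, retrieval is keyed to candidate names, and whichever page wins the per-candidate query takes the citation.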

For ecommerce, the practical effect is that your Shopify product page can be the best match for the original prompt and still lose the citation, because the rewrite step routes ChatGPT to a comparison roundup that happens to mention you in passing. I'd been skeptical that "the citation goes to the listicle, not the product" was a stable pattern. After seeing the per-product rewrite, it looks structural.

If you run paid retail media, the Google Shopping dependency also matters. Practical Ecommerce noted ChatGPT generates 8 to 15 sub-queries per shopping prompt, each pointing at a different intent slice. Losing one of them is fine. Losing the rewrite layer probably isn't.

Why 4 fewer domains per answer is a structural compression, not a tuning bug

Going from 19 unique domains cited to 15 sounds like a tuning change. It isn't. The fan-out count went up. The number of cited domains per answer went down. That means ChatGPT is now deduping more aggressively across rounds and concentrating citations on a smaller, higher-trust set.
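The arithmetic behind "structural compression" is worth making explicit. The decline percentage comes straight from the article's 19-to-15 numbers; the per-round figures in the concentration metric are illustrative stand-ins, not measured values.

```python
def decline_pct(before: float, after: float) -> float:
    """Percentage drop from before to after, rounded to one decimal."""
    return round(100 * (before - after) / before, 1)

def concentration(rounds: int, domains_per_round: int, unique_cited: int) -> float:
    """Rough concentration metric: candidate domain slots generated across
    all fan-out rounds, divided by unique domains that survive into
    citations. Higher means more aggressive dedupe and consolidation."""
    return (rounds * domains_per_round) / unique_cited

print(decline_pct(19, 15))  # → 21.1
# Illustrative: GPT-5.3-era (3 rounds, 19 cited) vs GPT-5.4 (8 rounds, 15 cited),
# assuming ~4 candidate domains surfaced per round.
print(concentration(3, 4, 19), concentration(8, 4, 15))
```

Under those assumed per-round numbers, the slots-to-citations ratio more than triples, which is the shape of the change even if the exact per-round counts differ: more retrieval feeding fewer winners.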

Connect that to Kevin Indig's earlier finding that only 2.37% of AI citations survive across all three engines and the picture clarifies. The citation pool was already thin. Now ChatGPT 5.4 is making it thinner inside its own response. The structural read: AEO is becoming a winner-takes-most game inside each fan-out branch, and more domains are settling for the consolation prize of "ranked but not cited."

From what I've seen across the AEO chatter on r/SEO over the past month, agencies are still selling AEO as a per-query exercise. The per-query exercise was already a stretch in late 2025. Post-March 4, it's mostly theater. The job is fan-out branch coverage: making sure your domain shows up for the qualifier and comparison rewrites of the prompts that matter to your category, not the head term itself.

The fan-out audit that fits in a Tuesday afternoon

If you want a concrete starting move, this is what I'd run this week. Pick your top 5 commercial prompts. For each, pull the actual fan-out sub-queries from the OpenAI API using Nectiv's script or your AEO vendor's equivalent. Then audit two things: how many of those sub-queries return a page from your domain in the top 10 of a Bing search (the engine ChatGPT-User pings under the hood), and whether each landing page is pre-rendered HTML, since the ChatGPT-User bot doesn't execute JavaScript.
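The first check in that audit can be scripted against a web search API. A minimal sketch below parses a Bing Web Search v7-style response (the `webPages.value` hit list is the documented shape); the live-call portion assumes you still have programmatic Bing access and a valid subscription key, and the `subquery`/`KEY` names are placeholders.

```python
from urllib.parse import urlparse

def domain_in_results(results: dict, domain: str, top_n: int = 10) -> bool:
    """Check whether `domain` appears in the top-N organic hits of a
    Bing Web Search v7-style response. Suffix matching is a rough
    heuristic that also accepts subdomains like www.example.com."""
    hits = results.get("webPages", {}).get("value", [])[:top_n]
    return any(urlparse(h["url"]).netloc.endswith(domain) for h in hits)

# Live usage sketch (requires the `requests` package and an API key;
# endpoint and header name are the documented Bing v7 values):
#
# import requests
# resp = requests.get(
#     "https://api.bing.microsoft.com/v7.0/search",
#     headers={"Ocp-Apim-Subscription-Key": KEY},
#     params={"q": subquery, "count": 10},
# )
# print(domain_in_results(resp.json(), "example.com"))
```

Run `domain_in_results` once per fan-out sub-query and the coverage number you get back is the audit's first metric: branches covered over branches total.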

Most brand sites I've checked fail on the second test alone. A React SPA might rank fine in Google but return an empty shell to ChatGPT-User, which kills the citation chance before the fan-out logic even gets a vote. Pre-rendering is the cheapest fix on this list and the one that compounds across every future model swap, because the bot identity carries forward even when the tool format doesn't.
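The second audit check, the empty-shell test, is cheap to approximate locally: strip scripts and tags from the raw HTML and measure how much visible text survives, which is roughly what a non-JavaScript bot sees. The 500-character threshold is an arbitrary heuristic, not a known ChatGPT-User cutoff.

```python
import re

def looks_prerendered(html: str, min_text_chars: int = 500) -> bool:
    """Crude server-side-render check: drop script/style blocks and tags,
    then see how much visible text remains. An SPA shell that hydrates
    via JavaScript typically leaves almost none. Threshold is a guess."""
    no_scripts = re.sub(r"(?s)<(script|style)[^>]*>.*?</\1>", "", html)
    text = re.sub(r"<[^>]+>", "", no_scripts)
    return len(" ".join(text.split())) >= min_text_chars

# A typical React shell fails the check: all content lives behind app.js.
shell = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
print(looks_prerendered(shell))  # → False
```

Fetch each audit landing page with a plain HTTP client (no headless browser, so nothing executes) and feed the body through this check; any page that fails is invisible to the bot regardless of how well it ranks.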

The bigger lesson from Segonzac's reverse-engineering is that the public AEO conversation is roughly six months behind the actual tool surface. Anyone shipping an AEO product without packet captures and access logs is mostly guessing. The good news is that the guessing gap is also the opportunity. Brands that audit their fan-out coverage and pre-render their pages will land citations in branches where the rest of the category is invisible, at least until the next undocumented model swap quietly rewrites the rules again. I'd start the audit Tuesday.

Notice Me Senpai Editorial