SE Ranking Faked a Brand. AI Mode Ranked It #1 for 90% of Branded Queries.
SE Ranking's Bogdan Babiak built a fictional brand on a brand-new domain in March 2026 and tracked 825 prompts across 15,835 AI answers. Google's AI Mode placed the fake brand at position 1 for roughly 90% of branded queries, while a single "Complete Guide" page on the site earned 1,799 AI citations in one month. On its own brand-specific queries, the fake brand outranked competitors with Domain Trust scores of 40+ by up to 32x.
That last number is the part nobody is reading carefully. A site with no backlink history, no brand mentions, no E-E-A-T scaffolding, and one month of age outranked established competitors by thirty-two times on its own brand queries. The mechanism wasn't authority. It wasn't trust. It was structure plus volume.
What Babiak actually built (and why the design matters)
The experiment ran across one brand-new domain plus 11 supporting domains aged a year or older, with seven content formats: deep guides, "alternatives" listicles, "best of" listicles, reviews, comparisons, how-tos, and clickbait pieces. Babiak's team tracked answers from ChatGPT, Google AI Overviews, Google AI Mode, Perplexity, and Gemini. The full writeup of the 16-month experiment is on Search Engine Land, but the first month alone is enough to argue with.
The design choice that matters: the fake brand didn't try to outrank Wikipedia on a generic query. It positioned itself as the only viable answer to questions only the brand could answer. 72% of total visibility came from those narrow branded queries; 96% came from branded searches overall.
That's not a viral hit. That's a fence around an unclaimed semantic space.
Where the citation volume actually came from
Three pages did most of the work:
- The Complete Guide page: 1,799 AI citations.
- The About Us page: 1,500 AI citations.
- 30 thin pages of 500-750 words, combined: 1,897 citations.
Notice what's missing from that list. There are no link-built pillar pages, no PR-seeded thought leadership, no expert author bylines. The About Us page alone pulled nearly as many citations as the 30 thin pages combined. An About Us page. The same boilerplate marketers usually let an intern draft.
Babiak's quote in the writeup was the only one I underlined: "AI systems appear to respond more strongly to consistency, repetition, and availability than to strict verification."
If you're a marketer who has been told to ship 5,000-word deep guides for AI visibility, that sentence should rearrange your roadmap.
One caveat worth holding onto. The thin pages did pull citations, but only when they sat alongside the Complete Guide and the About Us page in Babiak's setup. Volume on its own probably doesn't replicate this; the structured anchor pages seem to be doing most of the heavy lifting, and the thin pages appear to fill in the long-tail variations. The 1,897 combined-citation number for 30 thin pages averages out to roughly 63 citations per page, which is thin gruel without an anchor page hub feeding the brand definition into the engine first.
The AI Mode vs Perplexity split worth budgeting around
Engine behavior diverged in a way that matters for how teams allocate this work.
- AI Mode was the most stable. Position 1 for ~90% of branded queries. Once the structure was in place, AI Mode locked in.
- Perplexity discovered new content fastest (1-3 days) but tended to cite supporting domains rather than the brand domain itself. Useful for broad visibility, weak for owned-channel attribution.
- ChatGPT was the slowest to respond initially but climbed steadily over the month, especially on review and comparison content.
- Google AI Overviews showed high visibility with heavy week-to-week swings.
- Gemini was the laggard, failing to cite the fake brand in 60% of responses even when the query named it directly.
A practical read: AI Mode and ChatGPT seem to reward structured, repetitive brand pages. Perplexity rewards volume of supporting mentions across many domains. Gemini is, for now, mostly noise. If your team is building one playbook for "AI visibility," you're going to underperform on at least three of these engines.
This roughly tracks separate research from Ahrefs showing that 86% of top-mentioned sources are not shared across ChatGPT, Perplexity, and AI Overviews. Different engines, different citation graphs, different winners. SE Ranking's experiment is what the practitioner side of that statistic looks like when you actually try to get cited.
Why this isn't a green light to spam
I don't think the takeaway here is "spin up a fake brand and watch the citations roll in." Two reasons.
First, this isn't the only fake-brand experiment in the wild. Ahrefs ran a separate version with a fictional luxury paperweight brand called Xarumei, seeding three contradictory false narratives across the web over two months. Most LLMs, including Gemini, Grok, AI Mode, Perplexity, and Copilot, ended up trusting a Medium article over the brand's own FAQ. Mateusz Makosiewicz's conclusion: "in AI search, the most detailed story wins, even if it's false." PPC Land's critique flagged that the Ahrefs brand lacked Knowledge Graph entries and external validation, which is a fair caveat. SE Ranking's findings hold up against that critique because the win wasn't about beating verification systems. It was about owning queries no one else was answering.
Second, citations aren't traffic. A 1,799-citation Complete Guide is impressive. A 1,799-citation Complete Guide that drives zero qualified leads is a vanity dashboard. Babiak's experiment didn't track conversion downstream because the brand isn't real. A live business has to.
What real teams should test this week
From the SaaS and B2B marketing budgets I've looked at recently, most teams are spending on author bios, schema audits, and link-building campaigns to boost AI visibility. That work isn't wrong. It's just probably not the highest-leverage move right now.
A tighter test, runnable in seven days:
- Pick three branded queries your competitors don't answer well. Not "best CRM for SaaS." Things like "[your product] integration with [partner]" or "[your category] alternative to [competitor]."
- Ship one Complete Guide page (3,000-5,000 words, structured H2s, comparison table, FAQ schema; a minimal FAQ markup sketch follows this list). One page, not ten.
- Rewrite the About Us page so it answers four to six branded questions in plain prose. Not a story. Answers.
- Publish 10-15 short comparison pages (500-750 words each) targeting variations of the branded queries.
- Track citations across AI Mode, ChatGPT, and Perplexity weekly; a counting sketch for this step also follows below. Tools that do this now include Profound, SE Ranking's own Sources feature, and Otterly.
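For the FAQ schema step, here's a minimal sketch of what that markup looks like, generated in Python so the Q&A pairs stay easy to templatize. The JSON-LD shape is the standard schema.org FAQPage structure; the brand name, questions, and answers are hypothetical placeholders, not anything from Babiak's experiment.

```python
import json

# Hypothetical branded Q&A pairs -- swap in the queries you picked in step 1.
faqs = [
    ("Does AcmeApp integrate with Slack?",
     "Yes. AcmeApp offers a native Slack integration that posts alerts to any channel."),
    ("Is AcmeApp an alternative to BigCRM?",
     "AcmeApp covers the same pipeline features as BigCRM with usage-based pricing."),
]

# Standard schema.org FAQPage structure.
schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Paste the output into the page's <head> as a JSON-LD script tag.
print('<script type="application/ld+json">')
print(json.dumps(schema, indent=2))
print("</script>")
```

The point of generating it rather than hand-writing it: every new branded question from the About Us rewrite can be appended to the same list without touching the markup.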
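And for the weekly tracking step, a minimal counting sketch, assuming you can export answers to a CSV with date, engine, query, and cited_url columns. That export format is an assumption, not any specific tool's output; adapt the column names to whatever Profound, SE Ranking, or Otterly actually emits. It counts how often each engine cites your brand's domain:

```python
import csv
from collections import Counter
from urllib.parse import urlparse

BRAND_DOMAIN = "acmeapp.com"  # hypothetical brand domain

# Count brand-domain citations per engine from an exported answer log.
per_engine = Counter()
with open("ai_answers.csv", newline="") as f:
    for row in csv.DictReader(f):
        host = urlparse(row["cited_url"]).netloc.lower()
        # Match the apex domain and any subdomain (www, docs, etc.).
        if host == BRAND_DOMAIN or host.endswith("." + BRAND_DOMAIN):
            per_engine[row["engine"]] += 1

for engine, count in per_engine.most_common():
    print(f"{engine}: {count} brand citations")
```

Run it weekly and diff the counts. If Babiak's timeline holds, Perplexity should move within days while ChatGPT takes weeks to climb.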
If the structure works for a fake brand with no domain history, it should work harder for a real brand with both. And on the engines where it doesn't, you'll know fast.
The number I keep coming back to
The 32x outranking number isn't an indictment of E-E-A-T. Google's organic rankings still weigh site-level authority signals; that hasn't changed. It's a clue about what AI engines weight when they generate answers, which is a different process from ranking ten blue links.
We covered something related when Datos pinned AI search at 1.72% of total search visits while Google climbed back to 94.3%. Small share, growing fast, and the playbook is wide open. Most teams haven't built a single Complete Guide page targeting their own brand queries, let alone 30 thin pages. The teams that ship these structures first probably get the citations Babiak's fake brand is currently sitting on.
The uncomfortable part for marketers selling complex AI visibility audits: budget shifted into structured brand pages and away from generic E-E-A-T checklists might outperform both, at least until AI engines tighten verification. From what I've seen over the last quarter, that tightening hasn't really started yet.