Ahrefs's 4 AEO Frameworks Are Calibrated for ChatGPT, Not Perplexity
Ahrefs published four AEO writing frameworks (BLUF, declarative statements, entity density, and strategic repetition) on May 8, 2026, citing Kevin Indig's research that 44.2% of ChatGPT citations come from the first 30% of a page. The frameworks are accurate, but they are mono-platform. ChatGPT pulls roughly 47.9% of its citations from Wikipedia, while Perplexity pulls 46.7% from Reddit. The same on-page tactic does not earn citations on both engines.
That gap is the part the Ahrefs piece does not address, and it is the part most marketers have to decide about this week.
What the Indig data actually measured
The numbers under all four frameworks come from one body of research. Kevin Indig analyzed 3 million ChatGPT responses and 30 million citations, isolating 18,012 verified citations by matching ChatGPT outputs to source sentences using sentence-transformer embeddings. Search Engine Land's writeup of the study is the cleanest summary if you have not read the original.
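The matching step in that methodology is easy to picture in code. A minimal sketch of the idea, using a toy bag-of-words vector as a stand-in for real sentence-transformer embeddings (the similarity threshold and the example sentences are illustrative assumptions, not Indig's actual pipeline):

```python
from collections import Counter
import math

def embed(sentence):
    """Toy bag-of-words vector; a stand-in for sentence-transformer embeddings."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_citation(answer_sentence, source_sentences, threshold=0.6):
    """Return the source sentence most similar to the model's output, if above threshold."""
    scored = [(cosine(embed(answer_sentence), embed(s)), s) for s in source_sentences]
    best_score, best = max(scored)
    return best if best_score >= threshold else None

# Illustrative sentences, not real audit data.
sources = [
    "Answer engine optimization is defined as structuring pages for AI citation.",
    "Our pricing starts at twenty nine dollars per month.",
]
print(match_citation(
    "Answer engine optimization is defined as structuring pages for AI citation.",
    sources,
))
```

The real study did this at the scale of 30 million citations with learned embeddings; the point of the sketch is just that "verified citation" means an output sentence that scores above a similarity threshold against a source sentence.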
Three of the four Ahrefs frameworks lean directly on that dataset:
- BLUF sits on the 44.2% finding (citations cluster in the first third).
- Declarative statements sit on Indig's 36.2% vs 20.2% finding (cited passages are nearly twice as likely to use definitional language like "is defined as" or "refers to").
- Entity density sits on his 20.6% vs 5–8% finding (cited passages name roughly three to four times more proper nouns than standard prose).
The fourth, strategic repetition, leans on Dan Petrovic's work showing LLMs retrieve snippets rather than reading whole pages. Different methodology, same instinct.
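The entity-density finding is the easiest of the four to turn into a self-audit. A rough sketch, using mid-sentence capitalization as a cheap proxy for proper nouns (a real audit would use an NER model; the example passages are invented):

```python
def entity_density(passage):
    """Share of tokens that look like proper nouns (capitalized, not sentence-initial)."""
    tokens = passage.split()
    if not tokens:
        return 0.0
    proper = 0
    prev_end = True  # start of passage counts as a sentence start
    for tok in tokens:
        word = tok.strip('.,;:!?"()')
        if word and word[0].isupper() and not prev_end:
            proper += 1
        prev_end = tok.endswith(('.', '!', '?'))
    return proper / len(tokens)

cited = "Ahrefs and Semrush both track citations from ChatGPT and Perplexity."
generic = "many tools track citations from answer engines these days."
print(entity_density(cited), entity_density(generic))
```

Run against your own drafts, a gap like the 20.6% vs 5–8% spread Indig reports should be visible even with a heuristic this crude.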
So far, so good. The problem is the dataset. All 18,012 citations come from ChatGPT. None come from Perplexity, none from Google AI Overviews, none from Claude. Ahrefs treats the patterns as universal AEO rules. They are universal ChatGPT rules.
Where the frameworks hold up cleanly
For ChatGPT specifically, the four frameworks are probably the cleanest on-page advice anyone has published this year. ChatGPT's grounding diet is Wikipedia-heavy and definitional. Pages that lead with the answer, define their terms, name their entities, and repeat the key claim mid-page and at the close align with how OpenAI's retrieval is tuned.
If 87% of your AI referral traffic is coming from ChatGPT (which a lot of mid-funnel B2B sites are seeing right now per Profound's platform tracking), the four-framework playbook is roughly the right thing to do. Skip nothing.
Where it gets messier is when a meaningful share of your visibility lives outside ChatGPT.
Perplexity rewards a different page
Perplexity pulls 46.7% of its top citations from Reddit, nearly 2x its citation rate from Wikipedia, according to Profound's analysis of 680 million citations. That is not because Reddit happens to rank well; it is because Perplexity is tuned to surface community Q&A. Real people answering real questions in threads.
The implication for the frameworks:
- Declarative language matters less. Reddit threads do not usually open with "X is defined as." They open with "I tried this and here is what happened." Perplexity is happy to cite that.
- Entity density still matters, but the entities that score are products and tools named in context, not encyclopedia-style proper-noun packing.
- BLUF still helps because Perplexity grabs from snippet positions, but the "first sentence" that wins is closer to "TL;DR for impatient lurkers" than "definition for an encyclopedia entry."
This is the same gap covered in "Wikipedia, Reddit, and G2 Drive More AI Citations Than Your Own Pages Do": if your strategy stops at on-page rewrites, you are leaving Perplexity's top citation surface untouched.
Google AI Overviews are still mostly Google rankings
Google AI Overviews and AI Mode are tightly coupled to traditional Google rankings. Pages ranking #1 in Google get cited by ChatGPT 43.2% of the time per Indig's data, but inside Overviews the dependency is even tighter. Multimodal weight is real here: Overviews lean on YouTube and image content disproportionately.
From what I have seen, the four frameworks do help inside Overviews, but only as a tiebreaker. If your page is not already ranking in the top 10 organically, declarative language is not going to drag you in. Cleaning up rankings has to come first. The framework rewrite earns its lift after.
Worth noting: Google's recent Overview link expansion didn't loosen the underlying citation lock, which is why on-page rewrites alone aren't doing what teams hoped.
Claude synthesizes, which changes what gets retained
Claude tends to synthesize across sources rather than quote them directly. That makes the BLUF framework still useful (Claude reads the top of pages preferentially), but the entity-density framework needs adjustment. Claude does not need you to pack proper nouns; it needs your page to be the cleanest, most internally consistent source on a given subtopic.
Long-form, comprehensive pieces tend to get pulled more often, which is the opposite of the snippet-heavy briefing format Indig found ChatGPT rewards. So if you rewrote a guide into a tight, BLUF-first briefing for ChatGPT and watched your Claude citations drop, that is probably why.
The audit that actually matters this quarter
The honest version of the four-framework advice is: run it, but tag your gains by platform.
One practical sequence:
1. Pick five priority pages. Use a tool like Otterly at the cheap end (~$29/month for the smaller stack) or Profound at the enterprise end to baseline citation share across ChatGPT, Perplexity, Google AI Overviews, and Claude. Do this before you rewrite anything.
2. Rewrite for ChatGPT first using the four frameworks as written. ChatGPT is still the largest AI referral surface for most content sites, so the highest expected lift lives here.
3. For pages where Perplexity is meaningful, treat your Reddit and forum surface as a separate AEO project. Editing your own page will not fix it. Seeding answers that name your tools in real threads (without spamming) is the actual lever.
4. For Google AI Overviews, audit organic ranking before auditing anything else on the page. Frameworks help only if you're already in the top 10.
5. For Claude, prefer keeping or restoring depth on the few pages where Claude citations matter for your funnel. Briefing-format compression hurts you here.
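Tagging gains by platform, as the baseline step suggests, only needs a simple before/after ledger per page. A sketch with made-up numbers (the counts and platform keys are placeholders, not real audit data):

```python
PLATFORMS = ["chatgpt", "perplexity", "google_aio", "claude"]

def citation_share(counts):
    """Convert raw citation counts per platform into shares of the total."""
    total = sum(counts.get(p, 0) for p in PLATFORMS)
    return {p: counts.get(p, 0) / total if total else 0.0 for p in PLATFORMS}

def tag_gains(before, after):
    """Per-platform share delta for one page, so a rewrite's lift is attributable."""
    b, a = citation_share(before), citation_share(after)
    return {p: round(a[p] - b[p], 3) for p in PLATFORMS}

# Placeholder numbers for one page, pre- and post-rewrite.
before = {"chatgpt": 12, "perplexity": 8, "google_aio": 3, "claude": 2}
after = {"chatgpt": 20, "perplexity": 7, "google_aio": 3, "claude": 1}
print(tag_gains(before, after))
```

A ledger like this is what makes the "ChatGPT rewrite lifted ChatGPT but cost us Claude" pattern visible instead of anecdotal.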
If a benchmark helps: most B2B sites we see in audit data have meaningful Perplexity exposure once they cross ~10K monthly organic visits, and meaningful Claude exposure only on long-tail technical or research-heavy pages. If your traffic profile does not fit either, you can probably skip steps 3 and 5 and just run the ChatGPT playbook hard.
One messy aside before the close
It is fair to push back on this whole framing. Maybe in twelve months retrievers all converge and the platform-specific advice collapses back to one playbook. Anthropic has been hinting at agentic browsing changes, OpenAI is still rebalancing what gets weighted in retrieval, and Perplexity's Reddit dependency may dilute as the data partnership economics shift. None of that is a reason to skip the per-platform audit now. It is a reason to bake the audit into something you re-run quarterly, not as a one-time project.
What to put on the calendar this week
Pull a baseline citation report for your top 10 commercial-intent pages across at least ChatGPT and Perplexity before you queue any AEO rewrites. If those two engines disagree about who deserves citation share on a given query, the four-framework rewrite is going to help one and quietly under-serve the other. The framework Ahrefs published is real. It is just one platform's ruleset, written without that label.
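The "two engines disagree" check can be made concrete once you have per-query citation data from whatever tracker you use. A sketch (the query list and booleans are invented, not tool output):

```python
def disagreements(report):
    """Queries where ChatGPT and Perplexity disagree on citing your page."""
    return [
        q for q, cited in report.items()
        if cited.get("chatgpt") != cited.get("perplexity")
    ]

# Invented tracking output: which engine cites your page for each query.
report = {
    "best aeo tools": {"chatgpt": True, "perplexity": False},
    "what is aeo": {"chatgpt": True, "perplexity": True},
    "aeo vs seo reddit": {"chatgpt": False, "perplexity": True},
}
print(disagreements(report))
```

Every query that comes back from a check like this is a page where the four-framework rewrite will help one engine and quietly under-serve the other.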
Notice Me Senpai Editorial