Frank Olivo's Framework for the ChatGPT SEO Email Every Agency Now Gets
Frank Olivo published a 4-step response template at Search Engine Land on May 14, 2026, for the ChatGPT SEO recommendations clients are now routinely forwarding to their agencies. The framework opens by validating the client's effort, leads with what the model got right, then walks the stakeholder through the contradictory evidence so they reach the correction on their own. Used right, the full reply takes about 12 minutes per email and doubles as a quiet discovery call.
What the 4-step reply actually does
The structure is mechanical. Validate first (a one-line thanks), surface the recommendations that have actual merit, walk the client through the data that contradicts the bad ones, then redirect the conversation toward better prompts and better context for next time. The full breakdown is in Olivo's piece.
The reason this works is not that it is a clever client-management trick. It is that the standard agency reply is some variation of "well, AI does not really understand SEO, so here is why we know better." That reply loses regardless of who is technically correct, because the client just got told their effort was misplaced. Olivo's version flips it. The client did the work. The agency is refining it.
The two ChatGPT outputs he picked apart are the two every agency keeps seeing
Olivo gives two real examples from his own client emails. The first is a plastic surgeon competitor analysis claiming the local competition had "focused their SEO" around single procedures. The actual SERPs showed those competitors ranking for multiple procedures at once. The model invented a strategic narrative that was not in the data. The second is a recommendation that procedure pages exceed 3,000 words, when the top-ranking pages were noticeably shorter. Length was not the variable doing the work.
Both fall into the same pattern. ChatGPT inferred a plausible-sounding strategic claim from limited data and stated it with high confidence. That pattern shows up across every published citation-accuracy benchmark. Across frontier models in 2026, citation-related tasks produce hallucinations at an average rate of 12.4% even with extended thinking enabled. For SEO specifically, roughly 2.38% of ChatGPT's cited URLs return a 404. The errors are not random noise. They cluster around specifics: numbers, citations, and strategic claims that require live verification.
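That verification step does not need to be manual. A minimal sketch of a citation audit, assuming you already have the list of URLs a ChatGPT answer cited; the function and the injected `fetch_status` callable are my names for illustration, not anything from Olivo's piece:

```python
from typing import Callable, Iterable


def audit_citations(
    urls: Iterable[str],
    fetch_status: Callable[[str], int],
) -> dict:
    """Split cited URLs into live and dead by HTTP status, so the
    fragile citations can be shown to the client next to the solid ones."""
    alive, dead = [], []
    for url in urls:
        status = fetch_status(url)
        (alive if 200 <= status < 400 else dead).append((url, status))
    total = len(alive) + len(dead)
    return {
        "alive": alive,
        "dead": dead,
        # Share of citations that 404 or otherwise fail -- the same
        # kind of figure as the ~2.38% dead-link rate cited above.
        "dead_rate": len(dead) / total if total else 0.0,
    }
```

In practice `fetch_status` can be a thin wrapper around `urllib.request` issuing HEAD requests with a timeout; injecting it as a parameter keeps the audit testable without touching the network.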
The word-count one is the example I see the most right now. The "more words equals better rankings" myth predates ChatGPT, but the model amplifies it because so much of its training corpus is older SEO content that pushed long-form as a ranking factor. AIOSEO's 2026 word-count breakdown says the ideal length is the one that fully answers the intent, not a fixed threshold. ClickRank's 2026 piece goes further and notes that short content can rank perfectly well when the intent is navigational or transactional. Length is the wrong knob, and the client emailing you a "make every page 3,000 words" recommendation is the casualty of an outdated dataset, not a bad operator.
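Checking the word-count claim against the live SERP is the same kind of five-minute exercise. A sketch, assuming you have already pulled the visible text of the top-ranking pages (the function and parameter names are mine, not from the cited pieces):

```python
from statistics import median


def word_count_check(page_texts: dict[str, str], recommended_min: int) -> dict:
    """Compare the actual word counts of top-ranking pages against a
    'pages must be N+ words' recommendation forwarded by a client."""
    counts = {url: len(text.split()) for url, text in page_texts.items()}
    return {
        "counts": counts,
        "median": median(counts.values()),
        # True only if every top-ranking page clears the threshold;
        # one shorter ranking page is enough to falsify the claim.
        "recommendation_holds": all(
            c >= recommended_min for c in counts.values()
        ),
    }
```

If `recommendation_holds` comes back `False`, the paste into your reply writes itself: here are the pages that actually rank, here are their word counts, and length is not the variable doing the work.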
Where the framework needs sharpening
Olivo's framework is good. I think most agencies will still under-use it.
The piece treats the forwarded email as a client-management problem. From what I have seen across in-house and agency rooms over the last six months, that undersells it. A client sending you ChatGPT SEO advice is a signal that they are now spending energy on SEO without you in the loop. That looks like a threat. It functions as a discovery call instead.
When my last in-house team got these forwards, the useful move was never just to reply. It was to schedule a 30-minute working session and walk through the prompt itself. What was the client actually trying to figure out? Why did they think to ask ChatGPT and not us? The answers surprised me: almost every time, the underlying business question was something we should have been hearing in our weekly check-ins, and the forward was a flag that our intake process was not catching it.
Turn the forward into a deliverable instead of a defense
Concrete action: build a single shared doc with one section per common ChatGPT SEO claim. Title each section with the exact bad recommendation as the H2 ("Procedure pages should be 3,000+ words"). Below it, paste the actual top-ranking SERP screenshot, the actual word counts, and the actual primary source. Link out so the doc updates itself when behavior shifts.
When the forward lands, you do not write a 400-word reply. You paste the relevant section of the doc into your response and add two sentences of context about the client's specific case. Each forward takes about 8 minutes once the doc exists. The doc itself takes a working day. That is the benchmark. If your reply is taking longer than 12 minutes (Olivo's 4-step reply plus your custom paste), the doc has not been built yet.
The downside risk of not having the doc is well-documented in our own coverage. One r/SEO operator followed ChatGPT-generated SEO advice for a sports site and watched daily impressions fall 99.8% inside three months. That outcome is what an unchecked forwarded recommendation looks like 90 days later, and "we did not push back" is not a story you can tell in a QBR without losing the retainer.
The reply most agencies are sending right now (and why it costs the retainer)
I have seen three patterns of bad replies in agency Slacks over the last quarter.
The first is the credentials reply: "We have been doing SEO for 12 years, and that is not how it works." That reply tells the client their effort was wasted and pulls rank instead of educating. The second is the silent reply: thumbs-up emoji, nothing implemented, the recommendation quietly rots in the client's inbox until they ask why nothing changed. The third is the over-correction reply: a 600-word breakdown of every flaw in the ChatGPT output. That one signals defensiveness more than expertise. Clients read it and feel like they have embarrassed someone, and embarrassed clients tend to start price-shopping.
What Olivo's framework gets right is the order. Validation, then the useful bits, then the contradiction, then the redirect. The order matters more than the specific wording. Reverse it and you have written the over-correction reply by accident.
What 12 months of these forwards will train your retainer book to do
The forwards are not going away. A March 2026 piece on AI content production noted that roughly 90% of AI-written SEO articles never rank, which means the same volume of bad ChatGPT-generated advice will keep getting forwarded long after the model improves. From what I have seen across three agencies in the last six months, clients who send three or more of these forwards in a quarter are noticeably more likely to bring up "switching to an AI-first agency" in their next QBR. The forwards are early-warning, not minor noise.
Notice Me Senpai has already covered a related discovery: Kevin Indig's consensus-gap analysis found that only about 2.37% of AI citations survive across all three major engines. The model is confident, the citations are fragile, and the client is forwarding both to you with the assumption that the recommendation is grounded somewhere it usually is not.
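Indig's full methodology is in the linked analysis; the core consensus check reduces to a set intersection over citations per engine. A toy sketch, with engine names and data made up for illustration:

```python
def citation_consensus(citations_by_engine: dict[str, set[str]]) -> dict:
    """Which cited URLs appear in *every* engine's answer, and what
    fraction of all observed citations that consensus represents."""
    per_engine = list(citations_by_engine.values())
    union = set().union(*per_engine)            # every URL any engine cited
    consensus = set.intersection(*per_engine)   # URLs all engines agree on
    return {
        "consensus": consensus,
        "survival_rate": len(consensus) / len(union) if union else 0.0,
    }
```

A low survival rate is exactly the fragility the client's forward is unknowingly built on: each engine sounds confident, but the sources barely overlap.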
The forward is the brief
Build the doc this week. Once it exists, every ChatGPT forward becomes a 12-minute proof that the agency caught what the model missed. From what I have watched in the renewal cycle, the clients who get that proof tend to renew at higher rates than the ones who get the silent reply or the credentials reply. I would rather be the agency the client screenshots than the one the client switches away from.
Notice Me Senpai Editorial