Google Conceded the Search Terms Report Is Now Just an Approximation
On May 13, 2026, Google updated a Google Ads help page to say the Search Terms Report may show the “closest approximation” of user queries, not the literal searches. The change, spotted by AdSquire founder Anthony Higman and surfaced by Search Engine Land, retroactively turns years of negative-keyword and match-type audits into work built on partly fictional data. The 30-minute fix is re-running your last 90 days of negatives against the AI Max search-terms export.
How a help-page edit reset the meaning of a 16-year-old report
The Search Terms Report launched in 2010 as the canonical record of what people actually typed before clicking your ad. Higman noticed the wording change buried inside an ad-group and asset-group prioritization help page, not in a press release, not in a policy-update email. The new line reframes the report as “the closest approximation” of a query, citing the complexity of modern search behavior.
In practice, this is Google catching its documentation up to a system change it already shipped. Since broad match was promoted to the default recommended match type in 2021, and AI Max went account-level this spring, Google has been routing queries through an intent-modeling layer before deciding which term a click gets attributed to. The Search Terms Report has been showing the rewritten or paraphrased version for a while. The help page now just admits it.
The wording matters because it changes what the report formally is. A canonical record of user input is one thing. An approximation generated by a matching algorithm is another. The metrics attached to those terms (clicks, cost, conversions) are still real. The linguistic label on each row is not necessarily what anyone searched.
Why this breaks the standard negative-keyword playbook
The negative-keyword audit, taught in every PPC course for 14 years, assumes the search terms list shows what users typed. You pull terms with bad ROAS, you add them as negatives, you reclaim spend. If the terms are approximations, two things go wrong.
First, you can add a negative for “free roof inspection” because the report shows that term, but the actual query might have been “roof leak emergency,” which Google decided to bundle under the same intent. Your negative now blocks queries you never actually saw. Whether that hurts you depends on how aggressive Google’s intent grouping is in your account, which is the part you cannot inspect.
Second, the bad-performing variants might never appear in the report at all. Search Engine Land documented in 2020 that Google quietly cut the volume of visible search terms, and follow-up agency analyses estimated that 20 to 30 percent of spend in many accounts goes to terms Google considers too thin to surface. AI Max widens that gap by inserting paraphrased terms in place of literal ones.
I think the implication is uncomfortable: a meaningful slice of your negative-keyword list is probably blocking queries that didn’t exist and missing queries that did. The audit ritual still produces a tidy spreadsheet. It just doesn’t produce the certainty it used to.
What the report still tells you reliably
Not everything is broken. The Match Type column still tells you whether a term was triggered by an exact, phrase, or broad-match keyword, and exact-match terms are still close to actual queries because exact match has tighter close-variant rules than broad. The performance data (clicks, cost, conversions) is real, no matter how the term itself was labeled. What’s approximate is the words attached to that performance, not the performance itself.
For AI Max specifically, Google ships a separate search-terms-and-landing-pages view that shows the headline, landing page, campaign, and ad group that made up the customer’s full ad journey. That view is the closer-to-truth picture for AI Max campaigns. If you are running AI Max and only looking at the default search terms tab, you are looking at the more abstracted layer.
The 30-minute audit worth running before next QBR
Pull your last 90 days of search-terms data and split it into three buckets, then treat each bucket differently.
Exact-match terms are still the highest-signal source for negative keywords. Trust the wording, trust the performance, treat these as the audit’s core. Broad-match terms with AI Max enabled should be treated as approximate. Only add negatives when the term is clearly off-topic (wrong industry, wrong intent category), not when it’s a borderline variant of a keyword you want. Terms with “Other” in the match-type column should be ignored for negatives entirely, since you don’t know what Google rewrote them from.
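A minimal pandas sketch of that triage, assuming a CSV export with columns named “Search term”, “Match type”, “Cost”, and “Conversions” (real export headers vary by report and UI language, so rename to match yours):

```python
import pandas as pd

# Hypothetical filename; point this at your own 90-day search-terms export.
df = pd.read_csv("search_terms_last_90_days.csv")

def bucket(match_type: str) -> str:
    """Sort a row into one of the three audit buckets by its match type."""
    mt = str(match_type).strip().lower()
    if mt.startswith("exact"):
        return "trust"        # wording and performance both reliable
    if mt.startswith("other"):
        return "ignore"       # no way to know what Google rewrote this from
    return "approximate"      # broad/phrase under AI Max: directional only

df["bucket"] = df["Match type"].map(bucket)

# Core of the audit: exact-match terms that spent money without converting
# are still safe negative-keyword candidates. The cost floor of 10 is an
# assumption; tune it to your account's economics.
negatives = df[
    (df["bucket"] == "trust")
    & (df["Conversions"] == 0)
    & (df["Cost"] > 10)
]["Search term"]

print(negatives.to_list())
```

The “approximate” bucket runs through the same frame, but only clearly off-topic terms graduate to negatives; the “ignore” bucket never does.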
For ROAS analysis, group performance by ad group rather than by term. Ad group performance is real. Per-term performance under broad match and AI Max is now a story Google is partly authoring on your behalf.
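A sketch of that rollup, continuing with the same assumed export (“Ad group” and “Conv. value” are assumed column names):

```python
import pandas as pd

# Same hypothetical export as in the bucketing sketch above.
df = pd.read_csv("search_terms_last_90_days.csv")

# Term labels may be approximations, but the cost and conversion value on
# each row are real, so ad-group-level aggregates still hold up.
by_ad_group = (
    df.groupby("Ad group")[["Cost", "Conv. value"]]
      .sum()
      .assign(roas=lambda g: g["Conv. value"] / g["Cost"])
      .sort_values("roas")   # zero-cost groups produce inf; filter if needed
)

print(by_ad_group.head(10))  # lowest-ROAS ad groups first
```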
Where this leaves the keyword-as-strategy debate
Frederick Vallaeys at Optmyzr already eulogized keywords as a targeting unit this quarter, though his own study quietly showed phrase match still beats broad on conversion rate. The Higman discovery is a different kind of evidence. Google isn’t just deprioritizing keywords as a targeting unit, it’s quietly editing what the post-hoc record of them looks like.
From what I’ve seen, the teams who treat the search-terms report as a directional signal rather than ground truth end up in roughly the right place. The teams who run weekly negative-keyword drills off broad-match terms are doing work with diminishing returns, and probably some hidden cost. And to be fair, that’s not entirely new. Google has been moving this direction for years. It just feels a lot less plausibly deniable now that the help page itself says “approximation.”
One question worth raising with your Google rep this week
Ask whether your account’s broad-match coverage includes AI-rewritten queries in the search-terms export, and whether there’s any way to see the raw query alongside the approximation. The honest answer is going to be no, you can’t get the raw query, but the conversation gets useful when the rep has to confirm that on a recorded call. It changes how your next QBR reads when there’s a documented gap between what Google shows you and what users actually typed.
The trust deficit Google still hasn’t addressed
The bigger problem isn’t this one help-page edit. It’s that Google has been layering AI-rewriting steps onto its match system for two years, and the documentation only catches up after someone like Higman spots it. Each layer makes the report less of a record and more of an interpretation.
I don’t think this is the last reclassification we’ll see. Google has already announced that Dynamic Search Ads, automatically created assets, and campaign-level broad match will auto-upgrade to AI Max in September 2026, which will pull more accounts under the same approximation layer. The right posture, at least for the rest of this year, is to stop treating the search-terms report as truth, treat it as one of several signals, and put more weight on ad group and asset-level performance where Google still gives you the actual numbers.
It’s a small wording change. The reaudit hours it quietly implies are not.