PPC Measurement Broke the Moment Google Started Keeping Score
There's a question buried in every PPC performance review that most teams skip over. Not "what's our ROAS" or "should we increase budget." The uncomfortable one: when the platform setting your bids is the same platform reporting your results, how much of that dashboard is measurement and how much is a sales pitch?
Brooke Osmundson at Search Engine Journal published a framework this week for measuring PPC when AI controls the auction. The framework is solid. But I think the more interesting question is the one it circles around without quite saying: Google is now the auctioneer, the bidder, and the scorekeeper, all at once. And the scorekeeper's parent company made $300 billion in ad revenue last year.
Call it incentive math. When 74% of your total revenue comes from advertising, the self-reported metrics deserve a footnote.
The referee also owns the team
A lot has shifted in the last 18 months. Performance Max campaigns now drive 62% of all Google ad clicks. Smart Bidding processes over 70 million signals per auction, including your browser history, location, time of day, and conversion likelihood scores you can't inspect. AI Max expands your keyword lists using "contextual signals" that Google describes in the vaguest possible terms.
And this part is worth sitting with for a second: search term visibility has been deteriorating since 2020. A significant chunk of traffic in Performance Max is invisible to you as the advertiser. You can't see which queries triggered your ads, which placements ran, or how the algorithm decided to allocate your budget across Search, YouTube, Display, Discover, Gmail, and Maps.
Google did add channel reporting for PMax in April 2025, which was a step forward. But it's a bit like your financial advisor finally telling you which asset classes they invested in while still keeping the individual stock picks private.
An independent analysis from Smarter Ecommerce across 250+ retail campaigns found that AI Max delivers conversions at roughly 35% lower ROAS compared to traditional targeting within the same campaigns. Google, naturally, reports different numbers. Their internal data shows a 14% conversion lift for non-retail advertisers using AI Max. Both of these things can technically be true, and that's sort of the problem. When the platform runs the test and writes the report, you end up trusting whichever number confirms what you already believed.
Why your dashboard numbers feel increasingly fictional
The traditional PPC measurement stack was built for a world where you picked keywords, set bids, and tracked clicks through a conversion funnel. Each input had a clear relationship to each output.
That world is gone. AI systems optimize for outcomes rather than inputs. The algorithm evaluates combinations of signals that no human could replicate or audit. Which means the relationship between "what I did in the campaign" and "what the dashboard says happened" has gotten genuinely murky.
SparkToro and Datos research shows that nearly 60% of Google searches now end without a click to any website. The zero-click rate for queries with AI Overviews jumps to 83%. Your paid and organic strategies are fighting over a shrinking pool of actual clicks, and the platform selling you ads is the same one reducing how many clicks exist.
I don't think most teams have fully processed what that means for measurement. You're paying for visibility in a system that's actively reducing visibility's value.
Google just made the best measurement tool affordable (and most teams haven't noticed)
The quiet development that actually matters: Google reduced the minimum spend for incrementality testing from $100,000 to $5,000. That happened in late 2025, and the updated methodology delivers conclusive results up to 50% more frequently than the previous version.
For years, incrementality testing was something only enterprise advertisers could afford. You'd run a geo holdout test, pause ads in a set of matched markets, and compare conversion rates against markets where ads kept running. It's the closest thing to a genuine controlled experiment in digital advertising. And at a $100K minimum, it was locked behind a budget most teams don't have.
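The readout itself is simple arithmetic. Here's a minimal sketch of the comparison a holdout boils down to, with hypothetical market numbers:

```python
# Minimal sketch of the geo holdout readout. All numbers are hypothetical.
treatment = {"conversions": 1200, "reach": 500_000}  # matched markets, ads on
holdout = {"conversions": 780, "reach": 500_000}     # matched markets, ads paused

# Normalize to per-capita rates so unevenly sized markets compare fairly.
treat_rate = treatment["conversions"] / treatment["reach"]
hold_rate = holdout["conversions"] / holdout["reach"]

# Share of the treatment markets' conversions the ads actually caused.
incrementality = (treat_rate - hold_rate) / treat_rate
print(f"Incrementality: {incrementality:.0%}")  # 35% here; the rest was organic
```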
At $5,000, this changes who gets to ask the hardest question in PPC: "Would these conversions have happened without the ad?"
The answer, from what I've seen in discussions on r/PPC and various case studies, is often uncomfortable. Branded search campaigns frequently show 40-60% of their conversions would have happened organically. The campaigns still helped, but the dashboard was taking credit for demand that already existed.
If you're spending more than $5,000 a month on Google Ads and you haven't run an incrementality test, you're essentially trusting the casino's math on your winnings.
Blended CAC: the metric that doesn't care who takes credit
Osmundson's framework recommends blended customer acquisition cost, and I think she's right that this is where measurement needs to land. The formula is simple: total acquisition spend divided by total new customers acquired.
It's deliberately crude. It doesn't try to attribute individual conversions to individual campaigns. It doesn't care whether Google or Meta or organic search gets the credit. It just asks: across everything you spent, how much did each new customer cost?
This matters because attribution models are increasingly fictional. Google's data-driven attribution spreads credit across touchpoints using a model you can't inspect. Last-click is a lie. First-click is a different lie. Blended CAC sidesteps the entire argument by measuring business outcomes, not platform metrics.
The catch is that blended CAC requires clean data on actual new customer acquisition, not just conversion events. You need CRM data, you need to separate new from returning customers, and you need to import offline conversions. Most mid-market teams don't have this infrastructure yet, which is why they default to whatever number Google puts on the dashboard.
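For teams that do have the exports, the computation is the easy part. A minimal sketch, with placeholder file and column names standing in for whatever your CRM and ad platforms actually produce:

```python
import pandas as pd

# Hypothetical exports: spend per platform, plus CRM orders carrying a
# new-customer flag. The schema here is a placeholder for your own.
spend = pd.DataFrame({
    "platform": ["google_ads", "meta", "microsoft_ads"],
    "spend": [42_000, 18_000, 6_000],
})

orders = pd.DataFrame({
    "customer_id": [101, 102, 103, 101, 104],
    "is_new_customer": [True, True, True, False, True],  # from CRM, not the pixel
})

# Count distinct new customers, not conversion events. A repeat buyer firing
# a "purchase" conversion is exactly what inflates platform-reported CAC.
new_customers = orders.loc[orders["is_new_customer"], "customer_id"].nunique()

blended_cac = spend["spend"].sum() / new_customers
print(f"Blended CAC: ${blended_cac:,.0f}")  # $66,000 / 4 = $16,500
```

The `nunique()` on customer IDs is the whole point: it's the step that keeps a returning buyer from counting as an acquisition, and it's the step you can't do without CRM data.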
What to pull up in your account before Friday
First: open your Google Ads account and pull the conversion lag report. If more than 20% of your conversions report after 7+ days, your attribution window is probably too short. Extending it to 60-90 days gives the algorithm better feedback and gives you more honest numbers. Though I'll admit "more honest" is relative when the grader writes the rubric.
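If you'd rather script the check than eyeball the report, something like this works against a conversions export. The file and column names are assumptions; adapt them to whatever lag data you can actually pull:

```python
import pandas as pd

# Hypothetical export: one row per conversion, with click and conversion
# timestamps. Column names are placeholders for your own export.
df = pd.read_csv("conversions_export.csv",
                 parse_dates=["click_time", "conversion_time"])

df["lag_days"] = (df["conversion_time"] - df["click_time"]).dt.days

late_share = (df["lag_days"] >= 7).mean()
print(f"Conversions reporting after 7+ days: {late_share:.0%}")

# Rule of thumb from above: past ~20%, a short window is under-crediting
# and starving the bidder of real feedback.
if late_share > 0.20:
    print("Consider extending the conversion window toward 60-90 days.")
```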
Second: if you're running Performance Max, separate your branded search into its own campaign. PMax loves to cannibalize brand traffic because it's easy to convert and makes the campaign metrics look great. Pulling brand out forces PMax to justify itself on non-brand performance, the incremental demand you're actually paying for. We covered a similar auditability problem with Enhanced Conversions recently.
Third: go to GA4, pull time-to-conversion data, and compare 3-month, 6-month, and 9-month windows. If the average time to conversion is getting longer (and for most accounts, it is), that's the algorithm chasing harder-to-convert users further down the funnel. Not necessarily bad. But it changes what "good performance" looks like.
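A rough sketch of that drift check, assuming you've exported one row per conversion with a days-to-convert field (the names here are placeholders, not GA4's actual export schema):

```python
import pandas as pd

# Hypothetical GA4 export: one row per conversion, with the date it landed
# and the days elapsed since first touch.
df = pd.read_csv("ga4_conversions.csv", parse_dates=["conversion_date"])

latest = df["conversion_date"].max()
for months in (3, 6, 9):
    window = df[df["conversion_date"] >= latest - pd.DateOffset(months=months)]
    print(f"Trailing {months}mo avg time to conversion: "
          f"{window['days_to_convert'].mean():.1f} days")

# If the trailing-3-month average runs meaningfully above the trailing-9,
# the algorithm is reaching deeper into the funnel, and short-window ROAS
# will understate what the spend is actually doing.
```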
And if your monthly spend clears $5,000, seriously, set up a geo holdout test. Google's own incrementality testing tool is free to use. Pick 4-6 matched DMAs, pause ads in half of them for 4 weeks, and compare. The number that comes back might be uncomfortable. That's the point.
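The matching is the part people skimp on. One simple approach is pairing DMAs by pre-period conversion volume and splitting each pair; a sketch with made-up markets:

```python
import pandas as pd

# Hypothetical pre-period baseline: avg weekly conversions per candidate DMA.
baseline = pd.DataFrame({
    "dma": ["Boise", "Tulsa", "Fresno", "Spokane", "Des Moines", "Richmond"],
    "weekly_conversions": [118, 121, 240, 236, 87, 90],
}).sort_values("weekly_conversions").reset_index(drop=True)

# Pair adjacent DMAs by baseline volume, then split each pair: one keeps
# ads running (treatment), one pauses them for the 4-week test (holdout).
pairs = [(baseline.iloc[i], baseline.iloc[i + 1])
         for i in range(0, len(baseline), 2)]
for treat, hold in pairs:
    print(f"Treatment: {treat['dma']:<11} Holdout: {hold['dma']}")
```

Real matched-market selection usually also checks trend correlation over the pre-period, not just raw volume; this just shows the shape of the idea.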
The measurement gap that's widening, not closing
I think the PPC industry is heading toward a split. On one side, teams that treat Google's self-reported metrics as directional and build independent measurement around blended CAC, incrementality, and first-party conversion quality. On the other side, teams that keep optimizing ROAS inside Google's own dashboard and wondering why their business growth doesn't match it.
The uncomfortable part is that building independent measurement costs real money and engineering time. It requires CRM integration, clean data pipelines, and a willingness to discover that some of your "best performing" campaigns are mostly capturing demand that already existed.
But the alternative gets less defensible every quarter. Especially when the platform processing 70 million signals per auction still can't tell you which ones mattered, or why it decided your budget was best spent on a Gmail placement at 2 AM.