Meta's Suite of Truth Is a Measurement Framework Where Meta Grades Itself

Meta's Suite of Truth stacks three measurement methods and quietly calibrates all of them against its own Conversion Lift tool.

Meta published "Building a Suite of Truth" on May 28, 2025, recommending advertisers combine rules-based attribution, MMM, and randomized experiments. The framework is backed by 307 Meta Conversion Lift studies and claims last-click attribution hides 31% of Meta's incremental conversions. All three measurement tiers, as Meta frames them, loop back to Meta's own lift tool as the calibration anchor.

That last sentence is where I want to start, because it is the whole thing in one line. You can read Meta's original white paper and come away thinking this is a pro-methodology move. It is, partly. It is also a very good argument for why your board should trust Meta's numbers more than your MMM.

The three tiers, in Meta's own order

Meta's framework stacks three measurement methods by what it calls causal rigor:

  1. Rules-based attribution at the bottom. Last-click, last-touch, first-click, whatever rules your MTA vendor applies.
  2. MMM and modeled multi-touch attribution in the middle. Top-down and regression-driven; the MMM side needs no user-level data at all.
  3. Randomized experiments at the top. Conversion Lift, geo-lift, holdout tests.

The underlying research sits inside Meta's Measurement 360 brief, drawn from 307 Conversion Lift studies across 54 advertisers between March 2022 and November 2024. Meta's recommendation is blunt: use tier 3 outputs to calibrate tiers 1 and 2, and feed the calibrated numbers into ad delivery.
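Mechanically, that calibration step is a one-line rescale. Here is a minimal sketch of what "use tier 3 to calibrate tier 1" amounts to; the names and numbers are mine, purely illustrative, not from Meta's brief:

```python
# Hypothetical sketch of tier-3-calibrates-tier-1, not Meta's pipeline.
# A lift study measures incremental conversions; dividing by what the
# rules-based tier credited yields a multiplier that rescales every
# future attributed figure before it feeds budget and bidding.

last_click_conversions = 1_000   # tier 1: what last-click credits to Meta
lift_study_incremental = 1_300   # tier 3: what the lift study measured

multiplier = lift_study_incremental / last_click_conversions  # 1.3

def calibrate(attributed_conversions: float) -> float:
    """Rescale a rules-based number to the lift-study benchmark."""
    return attributed_conversions * multiplier

# Next month's last-click figure, restated in lift-study units.
print(calibrate(850))  # 1105.0
```

Note what never happens in that loop: nothing flows the other way. The lift study is the benchmark; the rules-based number only ever adjusts toward it.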

Nothing about that stack is controversial on its own. Every measurement scientist worth paying attention to has been saying "triangulate your methods" for a decade. The part worth pausing on is the verb Meta uses. Calibrate. Your MMM does not get to tell Meta it is wrong. Meta's lift study tells your MMM to adjust.

The 31% number is doing a lot of work

Meta's signature finding is that last-click misallocates 31% of Meta's incremental conversions to other channels, and that rules-based outputs need a minimum 1.45X multiplier to match the incrementality Meta measured. Both figures come from Meta's Conversion Lift dataset, which is reasonable in isolation: you cannot benchmark incrementality without some kind of ground truth, and Meta has cleaner user-level data than almost anyone.
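For what it is worth, the two headline numbers look like one ratio seen from both sides. Assuming the 1.45X is simply the reciprocal of the 31% gap, which Meta's framing suggests but does not spell out, the arithmetic is consistent:

```python
# If last-click surfaces only 69% of the incremental conversions the
# lift studies measure, recovering the full amount takes 1 / 0.69.
hidden_share = 0.31
multiplier = 1 / (1 - hidden_share)
print(f"{multiplier:.2f}X")  # 1.45X
```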

But it also means the "true" value of Meta ads was measured by Meta's system, and every other channel's incremental contribution was treated as noise by comparison. That does not invalidate the framework. It does mean the word "truth" is doing more work than the data warrants.

Meta paired the white paper with Incremental Attribution, which rolled out on June 4, 2025, and posted a 46% average lift in incremental conversions in testing across 30 advertisers and eight verticals from July to October 2024. It is a legitimately better signal than last-click for bidding inside Meta. It is also measured against Meta's in-house counterfactual, which you cannot audit from the outside.

The case studies are strong. H&M tripled incremental ROAS between 2023 and 2025. Huel claims a 6X jump in marginal return and 11.8 million additional incremental people reached. Laura Geller from AS Beauty posted 3.3X higher incremental ROAS and 71% lower incremental cost per acquisition. I believe the direction of those numbers. I am less sure the magnitudes travel to accounts that do not get hands-on support from Meta's measurement team.

Where the circular logic lives

This is the part agency write-ups tend to skim. Meta's framework says:

  • Calibrate your MTA and MMM using randomized experiments.
  • The randomized experiments Meta recommends are Meta's own Conversion Lift studies.
  • The "truth" those experiments output is what Meta's ad system will optimize against going forward.

On paper, that looks like a closed loop that makes Meta more accurate. In practice, it is a closed loop that makes Meta's numbers less comparable to everything else in your media mix. If your MMM keeps insisting Meta should drop from 40% of spend to 28%, and the Conversion Lift calibration keeps bumping it back up, your planner eventually stops trusting the MMM. CFOs run the same loop in the other direction: spend goes up, ROAS somehow looks steady, and it is Meta's numbers they stop trusting.

And to be fair, this is not entirely new. Google has pushed similar calibration moves with Meridian and its own lift tools, and on May 22, 2025 it dropped its incrementality budget threshold to $5,000 to make lift studies available to smaller accounts. The difference is that Meta is pushing the calibration-as-default posture much more aggressively and tying it directly to the bidding layer through Incremental Attribution.

Kantar's research, cited in Meta's own deck, found that the average advertiser uses 3.8 measurement solutions and 55% see contradicting results across them. Roughly 51% default to whichever solution feels most internally credible, regardless of methodological rigor. Meta's Suite of Truth is a very explicit bid to be the solution you default to, and the 31% and 46% numbers are engineered to sit in a deck that convinces your CMO to let it happen.

Custom Attribution is the olive branch, with strings

The most interesting technical move sits in Custom Attribution. It lets advertisers push granular click-level data from Adobe Advertising, Northbeam, Rockerbox, and Triple Whale into Meta's optimization layer, where Meta tests whether those external signals improve its own models. Fospha's breakdown frames it as the first real door Meta has cracked open to external MTA tools since Apple's ATT.

The direction of data flow is one-way, though. Third parties send Meta more signal. Meta uses that signal to optimize Meta ads. None of it makes your third-party tool more accurate, and none of it gives you a window into Meta's counterfactual. If you are reading Custom Attribution as "Meta is opening up," you are reading it wrong. It is Meta saying "send us better signal and we will deliver more of what you already measure."

The pattern across the platforms is consistent now. OpenAI is building ChatGPT conversion tracking that only OpenAI can read. Google just added a PMax spend-over-time chart that shows where your budget goes but not what it bought. The major platforms are all narrowing the space where their own numbers get audited from the outside. Meta is just the most polished about it.

The cross-platform check Meta's framework skips

The easy move is to adopt Suite of Truth wholesale. The harder, better move is to keep one piece of measurement that is not calibrated against Meta at all.

From what I have seen, a geo-holdout that Meta does not design and does not read outputs from is the most useful piece of protection you can still run. Split DMAs, keep Meta dark in a handful of them for two full purchase cycles, compare revenue lift against matched control geos. It is expensive and annoying. It also produces one number that is not cross-contaminated by Meta's optimization layer.
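The readout itself is simple arithmetic. A minimal sketch, assuming you have already matched treatment and holdout DMAs on pre-period revenue; every name and number here is illustrative, not from any vendor tool:

```python
# Minimal geo-holdout readout. Treatment = geos where Meta kept running,
# control = matched geos held dark for two full purchase cycles.
# All figures are made up for the example.

def geo_lift(treatment_revenue: float, control_revenue: float) -> float:
    """Relative revenue lift of Meta-on geos over matched dark geos."""
    return (treatment_revenue - control_revenue) / control_revenue

lift = geo_lift(treatment_revenue=1_240_000, control_revenue=1_050_000)
print(f"{lift:.1%}")  # 18.1%
```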

A practical benchmark: if your geo-lift lands within 20% of what Meta's Conversion Lift reports, the calibration is probably fine. If the gap stays above 40% across two consecutive tests, your MMM is telling you something Meta's framework is engineered to ignore. That is when you keep the MMM weighting high in budget decisions, no matter how polished the Suite of Truth deck looked on the agency call.
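Spelled out as a decision rule, under one assumption I am adding myself, that the gap is measured relative to Meta's reported lift:

```python
# One reading of the 20%/40% rule above. Measuring the gap relative to
# Meta's reported lift is my assumption; the thresholds are from the text.

def calibration_verdict(geo_lift: float, meta_lift: float) -> str:
    gap = abs(geo_lift - meta_lift) / meta_lift
    if gap <= 0.20:
        return "calibration probably fine"
    if gap > 0.40:
        # Only act on this if it repeats across two consecutive tests.
        return "keep the MMM weighting high"
    return "gray zone: rerun before deciding"

print(calibration_verdict(geo_lift=0.15, meta_lift=0.30))
# 50% gap -> "keep the MMM weighting high"
```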

I do not think Suite of Truth is bad measurement. I think it is measurement that looks most generous to the thing doing the measuring. Keeping one geo-holdout you own end-to-end probably costs a weekend of setup and saves you from reading Meta's grade on Meta's homework for the rest of 2026.
