Nestlé, Haleon, and Molson Coors All Conceded Their Measurement Is Broken

CPG measurement is publicly broken in 2026. Nestlé, Haleon, and Molson Coors said so on the record, but stopped short of naming what they are testing instead.

Three CPG marketing leads at Nestlé, Haleon, and Molson Coors said in late April that their measurement systems are broken. None of them named a replacement. The IAB State of Data 2026 report explains why: 75% of marketers say their attribution, incrementality, and MMM tools are not delivering the speed, accuracy, or trust they need.

What three CPG leads actually said

Nicole Lesinski, Nestlé's director of ecommerce strategy, told Adweek that "marketers have been trying to measure incrementality for decades, and there just aren't great solutions." Marissa Solan, Haleon's U.S. director of earned and social media (Advil, Theraflu, TUMS), framed AI as the long-overdue catalyst that might finally make a dent in the incrementality problem. Anna Johnson, Molson Coors' director of precision media and marketing, focused on something quieter and weirder: even with a well-planned CTV buy, the algorithm pulls spend toward whatever inventory scales easiest, usually the same handful of dominant publishers.

That last one is the most interesting admission of the three, and it's the one nobody in the trade press is reading carefully. Johnson is talking about delivery, not planning. The plan was fine. The system pulled her spend somewhere else.

You will notice none of these executives named what they were using to fix any of this. That silence is not a coincidence. Three CPG companies do not call out their measurement provider at the same conference unless they want their CFOs comparing notes, and none of them want that yet.

The structural problem with MMM in CPG

Most legacy MMM tools refresh quarterly. By the time the next budget meeting starts, the model is producing recommendations off data that can be up to 90 days old. In a category where promo cycles run two to four weeks, that is the equivalent of driving while watching the road a mile behind you.

The deeper issue is what researchers call the Trade Promotion Conflation problem. In CPG, roughly 30 to 40 percent of what an MMM credits to media spend is actually being driven by promotional elasticity (price cuts, end-cap displays, Sunday inserts). That number alone should worry anyone who has ever made a budget cut based on an MMM output. As the IAB walked through, CPG brands collectively spend around $500 billion a year on trade promotions, and somewhere between 35 and 40 percent of that spend is considered wasted partly because no measurement system cleanly separates trade and media effects.

So when Lesinski says incrementality is unsolved, she is being more accurate than most. The problem is not that we lack tools. The problem is that the tools we have keep crediting work to the wrong line item.

What Johnson at Molson Coors is really describing

The CTV "algorithm pulled my spend" admission deserves its own paragraph, because every CPG team running connected TV right now is dealing with a version of it.

When you buy CTV through a DSP that auctions across smart TV manufacturers, streaming apps, and various publisher SSPs, the system is making thousands of micro-decisions about where your impressions land. Some of those decisions are fine. Some of them route you into invalid traffic. According to Measured's 2026 CTV insights research, invalid traffic in CTV runs at a 3.5 percent median, which is roughly seven times higher than the equivalent in non-CTV digital inventory. Some marketers see IVT as high as 26 percent on individual campaigns.

That is the gap Johnson is talking about, even if she did not put it in those terms. You buy premium inventory and your delivery slides toward whatever has the most volume, which is usually whatever has the loosest fraud controls and the most aggregated, opaque measurement. The tracker tells you the campaign hit plan. The lift study tells you it did not.

Why the silence on tooling matters

Most of the names in this space (Mutinex, Tracer, Measured, Liftlab, Recast, Haus) are pitching some combination of MMM plus incrementality testing plus cleaner causal modeling. The category has consolidated into a fairly clear bake-off, and most large CPG marketers I've talked to are running pilots on at least two of them in parallel. We covered the same dynamic in our piece on Hershey's $450M media team running 2024 data halfway through 2025 and in the MiQ household measurement work that found 43 percent of marketers do not trust their current setup.

What is happening at Nestlé, Haleon, and Molson Coors is the same thing happening at most multinational CPG companies: pilots running, no public commitments, no comparative ROAS published, no one yet willing to plant the flag. From what I've seen, the executives are right to be cautious. The output spread between two of these tools on the same campaign data can land north of 30 percent.

That is enough to move budgets in opposite directions on the same input.

The reason none of them named the replacement is that none of them are sure the replacement is right yet. That is more honest than the press release version most vendors are running.

A 60-day audit any CPG marketer can run today

I would not wait for the next vendor pitch deck before doing this. Pull your last full quarter of MMM output. For each line item credited with ROAS above a 1.5x threshold, check three things:

1. Was a price promotion (TPR, BOGO, end-cap) running concurrently in that channel?
2. What was the delivered placement mix on any CTV or YouTube line item versus what was planned?
3. What was the model refresh date relative to the promo cycles you ran in the period?

If more than 30 percent of your "winning" line items had concurrent trade promotions the model didn't isolate, your MMM is conflating trade and media. If more than 20 percent of your CTV delivered placements diverged from plan, you have a CTV inventory drift problem before you have a measurement problem. If your model refresh is older than 60 days against a four-week promo cadence, your data is pointing one budget cycle behind.
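The three checks and thresholds above can be sketched as a short script. This is a minimal sketch under assumed field names (`had_concurrent_promo`, `planned_mix`, `delivered_mix`, and so on); no MMM vendor exports this schema, so you would map your own output into this shape first.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LineItem:
    # Hypothetical record shape; field names are assumptions, not a vendor schema.
    channel: str
    roas: float
    had_concurrent_promo: bool   # TPR, BOGO, or end-cap overlapped the flight
    planned_mix: dict            # e.g. {"PublisherA": 0.6, "SmartTV_OEM": 0.4}
    delivered_mix: dict          # same keys, actual delivered share

def triage(items, model_refresh: date, period_end: date,
           promo_cadence_days: int = 28):
    """Run the three-check audit and return a flag per failure mode."""
    winners = [i for i in items if i.roas > 1.5]

    # Check 1: share of winning line items with an unisolated concurrent promo.
    conflated = sum(i.had_concurrent_promo for i in winners) / max(len(winners), 1)

    # Check 2: planned-vs-delivered drift, measured as total variation distance
    # (half the sum of absolute share differences; 0 = on plan, 1 = fully off).
    def drift(item):
        keys = set(item.planned_mix) | set(item.delivered_mix)
        return sum(abs(item.planned_mix.get(k, 0.0) - item.delivered_mix.get(k, 0.0))
                   for k in keys) / 2

    max_drift = max((drift(i) for i in winners), default=0.0)

    # Check 3: model refresh older than 60 days against a ~4-week promo cadence.
    stale = (period_end - model_refresh).days > 60 and promo_cadence_days <= 28

    return {
        "trade_conflation_flag": conflated > 0.30,
        "ctv_drift_flag": max_drift > 0.20,
        "stale_model_flag": stale,
    }
```

Total variation distance is one simple choice for the drift metric; any measure of how far the delivered mix moved from plan works, as long as you apply it consistently across line items.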

These three checks are not a substitute for a real causal model. They are a triage protocol you can finish in an afternoon, and they will tell you whether the next pilot conversation should focus on incrementality testing, attribution, or just better delivery audit. Most teams I've seen need all three, but in different orders.

The honest read

There is something almost reassuring about three CPG executives in a row saying their measurement does not work. The polite consensus until pretty recently was that the discipline was "evolving" or "improving." That language was useful for vendors and useless for everyone else.

Saying it is broken, in public, on the record, is the first step toward actually fixing it. The next step is a category-wide push to publish comparative incrementality results so CFOs can stop relying on whichever vendor wrote the most confident slide deck. I do not think we see that in 2026. But the conversation is finally adult, and that is a meaningful change from where it was even six months ago.

Notice Me Senpai Editorial