Meta Built the Tool That Decides How Much Credit Meta Deserves
By Notice Me Senpai Editorial
Meta described Robyn's mission as "democratizing modeling knowledge" and "reducing human bias." Those are real goals. They also happen to serve Meta's commercial interests perfectly, which is the part that doesn't make the press release.
Nearly half of US marketers plan to increase their MMM investment this year, according to EMARKETER and TransUnion. A meaningful chunk of that budget will land on Robyn because it's free, it's good, and Meta's measurement team recommends it. The question nobody seems to be asking loudly enough: what happens when the tool measuring your media effectiveness was built by your largest media vendor?
When the Referee Also Plays for One of the Teams
Robyn is Meta's open-source marketing mix model. Free to download, well-documented, actively maintained on GitHub. It solves a genuine problem: most brands can't afford custom MMM, and the alternative (trusting platform-reported attribution from inside the walled garden) is arguably worse. So Meta open-sourced a tool that measures marketing effectiveness across all channels, not just theirs.
The catch is structural. Meta sells media. Meta also built the tool that tells you how much credit that media deserves. The code is public, so anyone can audit it in theory. In practice, most marketing teams don't have a data scientist who can read Ridge regression implementations and identify where the default assumptions might favor one channel type over another.
Israel Grintz put this plainly in a LinkedIn discussion covered by PPC Land: MMM "sits right in the middle of one of the biggest fights in advertising, who gets to influence where the next dollar goes." When the platform building the tool is also competing for that dollar, transparency alone isn't sufficient. You need independent verification. And that's the step most teams skip.
The Case Study That Says the Quiet Part Out Loud
Meta published a case study on Resident (the mattress company) documenting a 15-20% lift in Facebook's share of Resident's media budget after implementing Robyn. Revenue grew 20% quarter over quarter. Those are real results, and Meta deserves credit for publishing them openly.
But read the details more carefully. Resident had been using first-click prospecting and last-click retargeting attribution before Robyn. That model systematically undercredited upper-funnel activity, which is exactly where Meta's ads tend to live. Switching from a broken attribution model to any decent MMM would probably have shifted budget toward Meta. That part isn't surprising.
The question worth asking is whether Robyn produced the best possible allocation, or one that's structurally generous to Meta's channels. Nobody published a comparison against an independent model. Maybe one exists internally. But the public case study reads like a success story for Robyn and Meta simultaneously, and separating those two outcomes is harder than it looks.
Ridge Regression Has Predictable Blind Spots
This is where it gets technical, and honestly a bit uncomfortable for anyone relying on Robyn without customizing it.
Robyn uses Ridge regression, which introduces systematic bias toward zero coefficients. In plain English: channels with less historical data get measured as less effective than they might actually be. That's a known statistical trade-off, not a conspiracy. But it has a practical consequence.
New channels, smaller channels, and channels where your spend is inconsistent will tend to look worse in Robyn than in a Bayesian model that lets you inject prior knowledge about expected performance. Meanwhile, Meta channels (where most advertisers have deep historical data and years of consistent spending) tend to look solid. Maybe better than solid.
Goal Aligned Media documented this clearly: the bias toward zero coefficients "hampers growth" by making newer opportunities look less promising than they are. If your Robyn model keeps recommending you shift budget toward established Meta campaigns and away from newer TikTok or CTV experiments, that could be the model working correctly. Or it could be a byproduct of the regression method and your data distribution. You genuinely can't tell without running independent calibration.
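You can watch the mechanism in miniature without touching Robyn at all. The sketch below is a toy, not Robyn's actual estimator (no adstock, no saturation curves, no hyperparameter search, no feature scaling), and the penalty is exaggerated on purpose so the effect shows up in two print statements. Both channels have the same true return per dollar; the smaller one simply gives the data less to work with, and ridge pulls its coefficient much harder toward zero.

```python
# Toy illustration of ridge shrinkage -- NOT Robyn's pipeline.
# Two channels with identical true returns; one is much smaller.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(42)
weeks = 104
spend_big = rng.uniform(5_000, 15_000, weeks)   # established channel, heavy consistent spend
spend_small = rng.uniform(0, 1_000, weeks)      # smaller channel, far less spend variation

TRUE_RETURN = 2.0                               # identical true revenue per dollar
sales = (50_000
         + TRUE_RETURN * spend_big
         + TRUE_RETURN * spend_small
         + rng.normal(0, 500, weeks))

X = np.column_stack([spend_big, spend_small])

ols = LinearRegression().fit(X, sales)
ridge = Ridge(alpha=2e7).fit(X, sales)          # penalty exaggerated to make shrinkage obvious

print("OLS coefficients  :", np.round(ols.coef_, 2))    # both land near 2.0
print("Ridge coefficients:", np.round(ridge.coef_, 2))  # small channel pulled well below 2.0
```

A real Robyn run tunes the penalty and transforms the inputs, so the distortion is subtler than this, but the direction of the pull is built into the method: the penalty is a tax, and channels with weak data pay more of it.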
Google's Meridian Carries the Same Structural Question
Before anyone concludes this is a Meta problem specifically: Google launched Meridian, its own open-source MMM, in February 2025. Meridian uses Bayesian inference instead of Ridge regression, which handles sparse data better on paper. But Google also sells media. Google also benefits commercially when its tool credits Google channels generously.
Meridian now supports "channel-level contribution priors," meaning you can inject your own assumptions about how each channel should perform. That's technically more flexible than Robyn's defaults. Whether most teams actually customize those priors, or just run the model out of the box and trust what comes back... from what I've seen, the defaults usually win. People are busy. Customization requires conviction, and conviction requires data you probably don't have yet (which is the whole reason you adopted MMM in the first place).
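For readers who haven't worked with priors: the sketch below shows what "injecting an assumption about how a channel should perform" means in any Bayesian MMM. It is deliberately generic, written in PyMC rather than Meridian's own API, and every number in it (the channel names, the 2x expected return, the prior widths) is an illustrative assumption, not a recommendation.

```python
# Conceptual sketch of a channel-level prior -- NOT Meridian's API.
# The informative prior keeps a sparse new channel from defaulting toward zero.
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
weeks = 104
spend_meta = rng.uniform(5_000, 15_000, weeks)   # established channel, long history
spend_ctv = np.zeros(weeks)
spend_ctv[-12:] = rng.uniform(500, 1_500, 12)    # new channel: only 12 weeks of spend

sales = 50_000 + 2.0 * spend_meta + 2.0 * spend_ctv + rng.normal(0, 500, weeks)

with pm.Model() as mmm:
    baseline = pm.Normal("baseline", mu=50_000, sigma=10_000)
    # Weakly informative prior for the channel with deep history.
    beta_meta = pm.HalfNormal("beta_meta", sigma=5.0)
    # Informative prior for the new channel: "we expect roughly 2x return,"
    # a belief that should come from a lift test, not from thin air.
    beta_ctv = pm.TruncatedNormal("beta_ctv", mu=2.0, sigma=1.0, lower=0.0)
    sigma = pm.HalfNormal("sigma", sigma=2_000)
    pm.Normal("sales",
              mu=baseline + beta_meta * spend_meta + beta_ctv * spend_ctv,
              sigma=sigma, observed=sales)
    trace = pm.sample(1_000, tune=1_000, chains=2, random_seed=7)

print("Posterior mean for the new channel:", trace.posterior["beta_ctv"].mean().item())
```

The fragile step is the mu=2.0. That number has to come from somewhere (a lift test, a prior geo experiment), or you've only moved the bias out of the penalty term and into your own assumptions.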
Every free measurement tool from a media platform carries this same tension. Code that is perfectly neutral on its face can still produce biased results; it depends entirely on the defaults.
The Real Risk Is Treating Model Output as Ground Truth
I think the 46.9% number from the EMARKETER survey is actually understating the trend. My guess is that by mid-2027, closer to 65-70% of mid-market brands will be running some form of MMM, and at least half of those will use either Robyn or Meridian because the price is right. That's a lot of budget allocation decisions flowing through tools built by the two largest digital media sellers on the planet.
The teams I'd worry about aren't the ones using Robyn. The tool is genuinely good for what it does. I'd worry about the teams treating its output as ground truth without ever testing the recommendations independently. If your data science team has never run a holdout to validate Robyn's channel-level findings, you're operating on faith. As Dan Rubel at Currys put it, MMM "feels more and more like religion... everyone acts on faith anyway."
And to be fair, this isn't entirely Meta's problem to solve. Most teams adopted MMM specifically to escape the chaos of multi-touch attribution. They wanted one model that would just tell them where to spend. The idea that you then need to audit your measurement tool with yet another measurement technique feels like it defeats the purpose. I get that frustration. But we're sort of stuck with it.
A Two-Week Holdout Test That Actually Resolves the Question
If you're running Robyn (or Meridian, or any platform-built MMM), there's a straightforward way to check whether the model is over-crediting the platform that built it.
Pick the channel Robyn identifies as your most efficient. If it's a Meta channel, run a two-week geo holdout. Kill spend entirely in 2-3 test markets and compare conversion rates against your control markets. If the holdout markets barely dip, your model was over-attributing. If conversions drop meaningfully, the model was right and you can move forward with more confidence than you had before.
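The readout is a simple difference-in-differences, sketched below under stated assumptions: a daily export with date, market, and conversions columns, and placeholder market names and dates. It's a sanity check, not a substitute for a properly designed geo experiment.

```python
# Holdout readout sketch. File name, column names, markets, and dates are
# all illustrative assumptions, not a standard.
import pandas as pd

df = pd.read_csv("daily_conversions.csv")        # assumed columns: date, market, conversions
df["date"] = pd.to_datetime(df["date"])

holdout_markets = ["denver", "portland", "kansas_city"]   # markets where spend is paused
test_start = pd.Timestamp("2025-07-07")                   # start of the two-week holdout
test_end = test_start + pd.Timedelta(weeks=2)
pre_start = test_start - pd.Timedelta(weeks=2)            # matching pre-period

window = df[(df["date"] >= pre_start) & (df["date"] < test_end)].copy()
window["group"] = window["market"].isin(holdout_markets).map({True: "holdout", False: "control"})
window["period"] = (window["date"] >= test_start).map({True: "during", False: "before"})

# Average daily conversions by group and period, then percent change within each group.
daily = window.groupby(["group", "period"])["conversions"].mean().unstack()
pct_change = (daily["during"] - daily["before"]) / daily["before"]

# Difference-in-differences: how much further did holdout markets fall than control?
implied_lift = pct_change["control"] - pct_change["holdout"]
print(daily.round(1))
print(f"Implied incremental contribution of the paused channel: {implied_lift:.1%}")
```

Compare that implied contribution against the share of conversions Robyn credits to the same channel. If the model says 30% and the holdout implies 10%, you've learned something no amount of re-running the model would have told you.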
Jones Road Beauty ran a similar exercise and found their Meta ASC campaigns were getting more credit than the incrementality data supported. Their response wasn't to abandon Meta. They reallocated roughly 15% of budget toward channels their model had been under-crediting. Revenue held. What changed was where the money went, not how much they spent.
The whole exercise takes two weeks and costs little beyond the conversions you temporarily give up in the test markets. If you're making six-figure monthly media decisions based on Robyn output you've never validated, that's cheap insurance.
What Your Q3 Plan Needs Before the Budget Locks
I think Robyn is probably the best free MMM tool available right now. It's meaningfully better than what most mid-market brands were using two years ago, and Meta's measurement science team built something real. The open-source community around it is doing serious work.
But "open-source" means you can check the code. It doesn't mean anyone on your team actually did.
And "free" stops meaning "neutral" the moment the organization funding the development is also competing for the budget the tool allocates. That's not an accusation. It's the structure of the situation, and pretending otherwise is how you end up with a measurement framework nobody questions until someone finally runs a holdout and the numbers don't match.
If your Q3 budget depends on Robyn and you haven't run a single calibration test, you don't have a measurement strategy. You have a recommendation from your media vendor with better packaging.