Jones Road Beauty’s Best Channel Was Overspending on the Wrong Customers. A New Kind of MMM Found It.

Jones Road Beauty ran geo-split incrementality tests and found their top channel was inflating results.

By Notice Me Senpai Editorial

Jones Road Beauty just discovered that the channel eating the largest share of their budget was mostly finding people who would have bought anyway. That’s the kind of finding that makes a CMO go quiet in a meeting. And it came from a tool that most DTC brands have historically considered too expensive, too slow, and too hard to trust: a marketing mix model.

Except this one is built differently. Haus calls it Causal MMM, and the “causal” part is doing a lot of work. Instead of correlating your spend to your outcomes and hoping the model gets it right, it anchors the model in real experiment data. Geo-split incrementality tests. Actual holdouts. Measured lift. The kind of evidence most brands only run once a year, if that.

Cody Plofker, who holds both the CMO and CEO titles at Jones Road, partnered with Haus to build the brand’s measurement system this way, and the results are worth reading carefully. Not because the numbers are flashy, but because the first real finding was uncomfortable.

Meta ASC was finding the wrong customers

This is the part that probably matters most if you’re running Advantage+ Shopping campaigns, which, at this point, is basically everyone in DTC.

Jones Road found that Meta ASC was overspending on returning customers, not acquiring new ones. On paper, ASC looked like the top performer. The ROAS numbers were strong. The dashboard was green. But when they layered in incrementality data, the picture shifted: a big chunk of those conversions were people who already knew the brand and would have purchased regardless.

I think this is one of those findings that a lot of brands quietly suspect but can’t prove. Your ASC campaigns look efficient because Meta’s algorithm optimizes for the easiest conversion, and a repeat buyer who’s already sitting on a retargeting list is always going to be easier to convert than a cold prospect. The reported ROAS is real. The incremental value is not what you think it is.
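
A back-of-envelope version of that gap, with entirely invented numbers rather than Jones Road’s actuals, looks something like this:

```python
# Hypothetical arithmetic only -- these numbers are invented, not Jones Road's.
spend = 100_000
attributed_revenue = 400_000                      # what the platform dashboard credits
reported_roas = attributed_revenue / spend        # 4.0x -- looks like a winner

# Suppose an incrementality test finds that 60% of those buyers would have
# purchased anyway (returning customers, brand searchers, the retargeting pool).
incremental_share = 0.40
incremental_revenue = attributed_revenue * incremental_share
incremental_roas = incremental_revenue / spend    # 1.6x -- a very different decision

print(f"Reported ROAS: {reported_roas:.1f}x")
print(f"Incremental ROAS: {incremental_roas:.1f}x")
```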

Jones Road’s response was to pull back ASC spend and redirect toward mid-funnel campaigns. Not a dramatic overhaul, just a meaningful reallocation. And honestly, it took a certain amount of nerve. You’re looking at a channel that’s showing great numbers and deciding to spend less on it because a model told you those numbers are partially an illusion. That requires trust in the measurement. Which is sort of the whole point.

YouTube was doing more than anyone gave it credit for

The other finding went in the opposite direction. Jones Road ran a 3-cell incrementality test on YouTube: one-third of geos got standard spend, one-third got double spend, and one-third was a complete holdout with zero YouTube ads.

The results: at normal spend, YouTube drove 1.82X more orders than click-based attribution showed. At doubled spend, it generated 2.26X more new customer orders. Plofker put it plainly: “We found that not only has our YouTube spend been quite profitable, but ramping up is likely going to be a very good idea.”
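
If you want to see where a multiplier like 1.82X comes from, here’s a minimal sketch of the readout arithmetic. The figures and the naive per-geo counterfactual are invented for illustration; real geo tests usually build the counterfactual with matched markets or synthetic controls rather than a raw holdout average:

```python
# Toy geo-holdout readout -- all numbers invented for illustration.

treatment_orders = 12_400        # orders in geos that kept running YouTube spend
holdout_orders_per_geo = 105     # average orders per geo with zero YouTube ads
n_treatment_geos = 100

# Counterfactual: what the treated geos would have done with no YouTube spend.
expected_orders_without_ads = holdout_orders_per_geo * n_treatment_geos   # 10,500

# Orders the channel actually caused.
incremental_orders = treatment_orders - expected_orders_without_ads       # 1,900

# Orders click-based attribution credited to YouTube over the same window.
click_attributed_orders = 1_050

lift_vs_clicks = incremental_orders / click_attributed_orders
print(f"Incremental orders: {incremental_orders}")
print(f"Incrementality vs. click attribution: {lift_vs_clicks:.2f}x")     # ~1.81x here
```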

This probably isn’t surprising if you’ve spent any time thinking about how YouTube actually works in a media mix. People watch a video, they don’t click. They Google the brand three days later, or they see a Meta retargeting ad and convert there. Click-based attribution gives YouTube almost no credit for that entire chain. Everyone sort of knows this. But “sort of knowing” and “having experiment data that proves it” are very different things when you’re asking a CFO to move six figures into a channel that doesn’t show up in last-click reports.

And to be fair, the YouTube finding alone isn’t what makes this interesting. Brands have been saying YouTube is undervalued by attribution models for years. What’s different here is the mechanism for proving it is now accessible to a brand spending in the low millions, not just to Unilever and Procter & Gamble.

Why DTC brands never trusted traditional MMM

Traditional marketing mix modeling has a reputation problem, and it’s mostly deserved. The classic version costs hundreds of thousands of dollars, takes months to deliver results, and runs on correlational data that can be influenced by seasonality, competitive moves, or a dozen other variables the model doesn’t account for. A survey from Haus found that DTC brands ranked MMM as one of their least-trusted measurement tools.

The “causal” approach tries to fix this by requiring that the model be anchored in experiment data before it makes recommendations. You run geo-split tests on your key channels, measure incremental lift directly, and then feed those findings into the model as calibration points. The model still does the interpolation between experiments (you can’t run holdouts on every channel simultaneously; that would be insane), but it’s constrained by real-world evidence at the anchor points.
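
To make “calibration points” a bit more concrete: one common pattern is to treat the experiment’s measured ROI as an informative prior, or equivalently a weighted pseudo-observation, that pins a channel’s coefficient in an otherwise ordinary regression. The sketch below uses invented data and a deliberately bare-bones linear model; it illustrates the idea, not Haus’s actual implementation.

```python
import numpy as np

# Toy weekly data (all invented): spend for two channels and observed revenue.
rng = np.random.default_rng(0)
weeks = 104
spend = rng.uniform(10_000, 60_000, size=(weeks, 2))   # col 0 = YouTube, col 1 = Meta
revenue = 50_000 + spend @ np.array([2.2, 1.1]) + rng.normal(0, 15_000, weeks)

# Calibration point from a geo-split test: measured YouTube ROI and its std. error.
exp_roi, exp_se = 2.1, 0.3
noise_sd = 15_000   # assumed revenue noise; sets how strongly the anchor binds

# Design matrix: intercept plus one coefficient (marginal ROI) per channel.
X = np.column_stack([np.ones(weeks), spend])
y = revenue.copy()

# The experiment becomes a pseudo-observation pinning the YouTube coefficient.
# Weighting it by noise_sd / exp_se makes the least-squares fit equivalent to a
# Gaussian prior roi_youtube ~ Normal(exp_roi, exp_se) on that coefficient.
w = noise_sd / exp_se
X = np.vstack([X, [0.0, w, 0.0]])
y = np.append(y, w * exp_roi)

base, roi_youtube, roi_meta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"Calibrated YouTube ROI ~ {roi_youtube:.2f}, Meta ROI ~ {roi_meta:.2f}")
```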

From what I’ve seen, this seems to address the main objection, which was always “how do I know the model isn’t just telling me a plausible story?” When the model’s recommendations come with a built-in experiment roadmap (here’s the test that would confirm or deny this finding), it turns the output from a verdict into a hypothesis. As Plofker framed it, recommendations become “the next experiment on the road map, rather than a blind bet.” That’s a meaningful shift in how you use the tool.

The $1M-$5M brand is the real story here

If you’re spending $50M a year on media, you already have an MMM. Probably a couple of them. You have a data science team that maintains them and a consulting firm that argues about them. This isn’t news for you.

The story is for the brand spending $1M to $5M annually on paid media. That’s the range where measurement gaps hurt the most, because every dollar matters more and you don’t have the budget to absorb a six-figure misallocation for a quarter while you figure it out. These brands have been flying on platform-reported ROAS and gut instinct, supplemented by the occasional incrementality test when someone could make the business case for it.

Lightweight, experiment-anchored MMM at an accessible price point changes that equation. I don’t want to oversell what’s still a relatively new approach (Haus isn’t the only company working on this, and the methodology will keep evolving), but the direction seems right. Measurement tools that require proof before making claims are better than ones that don’t. That feels like a low bar, but it’s one that most of the industry’s measurement stack doesn’t currently clear.

If you’re running DTC paid media and you’ve been relying on platform dashboards for channel allocation, this is probably the year to run at least one proper incrementality test. Not because Haus specifically is the answer for every brand (I’d want to see more case studies across different verticals and spend levels before I said that), but because the gap between what platforms report and what’s actually incremental is consistently wider than most teams assume. Jones Road found a 1.82X gap on YouTube and a meaningful overcounting of value on Meta ASC. Those aren’t rounding errors. Those are budget-changing numbers.

The uncomfortable part nobody wants to talk about

There’s an awkward truth in this story that doesn’t get enough attention. If your measurement tool’s first real finding is that your best-performing channel’s numbers were partially inflated, that’s a good measurement tool. It’s also an extremely uncomfortable conversation with whatever team has been running that channel and reporting those numbers up the chain.

I think most brands will eventually get here. The pressure from CFOs to prove incrementality is only going in one direction. But the transition period, where you go from trusting platform data to trusting experiment data, is going to be messy for teams that have built their reporting (and sometimes their performance reviews) around ROAS numbers that are about to get re-examined.

Plofker deserves some credit for being public about the findings. It would have been easy to quietly adjust the media mix and never mention that Meta ASC wasn’t doing what the dashboard said. Going on the record about it is the kind of thing that moves the industry conversation forward, even if it makes a few vendor relationships slightly awkward.

Anyway, if the measurement tool you’re using has never told you something you didn’t want to hear, that’s probably not because your media plan is perfect. It’s more likely because the tool isn’t measuring the right thing.