Crumbl's CTV Win Over Social Tells You More About Measurement Than Performance
A cookie company accidentally surfaced the biggest problem in CTV advertising. Not the creative. Not the targeting. The fact that almost nobody can measure whether any of this actually works.
Crumbl ran a seven-week CTV campaign with Samsung Ads and hit 213% of its app download target, racking up more than 16,000 downloads. The first week alone generated over $20,000 in attributed revenue at a 32.37% return on ad spend. Crumbl's paid media specialist, Giana Flores, said the results were "surprisingly competitive with our core social platforms."
That word "surprisingly" is doing a lot of work in that sentence. Crumbl didn't expect these numbers. And most brands running CTV right now wouldn't know if they got similar numbers, because the attribution infrastructure to tie a CTV exposure to a conversion barely exists for anyone who isn't running a Samsung case study.
How a Cookie Brand Got TV to Act Like Instagram
The campaign used Samsung's CTV-to-Mobile solution, which does something most CTV buys can't: it connects an ad shown on a Samsung Smart TV to an action taken on a mobile device in the same household. Machine learning optimized placements across Samsung TV Plus, immersive masthead units, and native screen-first environments.
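Samsung hasn't published how that matching works, but conceptually a closed loop like this is a join: resolve the TV exposure and the mobile action to the same household identifier, then credit conversions that land inside a lookback window. Here's a minimal sketch of that logic in Python; every field name and record is hypothetical, not Samsung's schema:

```python
from datetime import datetime, timedelta

# Hypothetical records. A real closed-loop system resolves household_id from
# its own device graph; these field names are illustrative only.
exposures = [
    {"household_id": "hh_001", "ts": datetime(2025, 6, 1, 20, 15)},  # ad shown on the TV
]
conversions = [
    {"household_id": "hh_001", "ts": datetime(2025, 6, 2, 12, 40), "event": "app_install"},
]

LOOKBACK = timedelta(days=7)  # exposure must precede the conversion within this window

def attribute(exposures, conversions, lookback=LOOKBACK):
    """Credit each conversion to the most recent prior exposure in the same household."""
    attributed = []
    for conv in conversions:
        candidates = [
            e for e in exposures
            if e["household_id"] == conv["household_id"]
            and timedelta(0) <= conv["ts"] - e["ts"] <= lookback
        ]
        if candidates:
            last_touch = max(candidates, key=lambda e: e["ts"])
            attributed.append((last_touch, conv))
    return attributed

print(len(attribute(exposures, conversions)))  # -> 1 attributed install
```

The hard part isn't the join. It's producing a household_id that's actually reliable, which is exactly what owning the hardware buys Samsung.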
But the creative strategy is the part that actually matters here. Crumbl didn't build traditional TV spots. They adapted the same visual language that performs on their social channels for the bigger screen. Close-up product shots, quick cuts, bright colors, minimal copy. The campaign started as an awareness test and ended up driving measurable conversion and revenue.
That approach, treating CTV creative like social creative, is showing up across the industry. Cetaphil worked with The Trade Desk on what they called a "Social CTV" format, repurposing social-style video for connected TV aimed at Gen Z. Brand lift studies showed meaningful jumps in consideration among younger viewers.
The pattern seems pretty clear at this point. Social-native creative on CTV screens performs better than traditional 30-second spots. Which makes sense if you think about it: the audience watching CTV grew up on YouTube and TikTok. They respond to content that looks like the content they chose to watch, not the content that interrupts it.
Proving Any of This Is Where It Falls Apart
Here's where I get a little less optimistic. Crumbl's results are real. But they had Samsung's proprietary attribution layer doing the heavy lifting. Samsung can connect ad exposure on their TVs to behavior on other devices because they own the hardware and the data. That's a closed-loop system most CTV buyers don't have access to.
For everyone else, the measurement landscape looks roughly like this: CTV ads play on a shared screen in a living room. The conversion happens later, on a phone or a laptop, sometimes days after exposure. Traditional click-based attribution can't track that journey. The IAB Europe CTV Working Group says cross-device attribution and fragmented identifiers still block CTV's shift from a brand channel to a performance channel.
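To make the contrast with Samsung's closed loop concrete: without a hardware-level household ID, buyers typically fall back on probabilistic signals, most commonly a shared IP address, discounted by some confidence factor. This is a sketch of the generic industry pattern, not any specific vendor's method, and every name and number in it is illustrative:

```python
from datetime import datetime, timedelta

# Probabilistic fallback: treat a shared IP address within a lookback window
# as evidence (not proof) that the exposure and conversion share a household.
exposures = [
    {"ip": "203.0.113.7", "ts": datetime(2025, 6, 1, 20, 15)},
]
conversions = [
    {"ip": "203.0.113.7", "ts": datetime(2025, 6, 3, 9, 5), "event": "purchase"},
]

LOOKBACK = timedelta(days=14)
MATCH_WEIGHT = 0.6  # confidence discount; carrier-grade NAT, public Wi-Fi,
                    # and VPNs all put unrelated homes behind one IP

def expected_attributed(exposures, conversions):
    """Return expected attributed conversions, discounted by match confidence."""
    expected = 0.0
    for conv in conversions:
        matched = any(
            e["ip"] == conv["ip"]
            and timedelta(0) <= conv["ts"] - e["ts"] <= LOOKBACK
            for e in exposures
        )
        if matched:
            expected += MATCH_WEIGHT  # count a fractional conversion, not a whole one
    return expected

print(expected_attributed(exposures, conversions))  # -> 0.6
```

Same join, fuzzier key. That 0.6 is the whole problem in miniature: the conversion probably happened because of the ad, but "probably" is a hard word to put in a board deck.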
The numbers are kind of stark. CTV's share of ad budgets doubled from 14% in 2023 to 28% in 2025. But 48% of advertisers cite measuring incremental reach as their top challenge. 70% of European advertisers report measurement frustration. And there's an estimated $1 billion in annual wasted CTV spend just from ads delivered to TVs that were physically turned off.
One stat that really stuck with me: when CTV shows up in the same app install journey as search or social, it precedes search 96% of the time and social 94% of the time. Yet US brands allocate just 7.4% of ad budgets to CTV despite it commanding 17.9% of media attention. The gap isn't that CTV doesn't work. It's that most teams can't prove it works, so they underinvest.
Social Budgets Are Migrating. Most of That Money Is Moving Blind.
Performance marketers are reallocating 15-20% of their social budgets toward CTV, and honestly, I get why. Social CPMs keep climbing. Algorithmic changes keep flattening reach. Three out of four performance marketers report declining social media efficacy.
CTV reaches 88% of Americans compared to Facebook's 54%, according to Samsung Ads. The screen is bigger, the attention quality is higher, and the emotional lift from video on a TV still outperforms the same video in a scrolling phone feed.
But when you move budget from social (where Meta and Google give you deterministic click-level attribution, however flawed) to CTV (where the best you typically get is probabilistic household matching), you're trading a measurement system you understand for one you don't. And if you can't close the attribution loop, you're essentially running on faith that the brand lift numbers justify the spend.
That's not necessarily wrong. Brand building matters, and not everything needs to be a last-click conversion. I think the recent data on creative format performance suggests that the format of the ad matters at least as much as the channel it runs on. But if you're a performance marketer shifting a meaningful chunk of social budget, you should know exactly what you're giving up on the measurement side before you do it.
How to Structure a CTV Test That Survives the Measurement Gap
I wouldn't tell anyone to avoid CTV in 2026. The channel clearly works for the right creative and the right audience. What I'd say is this: structure your test so you can actually read the results even with imperfect attribution.
Three things that seem to matter from what I've seen:
Run a geographic holdout. Pick one region for the CTV campaign and a comparable region without it. Measure the difference in site traffic, app downloads, or whatever your primary conversion event is. It's crude, but it gives you something concrete when finance asks for proof. And they will ask. (A sketch of what this readout can look like, attribution window included, follows the list.)
Extend your attribution window. CTV consumer journeys span days, not hours. If you're measuring with the same 1-day click, 7-day view window you use for Meta, you're missing the real impact. Push it to 14-30 days. The IAB data suggests short attribution windows miss the majority of CTV-influenced conversions.
Use social-native creative, not TV creative. This is probably the single most underrated lever. Crumbl's approach worked because they didn't build "TV ads." They built social ads for a bigger screen. 15-30 second spots, visual-first, minimal copy. The data consistently shows social-style creative outperforms traditional TV spots on CTV, especially with audiences under 40. If you're reusing your cable TV :30 on streaming, you're spending money to learn that cable TV creative doesn't work on CTV. Which is not a useful lesson.
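To make the first two points concrete, here's a minimal sketch of the geo holdout readout with an extended attribution window baked in. Every number is a placeholder; substitute your own daily conversion counts per region:

```python
from statistics import mean

ATTRIBUTION_WINDOW_DAYS = 21  # keep counting this long after the flight ends,
                              # not the 1-day/7-day default you'd use for Meta

# Daily counts of your primary conversion event (e.g., app downloads),
# collected through the flight plus the attribution window.
test_region    = [310, 295, 342, 360, 355, 390, 410]  # CTV campaign live
control_region = [300, 290, 305, 298, 310, 302, 295]  # matched region, no CTV

pre_period_ratio = 1.0  # test/control ratio before launch; 1.0 = well-matched geos

expected = mean(control_region) * pre_period_ratio  # counterfactual for the test geo
incremental = mean(test_region) - expected          # daily conversions CTV added
lift_pct = 100 * incremental / expected

print(f"+{incremental:.1f} conversions/day ({lift_pct:.1f}% lift), "
      f"read over a {ATTRIBUTION_WINDOW_DAYS}-day post-flight window")
```

It's back-of-the-envelope on purpose. A proper incrementality study adds significance testing and better geo matching, but even this version gives finance a number with a control group behind it, which is more than most CTV dashboards offer.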
One benchmark to calibrate against: Crumbl's 32.37% ROAS in the first week is competitive with mid-funnel Meta campaigns. Not spectacular, but also not the kind of number you'd expect from a channel that supposedly "only does brand." If your CTV test can't get within range of that with social creative, the issue is probably creative quality, not the channel itself.
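The ROAS arithmetic itself is trivial, attributed revenue over spend, expressed here as a percentage to match how the Crumbl figure was reported. The spend and revenue below are made-up placeholders for calibrating your own test, not figures from the Crumbl campaign:

```python
def roas_pct(attributed_revenue: float, ad_spend: float) -> float:
    """Return on ad spend, expressed as a percentage of spend."""
    return 100 * attributed_revenue / ad_spend

# Hypothetical first-week test: $5,000 of CTV spend, $1,600 attributed revenue.
print(f"{roas_pct(1_600, 5_000):.2f}% ROAS")  # -> 32.00% ROAS, near the benchmark
```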
CTV Has a Performance Problem Wearing a Measurement Disguise
The frustrating thing about CTV right now is that the channel probably works better than most brands give it credit for. The Crumbl results aren't an anomaly. They're what happens when someone bothers to measure CTV with performance-grade attribution. Samsung, Roku, and a handful of walled-garden CTV platforms can provide that level of measurement. Everyone else is doing incrementality studies, geographic holdouts, and media mix modeling. Which is fine, but it's also the same measurement toolbox we had years ago.
I don't think that means you should wait. The social-to-CTV budget migration is probably directionally correct, especially as social efficacy continues to flatten. But go in knowing you're partly trading measurement certainty for audience quality. And build your tests to produce readable data even with imperfect attribution.
The brands pulling useful numbers out of CTV aren't necessarily the ones with the largest budgets. From what I can see, they're the ones who designed their campaigns around producing measurable results in the first place. Crumbl didn't stumble into 213% of their download target. They built the test in a way that let them actually see it.