CTV Ad Fraud in 2026: A Media Buyer's Audit Guide
Connected TV advertising is forecast to hit roughly $38 billion in US spend this year. That makes it the fastest-growing video channel by a comfortable margin. It also makes it the most attractive target fraud operators have seen since the early days of programmatic display.
Here's what I find genuinely unsettling: depending on which measurement vendor you ask, somewhere between 1% and 19% of your open programmatic CTV impressions are invalid traffic. That's not a rounding error. That's the difference between "manageable cost of doing business" and "$7 billion disappearing into server farms every year."
Pixalate's Q4 2025 benchmarks put the US CTV invalid traffic rate at 19%. Globally, 21%. DoubleVerify's 2025 Global Insights report says more than 25% of CTV impressions fail their fraud-free criteria. Meanwhile, Comscore published a blog post titled "Fearmongering in CTV Advertising" arguing the post-bid IVT rate is just over 1%.
Someone is very wrong. Or, more likely, everyone is measuring something slightly different and nobody has agreed on the ruler. I'll come back to why this discrepancy exists. But first, it helps to understand what CTV fraud actually looks like in practice, because it has gotten considerably more sophisticated than the early bot-traffic stuff.
The Fraud Schemes Have Names (and Real Dollar Figures)
If you've been buying CTV for any length of time, you've probably heard vague warnings about "invalid traffic" without much specificity. The specificity exists. The fraud operations have names, documented methodologies, and estimated financial damage.
ICEBUCKET was the largest known CTV bot attack when HUMAN Security (formerly White Ops) exposed it. At its peak, it generated 1.9 billion ad requests per day, impersonated over 2 million people across 30+ countries, and at one point represented roughly 28% of all programmatic CTV traffic. That number still rattles me a bit. Nearly a third of the supply was fake.
CycloneBot, uncovered by DoubleVerify, operated differently. It generated 250 million fake ad requests daily by spoofing 1.5 million devices, costing advertisers an estimated $7.5 million per month. The bot mimicked realistic viewing patterns well enough to evade standard detection for months.
SneakyTerra was the first scheme to hijack real CTV device sessions through server-side ad insertion (SSAI), spoofing 2+ million devices daily and siphoning over $5 million per month. What made it novel: it didn't create fake devices. It hijacked legitimate ones, which makes the traffic look authentic to most verification tools.
StreamScam spoofed 28 million households and stole an estimated $14.5 million across roughly 3,600 app IDs and 3,400 device models. SmokeScreen used screensaver apps (of all things) to generate 10 million fake requests per day from about 10,000 devices, running up $6 million per month.
I mention these names because most CTV fraud explainers talk about "types of fraud" in abstract terms. Knowing the names matters. It means you can ask your verification vendor: "Would your system have caught CycloneBot?" That's a more useful question than "do you have fraud detection?"
Why CTV Fraud Numbers Are So Wildly Inconsistent
Back to that 1% vs 19% discrepancy. It's not that one vendor is lying and the other is honest. The gap comes from three things.
First, pre-bid vs post-bid measurement. Comscore's 1% figure reflects post-bid IVT, after verification filtering has already removed the worst traffic. Pixalate's 19% measures all open programmatic traffic before filtering. Both are technically accurate. They're just measuring different stages of the supply chain.
Second, open exchange vs. private marketplace (PMP) inventory. If you buy exclusively through premium PMPs with tight publisher relationships, your fraud rate is genuinely lower. Madhive's data shows protected campaigns see a 0.6% fraud rate versus 11.2% for unprotected ones. That's an 18x difference. But most advertisers aren't buying 100% PMP. They're running a mix, and the open exchange portion drags the average up considerably.
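To make that blend concrete, here is the weighted-average math as a short sketch. The fraud rates are the Madhive figures cited above; the 60/40 spend split is a made-up example, not a benchmark.

```python
# Estimate blended IVT exposure across a mixed PMP / open-exchange buy.
# Rates are the Madhive figures cited in the text; the split is illustrative.

def blended_ivt_rate(mix: dict[str, tuple[float, float]]) -> float:
    """mix maps supply source -> (share of spend, IVT rate)."""
    total_share = sum(share for share, _ in mix.values())
    assert abs(total_share - 1.0) < 1e-9, "spend shares must sum to 1"
    return sum(share * rate for share, rate in mix.values())

campaign = {
    "protected_pmp": (0.60, 0.006),   # 0.6% fraud rate (protected)
    "open_exchange": (0.40, 0.112),   # 11.2% fraud rate (unprotected)
}

rate = blended_ivt_rate(campaign)
print(f"Blended IVT rate: {rate:.1%}")  # 0.6*0.006 + 0.4*0.112 = 4.8%
```

Note what the example shows: even a campaign that is majority "safe" PMP inventory can carry a blended fraud rate several times the PMP-only figure your vendor quotes.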
Third, what counts as "invalid." General invalid traffic (GIVT), such as data center traffic and known bots, is relatively easy to filter. Sophisticated invalid traffic (SIVT), such as CycloneBot's device-mimicking bots, requires behavioral analysis and is much harder to catch consistently. Different vendors draw the line between the two differently.
The practical takeaway: if your DSP or verification vendor quotes you a sub-2% IVT rate, ask which measurement stage, which inventory type, and whether they're counting SIVT. If they can't answer all three, the number is probably more comforting than it is accurate. We covered a similar measurement gap problem with programmatic display in the IAS and Mastercard study, and the pattern is the same: the reported numbers often look better than reality because of how they're filtered.
Your TV Was Off. The Ad Still Ran.
This deserves its own section because it's not traditional bot fraud, and it's not a small number.
A GroupM and iSpot study estimated that roughly $1 billion per year in CTV ad spend goes to televisions that are off. Approximately 8-10% of all CTV ad impressions go to screens nobody is watching. Not bots. Not spoofed devices. Real TVs in real homes that happen to be off while the streaming app is still running in the background.
This is a category of waste that most verification tools don't catch because, technically, the impression was delivered to a legitimate device. The VAST tag fired. The pixels loaded. Nobody was home.
From what I've seen, this problem is worse on certain smart TV platforms where the OS keeps apps alive in the background more aggressively. It's also worse with FAST (free ad-supported streaming) channels that auto-play content indefinitely. And it's a problem that will probably get harder to measure, not easier, as CTV platforms optimize for "engagement time" metrics that don't distinguish between active and passive viewing.
If you're spending six figures monthly on CTV, you should be asking your vendor whether they have any methodology for detecting "TV off" impressions. Most don't. Some are starting to use audio return channel data and power state signals, but it's early.
Some Devices Are Dramatically Worse Than Others
This is the part I think most media buyers haven't internalized yet. The fraud rate on your CTV campaign depends enormously on which devices receive your impressions.
Pixalate's Q2 2024 benchmarks broke it down by device manufacturer:
- Vizio: 40% invalid traffic rate
- Xiaomi: 29%
- Samsung Smart TV: 21%
- Roku: 8%
That's a 5x spread between the best and worst major platforms. And yet, most CTV campaign setups don't differentiate between device manufacturers at all. Your DSP treats a Vizio impression and a Roku impression as interchangeable inventory. They are not.
The gap exists partly because of app ecosystem differences. Vizio's SmartCast platform historically had a more open app sideloading environment, which fraudsters exploited. Roku's walled garden approach, with a tighter app review process, filters out more questionable publishers before they ever serve an ad.
This data is from Q2 2024, so the exact numbers may have shifted somewhat. But the structural differences between platforms persist. If your campaign reporting breaks down by device and you're seeing disproportionate spend on high-IVT platforms, that's worth flagging with your buyer.
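If your reporting does break spend down by device, the exposure math is trivial to script. A sketch using the Pixalate Q2 2024 rates above; the spend figures are hypothetical.

```python
# Rough estimate of spend exposed to IVT by device platform, using the
# Pixalate Q2 2024 rates cited in the text. Spend figures are hypothetical.

PIXALATE_Q2_2024_IVT = {
    "Vizio": 0.40,
    "Xiaomi": 0.29,
    "Samsung": 0.21,
    "Roku": 0.08,
}

def ivt_exposure(spend_by_device: dict[str, float]) -> dict[str, float]:
    """Return estimated dollars at risk per platform (0 if rate unknown)."""
    return {
        device: spend * PIXALATE_Q2_2024_IVT.get(device, 0.0)
        for device, spend in spend_by_device.items()
    }

spend = {"Vizio": 25_000.0, "Roku": 50_000.0, "Samsung": 25_000.0}
for device, at_risk in ivt_exposure(spend).items():
    print(f"{device}: ${at_risk:,.0f} estimated at risk")
```

The point of running this against your own numbers: a campaign weighted toward high-IVT platforms can have more dollars at risk on its smallest line item than on its largest.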
SSAI Makes Everything Harder to Verify
Server-side ad insertion (SSAI) is the technical layer that stitches ads into CTV content streams on the server side, as opposed to the client (device) side. It's how most CTV advertising is delivered. It's also a significant fraud vector.
According to Pixalate's analysis, IVT rates are 110% higher when SSAI is used, and 26% of all SSAI traffic used for CTV has been classified as invalid. SneakyTerra specifically exploited SSAI to hijack legitimate sessions, which is why verification vendors had trouble catching it initially.
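If your impression logs carry an SSAI flag and an IVT flag, checking whether this pattern holds in your own data is a few lines of grouping. The field names `ssai` and `ivt_flag` are assumptions; map them to whatever your DSP's log schema actually uses.

```python
# Compare IVT rates for SSAI vs client-side delivered impressions from
# log-level data. Field names (ssai, ivt_flag) are assumed, not standard.

from collections import defaultdict

def ivt_rate_by_ssai(impressions: list[dict]) -> dict[bool, float]:
    """Return {True: ivt_rate_for_ssai, False: ivt_rate_for_client_side}."""
    totals = defaultdict(int)
    invalid = defaultdict(int)
    for imp in impressions:
        key = bool(imp["ssai"])
        totals[key] += 1
        invalid[key] += int(imp["ivt_flag"])
    return {key: invalid[key] / totals[key] for key in totals}

logs = [
    {"ssai": True, "ivt_flag": 1}, {"ssai": True, "ivt_flag": 0},
    {"ssai": False, "ivt_flag": 0}, {"ssai": False, "ivt_flag": 0},
]
print(ivt_rate_by_ssai(logs))  # {True: 0.5, False: 0.0}
```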
The problem with SSAI from a fraud-detection perspective: because the ad is stitched into the stream server-side, verification tools lose visibility into the actual device. They're often measuring the server, not the TV. That's like checking whether a letter was mailed by verifying the post office exists, without confirming anyone lives at the destination address.
IAB Tech Lab launched support for device attestation through the OM SDK in November 2025, with Apple, Amazon, and Google as initial supporters. This is potentially the most important development in CTV fraud prevention in years, and almost nobody is talking about it. Device attestation lets the actual CTV device cryptographically prove it is what it claims to be, bypassing the SSAI opacity problem entirely. But adoption is early. If your vendor isn't mentioning OM SDK device attestation yet, they're behind.
How to Actually Audit Your CTV Campaigns
I'll skip the generic "use a verification vendor" advice. You already know that. Here's a more specific checklist for your next CTV vendor or DSP conversation.
1. Request log-level data. Ask your DSP for impression-level logs including device ID, app bundle ID, SSAI status, and timestamp. If they say they can't provide it, that tells you something about the relationship.
2. Check app-ads.txt adoption. app-ads.txt is the CTV equivalent of ads.txt for web. It lets publishers declare authorized resellers. According to Pixalate's March 2026 data, adoption has improved but large chunks of CTV inventory still don't have it. Ask what percentage of your impressions were served on inventory with valid app-ads.txt entries.
3. Ask about SIVT specifically. "What's our IVT rate?" is an incomplete question. "What's our SIVT rate, and how do you detect it?" is better. General IVT filtering catches the easy stuff. Sophisticated schemes like CycloneBot require behavioral analysis that not every vendor runs.
4. Break down performance by device manufacturer. If your reporting shows a disproportionate share of impressions on Vizio or off-brand Android TV devices, and the CPA from those impressions is meaningfully worse, you may have a fraud concentration problem.
5. Audit your FAST channel mix. FAST apps are the fastest-growing CTV inventory source, and also the least verified. Know which FAST channels you're serving on. If your vendor can't tell you, ask why.
6. Demand makegood and refund policies in writing. When fraud is detected post-campaign, what happens? Most CTV vendors are vague on this. Get specific: what's the threshold, what's the remediation timeline, and what form does compensation take? Similar measurement accountability gaps have shown up in other emerging ad channels too.
7. Ask about "TV off" detection. This is still a frontier area, but it separates the vendors paying attention from the ones coasting on yesterday's methodology.
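Several of these checks can be scripted once you have the log-level data from item 1. A minimal sketch of that audit pass; every field name and app bundle below is hypothetical, and real logs will need cleaning and schema mapping before anything like this runs.

```python
# Minimal log-level audit sketch covering checklist items 1-5.
# All field names (device_make, app_bundle, ssai, ivt_flag) and app
# bundle IDs are hypothetical -- adapt to your DSP's actual log schema.

from collections import Counter

def audit(impressions: list[dict], apps_with_ads_txt: set[str]) -> dict:
    """Summarize device mix, app-ads.txt gaps, SSAI share, and IVT rate."""
    n = len(impressions)
    by_device = Counter(imp["device_make"] for imp in impressions)
    missing_ads_txt = Counter(
        imp["app_bundle"] for imp in impressions
        if imp["app_bundle"] not in apps_with_ads_txt
    )
    return {
        "impressions_by_device": dict(by_device),
        "top_apps_missing_ads_txt": missing_ads_txt.most_common(10),
        "ssai_share": sum(bool(imp["ssai"]) for imp in impressions) / n,
        "overall_ivt_rate": sum(int(imp["ivt_flag"]) for imp in impressions) / n,
    }

sample = [
    {"device_make": "Roku", "app_bundle": "com.example.good",
     "ssai": 1, "ivt_flag": 0},
    {"device_make": "Vizio", "app_bundle": "com.example.shady",
     "ssai": 1, "ivt_flag": 1},
]
report = audit(sample, apps_with_ads_txt={"com.example.good"})
print(report["top_apps_missing_ads_txt"])  # [('com.example.shady', 1)]
```

None of this replaces a verification vendor. It does mean that when a vendor quotes you a number, you have an independent sanity check.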
The Counterargument Deserves a Serious Answer
Comscore and several CTV platforms argue that the fraud panic is overblown. Their position: post-bid IVT rates are low, CTV environments are inherently more controlled than mobile or web, and aggressive fraud reporting by verification vendors is partly self-serving. They sell the solution to the problem they're quantifying.
There's some truth in this. If you're buying exclusively through premium publishers like Hulu, Peacock, or Disney+ with direct insertion orders, your fraud exposure is genuinely minimal. The 19% headline numbers are driven primarily by open exchange and resold inventory.
The honest answer is that both things are true simultaneously. CTV is a premium environment when bought carefully, and a fraud-riddled mess when bought lazily. The difference between those two outcomes is not budget size. It's whether someone on the buying side is actually asking the questions listed above, checking the log-level data, and holding vendors accountable. Attribution already requires careful scrutiny, as the B2B measurement challenges we've covered before illustrated. CTV fraud just adds another layer of noise to an already imperfect signal.
And to be fair, the verification vendors do have a financial incentive to publish alarming numbers. That doesn't mean the numbers are wrong, but it does mean you should treat any single vendor's data as a perspective, not gospel. Cross-reference. Ask questions. Pull your own campaign data and look at it with fresh eyes.
FAQ: Connected TV Ad Fraud
How much of my CTV budget is going to fraud?
It depends almost entirely on where you buy. Open programmatic exchange: potentially 15-20% invalid traffic based on Pixalate Q4 2025 data. Premium PMP or direct-sold inventory: likely under 2%. Most advertisers run a blend, so the real number is somewhere in between. The only way to know your specific exposure is to audit your impression logs by supply source.
Which CTV devices have the highest fraud rates?
Based on Pixalate's Q2 2024 benchmarks, Vizio had the highest IVT rate at 40%, followed by Xiaomi at 29%, Samsung at 21%, and Roku at 8%. These reflect platform-level averages. Your specific campaigns may differ depending on inventory source, but the structural differences between platforms are real and persistent.
Can CTV ads actually run when my TV is off?
Yes. GroupM and iSpot estimated approximately $1 billion annually in CTV ads served to TVs that are powered off while streaming apps remain active in the background. This affects roughly 8-10% of CTV impressions and is not detected by most standard verification tools.
Is buying through PMPs actually safer than open exchange?
Significantly. Madhive's analysis found protected campaigns saw 0.6% fraud rates versus 11.2% for unprotected, an 18x difference. PMPs aren't fraud-proof, but the publisher accountability and supply chain transparency are materially better than what you get on the open exchange.
What is OM SDK device attestation?
It's a new standard from IAB Tech Lab (launched November 2025) that lets CTV devices cryptographically verify their identity. Apple, Amazon, and Google are initial supporters. If widely adopted, it would undermine the core spoofing mechanic behind schemes like CycloneBot and SneakyTerra by making it much harder for a server to impersonate a real TV.
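To see why attestation defeats spoofing, here is a toy sketch of the challenge-response idea behind it. This is not the OM SDK wire format: real attestation uses asymmetric device keys and certificate chains, but a shared-secret HMAC keeps the illustration self-contained.

```python
# Conceptual sketch of device attestation, NOT the OM SDK protocol.
# Real attestation uses asymmetric keys and cert chains; HMAC with a
# shared secret illustrates the same core idea: prove key possession.

import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # provisioned to the genuine device

def device_sign(nonce: bytes, key: bytes = DEVICE_KEY) -> bytes:
    # The genuine device proves possession of its key by signing the nonce.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verifier_check(nonce: bytes, signature: bytes) -> bool:
    # The verifier recomputes the signature and compares in constant time.
    expected = hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

nonce = secrets.token_bytes(16)  # fresh challenge per ad request
print(verifier_check(nonce, device_sign(nonce)))  # genuine device: True
print(verifier_check(nonce, b"\x00" * 32))        # spoofing server: False
```

A server farm pretending to be 1.5 million TVs, CycloneBot-style, fails this check for every device whose key it doesn't hold, no matter how realistic its viewing patterns look.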
Where This Goes From Here
IAB's device attestation via OM SDK is the development I'd watch most closely. If Apple, Amazon, and Google fully support cryptographic device verification at scale, it undermines the core mechanic behind most CTV spoofing schemes. Adoption will take time. These standards always do. But the technical foundation for making CTV fraud dramatically harder landed in late 2025, and most of the industry hasn't noticed yet.
The more uncomfortable trajectory: CTV ad spend is expected to surpass traditional TV at roughly $46 billion by 2028. Fraud operators know that. Every dollar of new spend that enters CTV without corresponding improvements in verification is additional surface area for exploitation.
I don't think the answer is pulling budget out of CTV. The channel works. The audiences are there. But the advertisers who treat CTV like a premium buy and then run it through open exchange without log-level auditing are, in practice, the ones funding these operations. If 2026 is the year your CTV budget crosses into serious money, it's probably also the year to start asking your DSP the uncomfortable questions. The vendors who can answer them clearly are worth keeping. The ones who can't are telling you something too.