Most Programmatic Impressions Do Not Drive Incremental Sales. IAS and Mastercard Just Proved It.
Integral Ad Science and Mastercard announced a partnership this week that does something no measurement vendor has managed before: connecting media quality scores to anonymized purchase data in real time, while campaigns are still running.
The official branding, "IAS Sales Outcomes powered by Mastercard," sounds like something someone named in a brainstorm at 4:47 PM on a Friday. But the underlying capability is genuinely significant, and I think most of the initial coverage is underselling what it changes about how programmatic buyers make decisions.
The reason this matters is not the partnership itself. It is what happens when media quality stops being a reporting metric and starts being a bidding signal.
The system is simpler than the branding suggests
IAS measures media quality signals on every impression. Viewability, brand safety, attention metrics, format scores. Mastercard aggregates anonymized, geo-level purchase data across its network. No individual cardholder information crosses between systems; matching happens at the geographic cohort level.
The system finds correlations between quality conditions and actual spending increases in specific product categories and geographies. Those correlations then feed back into pre-bid segments that your DSP uses to prioritize inventory: impressions that share the quality characteristics of ones that previously drove real purchases get prioritized automatically.
This is not a post-campaign report that arrives six weeks later in a PDF nobody reads. It is in-flight optimization against actual sales outcomes. The feedback loop operates while your campaign is still spending.
Think of it like this: imagine you could watch every customer who walked past your billboard and then check whether they bought your product that same week. Not through a survey. Through their actual purchase history. That is roughly what this system does for programmatic display, minus the individual tracking.
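To make the mechanics concrete, here is a minimal sketch of that feedback loop in Python. Everything in it is my inference from the public description, not IAS's actual pipeline: the DMA-style geo cohorts, the composite quality_score, and the simple correlation step are all illustrative.

```python
import pandas as pd

# Illustrative data shapes, inferred from the public description of the system.
# Impression-level quality signals (the IAS side): no user IDs, only geo cohorts.
impressions = pd.DataFrame({
    "geo":           ["dma_501", "dma_501", "dma_803", "dma_803",
                      "dma_617", "dma_617", "dma_770", "dma_770"],
    "quality_score": [0.82, 0.78, 0.44, 0.38, 0.71, 0.69, 0.55, 0.51],
})

# Geo-level purchase lift (the Mastercard side): anonymized category spend,
# expressed as growth over a pre-campaign baseline.
purchases = pd.DataFrame({
    "geo":            ["dma_501", "dma_803", "dma_617", "dma_770"],
    "spend_lift_pct": [4.1, 0.6, 3.2, 1.4],
})

# Step 1: roll impression quality up to the geo cohort, matching the
# granularity of the purchase data. No individual-level join ever happens.
cohort = impressions.groupby("geo", as_index=False)["quality_score"].mean()

# Step 2: correlate cohort quality with cohort purchase lift.
joined = cohort.merge(purchases, on="geo")
corr = joined["quality_score"].corr(joined["spend_lift_pct"])
print(f"quality-to-lift correlation: {corr:.2f}")

# Step 3: turn the relationship into a pre-bid signal. Here, a crude quality
# floor; the production system presumably builds richer segments than this.
FLOOR = 0.70
eligible = impressions[impressions["quality_score"] >= FLOOR]
print(f"{len(eligible)}/{len(impressions)} impressions clear the pre-bid floor")
```

The shape is the point: a cohort-level join, a correlation, and a bid-time rule, with no individual purchase record ever touching the ad system.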
The early numbers are hard to dismiss, even with caveats
IAS released early performance data from telecommunications and retail advertisers. The usual caveats apply: these are early results, from controlled conditions, published by a company with an obvious incentive to make the numbers look impressive. But the directional signal is striking enough to pay attention to:
- Impressions with strong Quality Attention scores drove 246% higher sales lift than low-quality impressions
- High-quality impressions showed a 133% improvement in projected ROI
- At media quality thresholds above 70%, advertisers saw up to 9x incremental spend impact per 1,000 impressions
I would treat these numbers with the same skepticism you would apply to any vendor's launch data. But even if the real-world effect is half of what is reported, the gap between high-quality and low-quality inventory is enormous. And it is a gap most programmatic buyers have been guessing at rather than measuring.
What jumped out to me specifically: the 70% quality threshold as a performance inflection point. If that number holds across more verticals, it gives media buyers something they have never had before. A specific quality floor below which impressions are not worth buying, backed by purchase data rather than viewability proxies.
That 70% line could become the new viewability threshold debate, except this time there is actual sales data behind it instead of just industry convention.
This makes the cheap CPM argument uncomfortable
Programmatic buying has always had a tension at its center. One camp optimizes for reach and low CPMs. The other argues that cheap inventory is mostly junk, that you are paying less because the impression is genuinely worth less. Both sides have mostly been arguing from intuition, selected case studies, and whichever attribution model flatters their position.
What IAS and Mastercard just introduced is empirical evidence for one side of that argument. If quality impressions drive 246% more sales lift, then the advertiser paying a $3 CPM for quality inventory is not overpaying. The advertiser paying $0.80 CPM for low-quality inventory is the one wasting money, even though their cost-per-thousand looks better in the spreadsheet.
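It is worth running that arithmetic explicitly. The CPMs below are the ones from the paragraph above; the lift multipliers are the vendor-reported figures, so treat the outputs as vendor-flavored too:

```python
# Re-denominating cost in sales outcomes instead of impressions. The CPMs are
# the ones from the argument above; the lift multipliers are vendor-reported.
cheap_cpm, quality_cpm = 0.80, 3.00

# "246% higher sales lift" means 3.46x the incremental sales per impression,
# so matching one quality impression's sales takes 3.46 cheap impressions.
equiv_246 = cheap_cpm * 3.46
# At the reported up-to-9x figure above the 70% quality threshold:
equiv_9x = cheap_cpm * 9

print(f"quality buy:                     ${quality_cpm:.2f} per 1,000")
print(f"cheap buy, sales-equivalent CPM: ${equiv_246:.2f} (246% lift figure)")
print(f"cheap buy, sales-equivalent CPM: ${equiv_9x:.2f} (9x threshold figure)")
```

Under the 246% figure alone, the cheap buy's sales-equivalent cost lands around $2.77 per 1,000, which erases most of its spreadsheet advantage. Under the reported 9x figure at the 70% threshold, the cheap buy becomes the expensive one at $7.20.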
I think this will eventually force a reckoning in how procurement teams evaluate media buying. For years, the pressure has been to reduce CPMs. That makes sense when quality cannot be measured against outcomes. It makes a lot less sense when someone can show you that your cheapest impressions produce almost no incremental sales.
On paper, this should be straightforward. In practice, it is going to be messy. Procurement incentives do not change overnight, and there is a lot of institutional momentum behind cost-efficiency as the primary buying criterion. Some of those conversations will be genuinely difficult.
The measurement landscape just got more complicated, in a useful way
Media mix modeling has been making a comeback over the last two years, partly because multi-touch attribution keeps getting harder as signals disappear. But MMM operates at a macro level. It can tell you that "programmatic display drove X% of sales" but not which specific impressions within that channel were responsible.
The IAS/Mastercard system operates at the impression level. That is a fundamentally different kind of insight. It does not replace MMM or incrementality testing, but it fills a gap that has been empty since the third-party cookie started dying: real-time, impression-level quality-to-outcome correlation.
I should say plainly that this also raises questions about who controls the measurement. IAS is grading its own homework to some extent. They are defining "quality" and then proving that their definition of quality drives sales. Independent validation from advertisers running controlled holdout tests would make these numbers significantly more credible. From what I have seen in other measurement vendor launches, the early numbers almost always look better than steady-state performance. But the direction usually holds, even if the magnitude shrinks.
Specific steps before Q2, if you spend on programmatic
The system launches in the U.S. in Q2 2026. If you are spending more than $50K per month on programmatic display, this probably warrants a pilot test. A few specifics worth thinking through:
First, audit your current quality floor. Most DSPs let you set viewability and brand safety thresholds. Pull your last 90 days of delivery data and check what percentage of your impressions actually met a 70% quality score. If you are running below 50%, you have a lot of headroom, and this partnership will probably surface meaningful improvements.
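If your DSP supports log-level export with IAS scores attached, that audit is a few lines. The file and column names here are placeholders for whatever your export actually calls them:

```python
import pandas as pd

# Placeholder file and column names; substitute your DSP's log-level export.
delivery = pd.read_csv("delivery_last_90d.csv")  # one row per impression

QUALITY_FLOOR = 0.70
share_above = (delivery["quality_score"] >= QUALITY_FLOOR).mean()

print(f"impressions at or above the 70% floor: {share_above:.1%}")
if share_above < 0.50:
    print("under 50%: plenty of headroom; a quality-floor pilot is worth running")
```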
Second, talk to your IAS rep about early access to the pre-bid segments. Early adopters in these programs typically get better support and less competition on the optimized inventory before the segments become widely available.
Third, set up a controlled test. Run the same campaign with and without the quality-purchase optimization for 30 days. Measure incremental sales, not just clicks or conversions. This is the kind of test that actually resolves the cheap-CPM debate for your specific account rather than relying on industry averages.
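The cleanest version of that test at geo-level granularity is a market holdout: turn the optimization on in a random set of geos, leave it off in a matched set, and compare sales growth against each market's own baseline. A sketch of the readout, with hypothetical column names:

```python
import pandas as pd

# Hypothetical readout: one row per geo with 30-day test sales and each
# market's own pre-test baseline over a comparable window.
results = pd.read_csv("geo_test_results.csv")
# expected columns: geo, group ("test" or "control"), sales, baseline_sales

# Compare growth over baseline rather than raw sales, so pre-existing
# differences between markets don't masquerade as lift.
results["growth"] = results["sales"] / results["baseline_sales"] - 1
test_growth = results.loc[results["group"] == "test", "growth"].mean()
ctrl_growth = results.loc[results["group"] == "control", "growth"].mean()

print(f"test markets:    {test_growth:.1%} growth")
print(f"control markets: {ctrl_growth:.1%} growth")
print(f"incremental lift from the optimization: {test_growth - ctrl_growth:.1%}")
```

Thirty days across a handful of markets will be noisy. Treat a small positive result as a reason to extend the test, not as proof.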
We have been writing about how experimental ad budgets are growing as ROAS declines on major platforms. This is exactly the kind of experiment worth running with those budgets. Not because the vendor says so, but because the question "does quality actually correlate with sales?" has been unanswerable until now.
The proxy era of programmatic is ending, slowly
For years, the programmatic industry has operated on proxies. Viewability was a proxy for attention. Attention was a proxy for consideration. Consideration was a proxy for purchase intent. Each step introduced noise, and by the time you got to "did this impression drive a sale," you were basically guessing with good intentions.
Connecting media quality directly to purchase data collapses several of those proxy layers. It probably will not work perfectly. The geo-level matching will be noisier than individual-level matching would have been. Some product categories will show clearer signals than others. Seasonal effects will muddy the data in ways that take quarters to smooth out.
But the alternative is what we have been doing: bidding on impressions with no real evidence about which ones work and then debating it in quarterly business reviews where everyone brings their preferred attribution model and nobody agrees.
I would rather have imperfect purchase data than confident guessing. And honestly, I think most media buyers feel the same way, including the ones who have built optimization frameworks around low CPMs. The uncomfortable part is not learning that quality matters. Most people suspected that already. It is having the numbers to prove it to procurement, and then having to explain why next quarter's media plan costs more per impression while delivering fewer of them.