OpenAI Opened ChatGPT Ads to U.S. Without a Single Measurement Partner Signed

OpenAI shipped a self-serve ads manager and dropped the spend floor before naming a single third-party measurement vendor.

OpenAI opened its ChatGPT ads manager to U.S. advertisers in beta on May 5, 2026, dropped the $50,000 minimum spend, and added CPC bidding to the prior CPM-only model. Cost-per-action bidding and third-party measurement are both promised but unscheduled, and OpenAI has named zero measurement vendors so far. CPCs are running $3 to $5, with click-through rates estimated at 1 to 3 percent against Google Search's roughly 29 percent.

By Notice Me Senpai Editorial

What actually shipped, and what's still a slide

Three months after OpenAI launched advertising inside ChatGPT, the actual product surface available to a U.S. advertiser this week looks like this: self-serve account signup, CPC bidding in the $3 to $5 band, a first-party conversion tracking pixel, and three adtech onboarding partners (Pacvue, Kargo, StackAdapt) handling the buying-platform plumbing, per Adweek's reporting. The $50,000 minimum spend that gated most of the prior beta is gone, which is the actual news for anyone running a sub-$1M annual paid budget.

The promise list is longer. OpenAI's ads and monetization lead Asad Awan told Digiday that CPA bidding is "in motion" but declined to give a date. Third-party measurement is "in the works" with no partners named and no shipping timeline. A conversions API is in development, not live. Clean-room integrations are aspirational. That gap matters because every line item between $5K and $50K monthly that buyers might shift over depends on signals OpenAI has not actually shipped.

The CPC pricing is where the channel gets interesting and immediately complicated. Bids in the $3 to $5 band put ChatGPT roughly in line with Google Search for low-competition verticals, but the auction is a fraction of the size and the conversion signal is unverified. Pacvue's Melissa Burdick framed the opportunity as "the most significant new channel since the rise of retail media." That is the holdco-trading-desk read. The SMB read is different.

"Measurement coming soon" is the entire story

Performance buyers do not allocate budget to a channel they cannot measure against the rest of the stack. Gartner's Nicole Greene was direct about this in Digiday's earlier coverage: "this consistent measurement will help advertisers justify reallocation of spend to OpenAI." Until that infrastructure arrives, the request being made of you is to fund OpenAI's CTR data collection problem with your own media budget.

The CTR math is the part that makes this concrete. ALM Corp's analysis pegs ChatGPT CTRs at 1 to 3 percent against Google Search's roughly 29 percent, with Google referring something like 190 times more outbound traffic in aggregate. Even at the low end of CPC ($3), the absence of CPA bidding means you are paying for clicks at search-ad rates while your conversion variance behaves more like a social platform. That kind of variance was, until OpenAI moved off pure CPM in March, hidden inside the platform's reported average CTR. We covered the CPC switch and the variance issue here.
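The variance claim above is easy to make concrete with arithmetic. Without CPA bidding, the number you actually pay for a conversion is CPC divided by post-click conversion rate, and that conversion rate is exactly the signal OpenAI has not verified. A back-of-envelope sketch using the published $3 to $5 CPC band and a range of illustrative (assumed, not reported) conversion rates:

```python
# Back-of-envelope: cost per acquisition at search-level CPCs when the
# conversion signal is unverified. The CPC band is the published $3-$5
# range; the post-click conversion rates are illustrative assumptions.
from itertools import product

cpcs = [3.0, 5.0]                 # published ChatGPT CPC band, USD
conv_rates = [0.005, 0.02, 0.05]  # assumed post-click conversion rates

for cpc, cvr in product(cpcs, conv_rates):
    cpa = cpc / cvr               # cost per acquisition = CPC / conversion rate
    print(f"CPC ${cpc:.2f} at {cvr:.1%} post-click CVR -> CPA ${cpa:,.2f}")
```

The spread is the point: the same $3 click is a $60 acquisition at a 5 percent conversion rate and a $600 acquisition at half a percent, a 10x swing that lives entirely inside the signal no third party is auditing yet.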

There is also a pricing trajectory worth tracking. Digiday reported that ChatGPT CPMs dropped from roughly $60 at launch to as low as $25 within ten weeks. That is a 58 percent compression in under a quarter. Some of that is auction softening from the pure CPM model, but a chunk of it is buyers refusing to pay premium CPMs without measurement to back the price. CPC bidding gives OpenAI a way to stop the floor from falling further while it figures out the measurement piece.
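The compression figure and the "stop the floor from falling" read both check out on the reported numbers. At a $25 CPM, the effective price of a click depends on CTR, and at the published 1 to 3 percent band that effective price had drifted below the new $3 to $5 CPC floor. Arithmetic only, using the figures in this piece:

```python
# Sanity-check the CPM compression and the per-click price implied by
# the reported CPM floor at the reported CTR band.
launch_cpm, floor_cpm = 60.0, 25.0
compression = (launch_cpm - floor_cpm) / launch_cpm
print(f"CPM compression: {compression:.0%}")          # ~58 percent

for ctr in (0.01, 0.03):                              # reported 1-3% CTR band
    implied_cpc = floor_cpm / (1000 * ctr)            # effective cost per click
    print(f"At {ctr:.0%} CTR, a ${floor_cpm:.0f} CPM prices clicks at ${implied_cpc:.2f}")
```

A $25 CPM at a 1 percent CTR prices clicks at $2.50, and at 3 percent at roughly $0.83, so moving buyers onto a $3 to $5 CPC bid holds per-click revenue above where the softening CPM auction had pushed it.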

The CPA bid is a Performance Max pitch in disguise

The reason CPA bidding matters more than CPC for OpenAI specifically is this: it is the only bid model that lets a buyer compare ChatGPT against Performance Max on a like-for-like basis. PMax buyers do not think in CPCs. They think in target CPA against a goal column. When OpenAI's CPA bidding actually ships, the pitch will be: same goal-based bidding, conversational intent, lower auction density, run it as a PMax variant.

That pitch only works if the conversion signal it bids against is auditable. Right now it is not. There is a first-party pixel, an in-development conversions API, and no third-party measurement firm signing off on what those signals contain. From what I have seen, that combination is the exact setup that produces a six-month "we're seeing strong ROAS" honeymoon followed by an MMM run that finds the channel's true incrementality was close to zero.

I would want to see Innovid, DoubleVerify, IAS, or Nielsen named on the partner list before I treat ChatGPT CPA bidding as comparable to PMax target CPA. Until at least one of those names is public with a ship date, it is a dashboard with a goal field, not a measurement system.

The holdco onboarding tells you what kind of channel this is right now

Dentsu, Omnicom, Publicis, and WPP all have access. That four-holdco onboarding is the standard pre-launch ritual for a channel that needs flagship case studies before SMB acquisition can scale. It also gives OpenAI a way to absorb the "no measurement partner" complaint, because holdco trading desks run their own attribution stacks server-side and can stand up custom integrations without waiting for an Innovid SDK.

For an SMB with no internal measurement infrastructure, that asymmetry matters. The holdcos get a roughly six-month head start on knowing whether ChatGPT clicks actually convert, and at what rate, by vertical. The first SMB cohort is the data set those answers come from. The dropped $50K minimum should be read in that context, not as a generosity move. It is a way to widen the data collection pool while the holdcos work the case-study angle.

The Pacvue, Kargo, StackAdapt onboarding is the same logic. Pacvue handles retail media bid management. Kargo handles publisher-side and CTV. StackAdapt is a programmatic DSP. None of them are measurement vendors. They are plumbing for the same buyers who already have measurement covered.

Test budget you can afford to write off

If you are going to test ChatGPT ads in the next 90 days, run it as if the conversion data does not exist yet, because functionally it does not. A workable test frame:

  • Cap the spend at 5 percent of the monthly budget you would otherwise put into PMax or Advantage+ Shopping. That is the "I cannot defend this in QBR" budget, which is the right size for a channel without measurement.
  • Tag every landing page with your own server-side conversion firing (GA4 server-side, Stape, or your CDP). Do not rely on the OpenAI pixel for the conversion you actually report on. The pixel is fine for OpenAI's optimization. It is not fine for your attribution.
  • Run a parallel geo-holdout for six weeks if your volume supports it. Without one, "ChatGPT ads worked" is a claim you cannot defend in front of a CFO.
  • Set a CPC ceiling around $4. The published $3 to $5 range will drift upward as more advertisers join the auction, and your bid cap is the only governor that travels with the budget.
  • Treat the absence of CPA bidding as a feature, not a bug, for the test phase. CPC at least lets you control unit economics. Manual CPA on a thin signal would be worse.
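The checklist above can be collapsed into a pre-flight check you run before the campaign spends a dollar. The thresholds come straight from the bullets; the config shape is illustrative, not any OpenAI API object.

```python
# Pre-flight check for a ChatGPT ads test, encoding the guardrails above.
# TestPlan is a hypothetical config shape, not a platform object.
from dataclasses import dataclass

@dataclass
class TestPlan:
    pmax_monthly_budget: float   # what you would otherwise put into PMax/ASC
    test_budget: float           # proposed monthly ChatGPT test spend
    cpc_cap: float               # manual bid ceiling, USD
    has_server_side_tagging: bool
    has_geo_holdout: bool

def preflight(p: TestPlan) -> list[str]:
    issues = []
    if p.test_budget > 0.05 * p.pmax_monthly_budget:
        issues.append("test budget exceeds the 5 percent write-off cap")
    if p.cpc_cap > 4.0:
        issues.append("CPC ceiling above the $4 governor")
    if not p.has_server_side_tagging:
        issues.append("no server-side conversion tagging independent of the OpenAI pixel")
    if not p.has_geo_holdout:
        issues.append("no geo-holdout; results will not survive CFO scrutiny")
    return issues

plan = TestPlan(100_000, 4_000, 3.75, True, False)
print(preflight(plan))  # flags only the missing geo-holdout
```

Anything the check flags is a reason to keep the line item labeled research spend rather than media spend.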

What I would actually wait for: the third-party measurement vendor announcement. When OpenAI names a clean-room operator or MMM partner with a ship date, the channel moves from "interesting test" to "defendable line item." Until then, the most honest budget framing is research spend, not media spend, and the QBR slide should say so out loud.