OpenAI Switched ChatGPT Ads to CPC Because the Average CTR Hid the Variance

OpenAI shifted ChatGPT Ads to CPC after CPMs collapsed from $60 to $25, signaling that the average CTR was hiding wide variance by query intent.

OpenAI launched ChatGPT ads on February 9, 2026 at a $60 CPM with a $200,000 minimum spend. By late April the CPM had collapsed to roughly $25, and OpenAI quietly switched the platform to a $3 to $5 cost-per-click model with a $50,000 minimum. The CPC switch is the tell. Average click-through rate looks bad in aggregate, and the variance by query intent is wide enough to make a flat impression price untenable.

The CTR number nobody at OpenAI wants to publish

OpenAI has not released an official platform-wide CTR. What we have instead is structural evidence and third-party tracking. Adthena reported tracking 50,000+ daily ad placements across 600+ advertisers since the February launch, with 800 million weekly active users on the receiving end. The independent estimates that have circulated put well-targeted ChatGPT campaigns in the 0.5% to 1.0% CTR range, which is broadly comparable to a mediocre Google Display campaign and roughly an order of magnitude below a strong Google Search ad on a high-intent query.

That number, on its own, would not justify a $60 CPM. A $60 CPM with a 0.5% CTR is a $12 effective CPC. A $60 CPM with a 1.0% CTR is $6. Both are worse than what most performance buyers would pay in Google Search for an equivalent commercial query. So advertisers did the math, and the CPM kept softening. The Next Web reported the rate fell from $60 to as low as $25 within ten weeks, which is the kind of price erosion that only happens when buyers have the upper hand and OpenAI's measurement story is too thin to defend the floor.
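The arithmetic above generalizes: under a flat CPM, the effective price per click is CPM divided by clicks per thousand impressions. A minimal sketch (the CTR figures are the third-party estimates cited above, not OpenAI disclosures):

```python
def effective_cpc(cpm: float, ctr: float) -> float:
    """Effective cost per click implied by a flat CPM and a click-through rate.

    cpm: price per 1,000 impressions, in dollars.
    ctr: click-through rate as a fraction (0.005 = 0.5%).
    """
    clicks_per_1000_impressions = ctr * 1000
    return cpm / clicks_per_1000_impressions

# Launch-era $60 CPM at the estimated CTR range:
print(effective_cpc(60, 0.005))   # 12.0 -> $12 per click at 0.5% CTR
print(effective_cpc(60, 0.010))   # 6.0  -> $6 per click at 1.0% CTR
# The softened $25 CPM already lands near the eventual $3-$5 CPC band:
print(effective_cpc(25, 0.0075))  # ~3.33 per click at 0.75% CTR
```

Seen this way, the repricing from a $25 CPM to a $3 to $5 CPC was less a discount than a change of units.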

Why the CPC switch is OpenAI admitting CTR is segmented

If your average CTR is the only number you have, you can defend a flat CPM. Once you have to defend a CPC, you are implicitly admitting that some clicks are worth more than others, and the platform is the only party that knows which ones. That is roughly where Google Search was in 2003. It is a long way from where OpenAI was selling impressions in February.

Digiday's coverage of the CPC switch quotes Adthena's analysis that Meta's CPCs run three to five times cheaper than Google Search's, and that the gap is almost entirely explained by user intent rather than inventory quality. ChatGPT sits closer to Google Search on the intent dimension, because users are typing full-sentence questions with explicit purchase context. The flip side is that OpenAI also has to absorb the long tail of conversational queries with no commercial value at all, like homework help, recipe ideas, and emotional venting.

A platform that sells $60 impressions across that mix is asking advertisers to subsidize the long tail. A platform that sells $3 to $5 clicks is letting buyers price the long tail at zero. From what I have seen in the early test buys friends ran in March, the second model is the only one most performance teams will tolerate.

Where the variance actually lives

The interesting question is not what the average is. It is how wide the spread is between commercial and navigational queries on the same surface. We do not have a clean public dataset on that yet, but a few things point in the same direction.

First, OpenAI confirmed at launch that ads can fire on the very first response to a high-intent query. ALM Corp's tracking caught a sponsored result on a query as casual as "what's the best way to book a weekend away," meaning OpenAI is treating commercial intent as immediate ad inventory rather than waiting for deeper conversation context. That is a decision the system would not make if commercial CTR were not materially higher than the average.

Second, projected industry conversion rates from a Lapis playbook citing First Page Sage's March 2026 data show higher education converting at 6.0% on ChatGPT versus 1.7% on Google, e-commerce at 2.4% versus 1.3%, and B2B SaaS roughly flat at 1.1% versus 1.0%. Conversion rate is a downstream proxy, not CTR, but the spread suggests the same intent variance shows up upstream. Categories where users are research-heavy and click-cautious benefit. Categories where the query is already 80% navigational do not.

Third, the fanout pattern matters. Our prior coverage of Aiso's 90-prompt audit found that ChatGPT fans out a single commercial query into 25x more downstream search behavior than a navigational one. If ad slots are placed alongside that fanout, the impression-weighted CTR on commercial slices is going to be a multiple of the average.
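The mixture effect is easy to see with toy numbers. The per-slice CTRs and the traffic split below are illustrative assumptions, not reported figures; the point is only that an impression-weighted average can sit near the estimated 0.5% while the commercial slice runs an order of magnitude hotter:

```python
def blended_ctr(slices):
    """Impression-weighted average CTR over intent slices.

    slices: list of (impressions, ctr) tuples, one per intent slice.
    """
    total_impressions = sum(imps for imps, _ in slices)
    total_clicks = sum(imps * ctr for imps, ctr in slices)
    return total_clicks / total_impressions

# Hypothetical split: 20% commercial queries at 2.0% CTR,
# 80% navigational/long-tail queries at 0.2% CTR.
avg = blended_ctr([(200, 0.020), (800, 0.002)])
print(f"{avg:.3%}")  # 0.560% -- the average hides a 10x spread between slices
```

A flat CPM prices every impression at that blended average; a CPC lets the auction discover the spread.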

How to bid against a platform with no segmentation tools

The uncomfortable part is that OpenAI is not yet giving advertisers query-level breakdowns. Reporting is mostly impressions and clicks, and the system is the only one that knows which conversation triggered which placement. So performance teams cannot bid up commercial inventory and bid down everything else, which is exactly what the CPC model is supposed to enable elsewhere.

What you can actually do this month, before more transparent reporting ships:

First, segment your campaigns by product category instead of by funnel stage. ChatGPT's matching layer is closer to keyword matching on natural language than to audience targeting, so the cleanest signal you can give the auction is a tightly themed product set. Adthena's creative analysis showed top performers running brand-first headlines in a "Brand: Benefit" format averaging 30 characters, with body copy around 19 words. Match that pattern across two or three creative variants per ad group and let the system learn the click pattern within a single intent slice.
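A trivial pre-flight check for the creative pattern Adthena describes can keep variants inside the observed envelope. The thresholds below are soft targets derived from their reported averages, not platform rules, and the sample creative is invented:

```python
def check_creative(headline: str, body: str) -> list[str]:
    """Flag deviations from the 'Brand: Benefit' headline pattern and the
    ~30-character headline / ~19-word body averages Adthena reported."""
    warnings = []
    if ": " not in headline:
        warnings.append("headline missing 'Brand: Benefit' separator")
    if len(headline) > 40:
        warnings.append(f"headline is {len(headline)} chars, target ~30")
    words = len(body.split())
    if not 12 <= words <= 26:
        warnings.append(f"body is {words} words, target ~19")
    return warnings

# A made-up creative that fits the pattern produces no warnings:
print(check_creative(
    "Acme: Ship invoices in one click",
    "Acme automates invoicing end to end so your finance "
    "team closes the books days faster every month."))
```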

Second, treat the $50,000 minimum as a learning budget, not a campaign budget. With a $3 to $5 CPC you are buying somewhere between 10,000 and roughly 16,700 clicks. That is enough to see whether your post-click conversion rate is anywhere near the projected 2.4% to 6.0% range for your category, but not enough to justify a multi-quarter commitment. I would run two months, pull the conversion data into an MMM-style decay model, and only renew if the incrementality looks real.
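The learning-budget math above can be sketched directly. The CPC band is the reported $3 to $5, and the conversion band is the projected 2.4% to 6.0% range cited earlier; everything else follows from division:

```python
def learning_budget(budget, cpc_low, cpc_high, cvr_low, cvr_high):
    """Click and conversion ranges implied by a fixed budget,
    a CPC band, and a post-click conversion-rate band."""
    clicks_low = budget / cpc_high   # expensive clicks -> fewer of them
    clicks_high = budget / cpc_low
    conversions_low = clicks_low * cvr_low
    conversions_high = clicks_high * cvr_high
    return (clicks_low, clicks_high), (conversions_low, conversions_high)

clicks, conversions = learning_budget(50_000, 3.0, 5.0, 0.024, 0.060)
print(clicks)       # (10000.0, ~16666.7) clicks
print(conversions)  # (240.0, ~1000.0) conversions
```

Even the optimistic end of that range is a sample for reading incrementality, not a volume play, which is the argument for treating the minimum as tuition.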

Third, do not build creative against ChatGPT's CTR target. Build it against your own CPA target. The platform has not given anyone a stable CTR benchmark to optimize against, and the average is probably going to keep moving as OpenAI tunes the auction. The number that survives the next pricing change is your post-click CPA, and that is the only one I would let a creative test be judged on.
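Judging creative on CPA rather than CTR is just one division, CPA = CPC / post-click conversion rate, and it stays meaningful no matter how OpenAI retunes the auction. A sketch using the projected e-commerce conversion figure cited earlier (the $4 CPC is an assumed midpoint of the reported band):

```python
def cpa(cpc: float, conversion_rate: float) -> float:
    """Cost per acquisition implied by cost per click and
    post-click conversion rate (as a fraction)."""
    return cpc / conversion_rate

# A $4 CPC at the projected 2.4% e-commerce conversion rate:
print(round(cpa(4.0, 0.024), 2))  # 166.67 dollars per acquisition
```

If that number clears your margin, the creative is working, whatever the platform's CTR benchmark does next quarter.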

Why the next pricing change is already coming

OpenAI is targeting $2.5 billion in ad revenue this year, $11 billion in 2027, and roughly $100 billion by 2030. The early pilot was already running at over $100 million annualized within two months. To get from there to $2.5 billion, OpenAI needs either a much higher fill rate, much higher CPCs on the commercial slice, or both. The cleanest path is to give advertisers query-level transparency so they will pay more for the inventory they actually want.

That is the next thing to watch. When OpenAI rolls out commercial-vs-navigational reporting, the average CTR number will become irrelevant overnight, and the CPC ceiling on the commercial slice will reprice itself upward. From what I have seen in earlier auction transitions on Meta and Google, the buyers who are already segmenting their creative by intent will benefit immediately. The ones still optimizing against an unsegmented average will be paying for the long tail by accident.

I do not think the platform-wide CTR number is the one that decides whether ChatGPT Ads becomes a real performance channel. It is whether OpenAI gives buyers the segmentation they need to bid honestly. The CPC switch was step one. Query-level reporting is step two, and it is the one I would build a 2026 test plan around.

Notice Me Senpai Editorial