ChatGPT Ads Reward Clarity Because the Format Is Structurally Incapable of Creativity

When your ad format gives you fewer characters than a Google search ad, calling it clarity over creativity is generous.

Adthena published an analysis of 40,000+ daily ChatGPT ad placements this week and arrived at a conclusion that has been getting a lot of airtime: functional, clear ad copy outperforms creative storytelling inside ChatGPT. The framing across most of the coverage has been that this is some kind of philosophical shift. Clarity over creativity. A new paradigm for conversational advertising. (I am paraphrasing, but only slightly.)

My read is different. This is not a creative philosophy. It is a format constraint wearing a research paper costume.

You cannot be creative in 30 characters

The Adthena data is genuinely useful if you look past the headline takeaway. The average ChatGPT ad headline runs about 30 characters. Five words. The body copy averages 116 characters, which is roughly 19 words. If you have ever tried to write a compelling narrative in the space of a tweet reply, you know where this goes.

Thirty characters is not a canvas for storytelling. It is a label. And the data confirms this: the dominant format across those 40,000+ placements is "Brand: Benefit." Nike: Free Shipping. Grammarly: Write Better. That kind of thing. Dollar symbols and specific numbers outperform vague promises. "Free" is the single most common conversion lever. Explicit CTAs like "Shop now" and "Compare" beat generic ones like "Learn more."

None of this should surprise anyone who ran Google search ads in 2008. These are structurally search ads wearing a conversational costume. The character limits are nearly identical to the old expanded text ads. The "creative insights" are the same insights we have had for two decades of paid search: be specific, lead with the value prop, include a number if you have one, and tell people what to do next.

Calling this "clarity over creativity" implies there was a creative option being rejected. There was not. The format does not allow it.

The tone data is more interesting than the copy data

Where the Adthena analysis gets genuinely useful is in the tonal findings. The best-performing ChatGPT ads skew "calm, confident, measured." Minimal exclamation points. Low urgency. This makes sense if you think about where these ads appear: inside an AI assistant that someone is actively using to solve a problem or answer a question.

Research on behavioral psychology in ChatGPT ad environments points to something called goal shielding and interruption aversion. When someone is mid-task in a conversational AI, they are cognitively defensive. A loud, urgent ad does not just fail to convert. It actively irritates. The ads that work feel like a helpful suggestion from a knowledgeable friend, not a billboard on a highway.

That is actually worth paying attention to. Not because it is new (contextual advertising has always rewarded tonal matching), but because the punishment for getting it wrong seems steeper here. On a search results page, an irrelevant ad gets ignored. In a conversation, it gets resented. And to be fair, we do not have great data yet on what "resented" looks like in terms of brand lift metrics, mostly because OpenAI cannot really tell advertisers if their money is working. But I will come back to that.

The CPM math nobody wants to do out loud

Here is where the conversation about ChatGPT ads gets uncomfortable. The format constraints are one thing. The economics are another, and they are considerably harder to explain away.

ChatGPT ads run at roughly $60 CPM. That is about 3x Meta's average and meaningfully higher than most programmatic display. One enterprise client in the Adthena dataset showed a 0.91% CTR. For context, Google Search ads average around 6.4% CTR across verticals.

Run the math on that: $60 per thousand impressions is $0.06 per impression; divide by a 0.91% click rate and you get an effective CPC of about $6.59. For some verticals, particularly high-consideration B2B or financial services, that might be defensible. For e-commerce, DTC, lead gen, and honestly most categories, it is worse than Google Search. And Google Search comes with 20 years of tooling, real-time bidding, automated rules, and conversion tracking that actually works.
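If you want to run that sanity check yourself, it is one line of arithmetic. A minimal sketch, using the figures cited above ($60 CPM, 0.91% CTR; both from the Adthena coverage, not from any official OpenAI rate card):

```python
def effective_cpc(cpm: float, ctr: float) -> float:
    """Effective cost per click from a CPM buy.

    cpm: cost per 1,000 impressions, in dollars.
    ctr: click-through rate as a fraction (0.91% -> 0.0091).
    """
    cost_per_impression = cpm / 1000
    return cost_per_impression / ctr

# The ChatGPT numbers cited in the Adthena coverage:
print(round(effective_cpc(60, 0.0091), 2))  # ~6.59
```

Plug in your own channel's CPM and CTR to see where a $60 CPM placement would have to land on click-through before it beats what you already buy.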

OpenAI is charging premium CPMs for what is functionally a beta product with pre-2010 Google tooling. Reporting comes in weekly CSVs. There are no automated buying tools. No real-time optimization levers. One major retailer could not even see its own campaign data. The $200K minimum spend requirement for the beta means only large advertisers are in, and even among them, utilization is questionable. One enterprise advertiser reportedly used just 3% of a $250K budget.

I keep thinking about that 3% number. That is not cautious testing. That is someone who signed up, looked at what they got, and quietly stopped spending.

$100M in revenue does not mean $100M in value delivered

OpenAI hit $100 million in annualized ad revenue within six weeks. 600+ advertisers. 85% of users are eligible to see ads, though fewer than 20% are actually shown them on any given day. Those are impressive supply-side numbers.

But supply-side success and demand-side ROI are different conversations. What I have not seen, and what the Adthena analysis does not really address, is evidence that advertisers are getting returns that justify a $60 CPM in a measurement-blind environment. The revenue number tells you OpenAI can sell ads. It does not tell you that buying them is a good idea.

On paper, "clarity over creativity" sounds like a useful insight for advertisers testing ChatGPT placements. And if you are already in the beta, sure, write your headlines like search ads. Be functional. Include a price point. Use an explicit CTA. That is fine advice as far as it goes.

But framing it as a discovery about what works in conversational AI advertising overstates it. What we are really seeing is that a channel with extreme character limits, no creative tooling, and limited targeting rewards the same things that early Google search ads rewarded: intent-matching and clarity. The people running these ads are not making a creative choice. They are responding to constraints, which is exactly what competent advertisers have done on every new platform since the early 2000s.

If you are thinking about testing, here is what I would actually do

I think there is a narrow case for testing ChatGPT ads right now, but it is narrower than most of the coverage suggests. If your average order value or customer lifetime value is high enough to absorb a $6+ effective CPC with no attribution clarity, and you have the budget to tolerate the $200K minimum without needing to prove ROI in the first quarter, it might be worth a small allocation. Think of it as buying early data, similar to the first wave of programmatic or the early days of TikTok ads.
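One way to pressure-test whether your economics clear that bar is a break-even calculation on order value. This is a rough sketch, and the conversion rate and contribution margin below are hypothetical placeholders (neither appears in the Adthena data); only the ~$6.59 effective CPC comes from the figures discussed above:

```python
def break_even_aov(cpc: float, conv_rate: float, margin: float) -> float:
    """Minimum average order value at which a click pays for itself.

    cpc: effective cost per click, in dollars.
    conv_rate: post-click conversion rate as a fraction (hypothetical).
    margin: contribution margin as a fraction of order value (hypothetical).
    """
    # Profit per click = aov * conv_rate * margin; set equal to cpc and solve.
    return cpc / (conv_rate * margin)

# Illustrative inputs: ~$6.59 CPC, 2% conversion, 30% margin.
print(round(break_even_aov(6.59, 0.02, 0.30), 2))
```

With those placeholder inputs the break-even order value lands north of $1,000, which is why the narrow case really is narrow: at typical DTC order values and margins, the math does not close.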

But if you are working with the budgets most paid media managers actually have, the honest assessment is probably this: wait for the tooling. Wait for automated bidding. Wait for conversion tracking that is not a spreadsheet someone emails you once a week. The "clarity over creativity" finding will still be true in six months, because the format constraints are not going anywhere. And the CPM will probably come down as OpenAI moves past the early-adopter premium phase and needs to prove it can deliver performance, not just impressions.

OpenAI selling ads inside the answer engine was always going to be a strange product. The interesting question is not whether clarity beats creativity in this format. It is whether the format itself can deliver enough value to justify the price. Right now, I would say the evidence is thin, the tooling is primitive, and the advertisers who are spending big seem to be doing so because they can afford to learn, not because they have found something that works.

The Adthena data is useful. The conclusion being drawn from it is a category error. You did not discover that conversational AI advertising rewards clarity. You discovered that 30-character text ads do. We already knew that.