AI Labels Cut Ad Clicks by 31% and No Two Platforms Agree on What Counts
Five platforms, three regulators, and a 31% click penalty. The AI labeling rules for advertising are arriving from every direction.

Nobody in advertising wants to talk about this directly, but telling consumers an ad was made with AI makes that ad perform significantly worse. And the rules about when you have to tell them are arriving faster than most teams realize, from at least three different directions, with almost no coordination between them.

A joint study from NYU Stern and Emory University put a number on it. AI-generated ads actually outperformed human-created ones by about 19% in click-through rates. But the moment researchers disclosed that the ad was AI-made, performance dropped 31.5%. That is not a small variance. That is a third of your clicks disappearing because of a label.

The tension is obvious. AI makes better-performing creative. Disclosure makes that creative perform worse. And regulators are about to require disclosure anyway.

The performance hit is real, and it is worse than the headlines suggest

The NYU Stern study, led by Professor Anindya Ghose, tested this across field experiments, not just surveys. In real campaigns, disclosure cut click-through rates by 1.17 percentage points relative to human-made ads. That is a measurable budget impact for anyone running significant paid spend.

And the perception gap between the industry and its audience is, honestly, a little alarming. According to IAB research published alongside their new disclosure framework, 82% of advertising executives believe younger consumers feel positively about AI-generated ads. The actual number among Gen Z and Millennials is 45%.

That is a 37-point gap, and it grew from 32 points in 2024.

It gets more specific than that. Twenty percent of consumers describe brands using AI in ads as "manipulative." Only 10% of executives thought consumers would use that word. Sixteen percent of consumers said "unethical." Executives guessed 7%.

I think most teams are still operating on the assumption that consumers either do not notice or do not care about AI in creative. The data suggests they notice more than we think, and the negative associations are stronger than most brand teams have accounted for.

Five platforms wrote five different rulebooks

This is where it gets genuinely messy for anyone running campaigns across multiple channels. Running the same AI-generated creative across these platforms right now is a bit like filing one expense report under five different accounting systems. Same receipt, five different rules about what goes where.

Meta uses the C2PA standard to automatically detect and label content created with tools like Adobe Firefly, DALL-E 3, and Microsoft Designer. If the metadata is present, the label goes on whether you wanted it or not. For commercial ads, the labeling is mostly automatic. For political ads, you have to actively disclose. Two different systems under one roof.

TikTok requires visible labeling on all AI-generated visuals and audio that depict realistic people or scenes. They also participate in C2PA, so automatic detection can flag content even when creators skip the self-disclosure toggle. But text written by AI (scripts, descriptions, hooks) is explicitly exempt. So an AI-written ad with a human face does not need a label, but a human-written ad with an AI-generated face does.

YouTube requires creators to flag AI-generated content manually during upload. Repeatedly fail to do it and you are looking at policy strikes or demonetization. But their definition of what needs flagging (synthetic voices, fabricated events, digitally manipulated visuals) does not perfectly overlap with Meta's or TikTok's.

The IAB framework, published in January 2026, was supposed to simplify all of this. It introduced a useful question: "Does AI involvement meaningfully change what a consumer thinks they are seeing?" If yes, disclose. The problem is that "meaningfully" is doing a lot of work in that sentence. The framework says C2PA metadata should be attached to all ads for platform verification, but compliance is voluntary. It specifically exempts background alterations, audio enhancements, and post-production tweaks it calls "standard production techniques."

In practice, that means using AI to swap out a background does not require disclosure. Using AI to generate the person standing in front of that background does. The line between those two things is where all the legal risk lives, and it is the part nobody has cleanly resolved.

Two deadlines that make the voluntary approach temporary

If the platform rules were the whole story, teams could probably navigate this with careful metadata management and some creative workflow adjustments. But two regulatory deadlines are approaching that turn optional disclosure into legal requirements.

The EU AI Act transparency provisions take effect August 2, 2026. Article 50 requires that AI-generated content be identifiable as such, with fines for non-compliance reaching up to €15 million or 3% of global turnover, whichever is higher. The European Commission published a draft Code of Practice in December 2025 to flesh out the practical details, with a finalized version expected by June. There is a potentially significant exemption: if a human reviews and takes editorial responsibility for AI-generated content before publication, no label is required. That exemption is going to get tested in ways nobody has fully mapped out yet.

Closer to home, New York's synthetic performer disclosure law takes effect June 9, 2026. It requires advertisers to conspicuously disclose when AI-generated "synthetic performers" appear in ads. The fine structure is relatively mild ($1,000 first offense, $5,000 for repeats), but the precedent is not. New York tends to set the template that other states follow. The law does not specify the form of disclosure (language, placement, size), which means the first few enforcement actions will effectively write the compliance standard for everyone else.

I would expect at least two more states to introduce similar legislation before the end of 2026, and at least one major platform to make AI labels mandatory for all paid creative by Q1 2027.

Meanwhile, 89% of advertisers using generative AI disclose it only "sometimes." Fewer than half disclose every time. Which means a lot of campaigns that are technically fine today will not be in four months.

The compliance audit most creative teams are skipping

If I were running a multi-channel paid program right now, here is roughly what I would be mapping out.

First, inventory every AI touchpoint in your creative pipeline. Not just the obvious image generation, but AI-assisted copy, AI-enhanced audio, AI color correction. Each platform draws the disclosure line in a slightly different place. You need to know which tools your team is using and which ones embed C2PA metadata automatically, because Meta in particular will flag content whether you intended to disclose or not.
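If you want a rough first pass at that inventory, a short script can flag which exported assets appear to carry a C2PA manifest. The sketch below is a heuristic, not a validator: it scans raw bytes for the JUMBF box type and the "c2pa" label that manifest stores embed, the asset folder path is a hypothetical placeholder, and the official C2PA tooling is what you would use for actual verification.

```python
# Heuristic C2PA inventory pass: flags assets that look like they carry a
# manifest. Not a validator -- it only scans for the "jumb" box type and
# "c2pa" label strings that C2PA manifest stores embed in their raw bytes.
from pathlib import Path

ASSET_DIR = Path("creative/exports")  # hypothetical export folder

def looks_like_c2pa(path: Path, scan_bytes: int = 4 * 1024 * 1024) -> bool:
    """Crudely check the first few MB of a file for C2PA/JUMBF markers."""
    with path.open("rb") as f:
        head = f.read(scan_bytes)
    return b"jumb" in head and b"c2pa" in head

for asset in sorted(ASSET_DIR.rglob("*")):
    if asset.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp", ".mp4"}:
        status = "C2PA marker found" if looks_like_c2pa(asset) else "no marker"
        print(f"{asset}: {status}")
```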

Second, check your asset export workflows. Some teams are already stripping C2PA metadata using ExifTool before uploading to platforms where they do not want automatic labeling. Whether that is smart compliance management or ethically questionable depends entirely on whether the content genuinely crosses the disclosure threshold. It is a judgment call, and one that probably needs legal sign-off rather than a junior media buyer making it at 4pm on a Friday.
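For reference, the stripping step itself is a thin wrapper around a real ExifTool invocation. `exiftool -all=` removes writable metadata and leaves a `_original` backup of the file; whether that fully removes a C2PA manifest from your particular container format is an assumption worth verifying on a sample file before anyone relies on it.

```python
# Hedged sketch of the metadata-stripping step described above. The
# "exiftool -all=" command is real ExifTool behavior (strip writable
# metadata, keep a *_original backup); whether it removes every trace of a
# C2PA manifest from a given container is an assumption -- verify on a
# sample, and run this only on assets legal has signed off on.
import subprocess
from pathlib import Path

def strip_metadata(asset: Path) -> None:
    """Strip writable metadata in place; ExifTool keeps a backup copy."""
    subprocess.run(["exiftool", "-all=", str(asset)], check=True)
```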

Third, build a decision tree for your creative team that maps each platform's rules to your specific use cases. The IAB framework's question ("does AI involvement meaningfully change what the consumer thinks they are seeing?") is genuinely useful as a starting point. From there, layer on the platform-specific requirements and the regulatory deadlines.
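Here is one way that decision tree could start, built strictly from the platform rules described above. The field names and return strings are illustrative assumptions, and real policies carry edge cases this sketch does not capture, so treat its output as triage for human review rather than a compliance ruling.

```python
# First-pass disclosure triage, encoding only the platform rules described
# in this article. Field names are illustrative assumptions; route anything
# ambiguous to legal review.
from dataclasses import dataclass

@dataclass
class Creative:
    ai_visuals: bool                  # AI-generated or AI-manipulated imagery
    ai_audio: bool                    # synthetic voice or AI-generated audio
    ai_text_only: bool                # AI involvement limited to copy/scripts
    realistic_people_or_scenes: bool  # depicts realistic people or scenes
    has_c2pa_metadata: bool           # tool embedded a C2PA manifest
    background_swap_only: bool        # "standard production technique" (IAB)

def disclosure_triage(c: Creative, platform: str) -> str:
    # IAB carve-out: background alterations and post-production tweaks are
    # exempt as "standard production techniques."
    if c.background_swap_only:
        return "no label required (IAB standard-production exemption)"
    if platform == "meta":
        # Meta auto-labels C2PA-tagged content whether you opted in or not;
        # political ads require active disclosure regardless.
        if c.has_c2pa_metadata:
            return "label applied automatically via C2PA detection"
        return "disclose actively if political; review if commercial"
    if platform == "tiktok":
        # AI-written text is explicitly exempt; realistic AI visuals and
        # audio are not.
        if c.ai_text_only:
            return "no label required (text exemption)"
        if (c.ai_visuals or c.ai_audio) and c.realistic_people_or_scenes:
            return "visible label required"
    if platform == "youtube":
        # Creators must flag synthetic voices, fabricated events, and
        # manipulated visuals at upload; repeat failures risk strikes.
        if c.ai_visuals or c.ai_audio:
            return "flag manually at upload"
    return "no label required under current rules -- document the rationale"
```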

This connects to a broader shift happening across AI-powered advertising right now. OpenAI recently launched its own ads manager with $60 CPMs, and as the tools for AI-generated creative multiply, the compliance surface area keeps expanding.

Four months to build the paper trail that actually matters

The finalized EU Code of Practice, the first New York enforcement actions, and whatever platform policy updates drop between now and August are going to set the actual compliance standard. Not the current patchwork of voluntary frameworks and inconsistent platform rules.

From what I have seen across the industry, the teams that tend to come out of regulatory transitions in the best shape are not the ones with the most aggressive legal departments. They are usually the ones who built documentation habits early enough that when the rules crystallized, they already had the evidence trail: an audit log of which AI tools touched which assets, whether each export kept or stripped its metadata, and a recorded rationale for every disclosure decision.
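A minimal version of that evidence trail is an append-only JSONL log with one record per disclosure decision. The schema below is an illustrative assumption rather than any regulator's required format; the point is simply that the three things named above (tools, metadata handling, rationale) get captured at decision time.

```python
# Append-only JSONL audit log: one record per asset disclosure decision.
# The schema is an illustrative assumption, not a mandated format.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("compliance/ai_audit_log.jsonl")  # hypothetical location

def log_disclosure_decision(asset_id: str, ai_tools: list[str],
                            metadata_kept: bool, disclosed: bool,
                            rationale: str) -> None:
    """Append one timestamped audit record for a creative asset."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "asset_id": asset_id,
        "ai_tools": ai_tools,                 # e.g. ["Adobe Firefly"]
        "c2pa_metadata_kept": metadata_kept,  # stripped vs. preserved export
        "disclosed": disclosed,
        "rationale": rationale,               # the recorded judgment call
    }
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example call (hypothetical asset and rationale):
# log_disclosure_decision("q3_video_007", ["Adobe Firefly"],
#                         metadata_kept=True, disclosed=True,
#                         rationale="AI-generated presenter on camera")
```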

Nobody got into advertising to manage metadata documentation. But four months from now, the teams with clean audit trails are going to look significantly smarter than the ones explaining to legal why they cannot answer a basic question about which assets used AI.

By Notice Me Senpai Editorial