76% of Marketers Use AI Daily and Most Companies Aren't Checking the Output

76% adoption, 44% enforcement. The governance gap in marketing AI widens every quarter.

A ProGEO.ai survey of 112 marketing professionals found 75.9% use GenAI for work every day, with 91.1% holding corporate-paid subscriptions. But only 43.8% of their companies enforce AI usage policies with technical controls, creating a governance gap that widens with each new tool added to the stack. A separate Gartner survey of 1,539 consumers found 50% prefer brands that avoid GenAI in consumer-facing content altogether.

The adoption numbers are less impressive than they look

The ProGEO.ai AI Marketing Maturity (AIMM) Index covers 112 respondents surveyed at RSAC, a cybersecurity conference. Not a massive sample, and the venue probably skews toward people who are more comfortable with AI tools than the average marketing team. I'd be cautious about projecting 76% daily usage across the entire industry.

That said, the directional signal is hard to argue with. 83% of respondents brainstorm content with GenAI at least once a week. 82.1% create content with it. 75% repurpose existing content. And 23.2% are "vibe coding" at least once a week, which is a genuinely interesting data point that didn't get much coverage. Marketers aren't just using AI for copy. They're building internal tools with it, apparently by describing what they want and letting the model figure out the code.

The subscription numbers tell a bigger story, honestly. 91.1% of respondents have a corporate-paid AI subscription. That means organizations aren't just tolerating AI use. They're funding it. They've made the purchase order. They've approved the line item. What most of them have not done is build the compliance infrastructure that should come with it.

77% have a policy. Less than half enforce it.

This is the number from the survey that I keep coming back to: 76.8% of marketing professionals say their company has a GenAI usage policy. Sounds responsible. But only 43.8% say their company enforces that policy with technical controls.

So more than half of the companies that wrote AI policies aren't actually monitoring whether anyone follows them. In a lot of organizations, the "AI policy" is a PDF that lives on an intranet page nobody bookmarks. Marketing teams signed an acknowledgment during onboarding, and that was the last time anyone thought about it.

This isn't a new pattern. GDPR consent policies looked great on paper for years before regulators started asking for evidence of actual enforcement. The gap between "we have a policy" and "we have controls that enforce the policy" is where regulatory risk compounds. It's not a theoretical concern, either. It's the kind of gap that shows up in an FTC investigation as evidence of insufficient oversight.

And that gap is about to get more visible. New York's AI disclosure law, effective June 2026, will require advertisers to disclose the use of AI-generated "synthetic performers" in ads distributed in the state. Penalties start at $1,000 per violation for first offenses and $5,000 for repeats. The fines aren't ruinous for a large brand, but the disclosure requirement itself is the real compliance headache. If your team creates ad creative with AI and can't prove it went through a review process, you're exposed.

Consumers are moving in the opposite direction

While marketers rush deeper into AI, consumers are becoming more skeptical. Gartner's March 2026 survey of 1,539 U.S. consumers found that 50% would prefer to buy from brands that don't use GenAI in consumer-facing messages, advertising, and content.

That statistic is worth sitting with for a second. Half the audience you're trying to reach would rather you didn't use the tool you're building your entire workflow around. And this isn't some abstract preference. The sentiment is backed by deeper trust erosion: 61% of consumers frequently question whether information they use to make decisions is reliable, and 68% frequently wonder whether content they see is real at all.

We recently covered the AI labels study, which found that AI-labeled ads see a 31% lower click rate. That data starts to make more sense in this context. The consumer backlash isn't hypothetical. It's already showing up in performance metrics.

From what I've seen, the brands navigating this well aren't avoiding AI entirely. They're just careful about where they deploy it. AI for internal operations, research, data analysis, audience segmentation? Nobody cares. AI for the words and images a customer actually sees? That's where the trust penalty lives, and it seems to be getting steeper.

The regulatory timeline most teams haven't mapped

The FTC has been signaling pretty aggressively on this front. Operation AI Comply, launched in 2024, has produced more than a dozen enforcement actions targeting inflated or unsubstantiated AI claims. Those were mostly consumer product companies making wild claims about what their AI could do. But advertising-specific guidance is expected in 2026 or 2027.

New York's synthetic performer disclosure law in June is the first state-level regulation directly hitting AI-generated marketing creative. It almost certainly won't be the last. California and Illinois usually follow New York on this kind of thing, and the EU's AI Act already has disclosure requirements baked in.

If your company's AI governance plan is "we told people to use their judgment," you're building on what MarTech described as "control theater." Policies without enforcement create the appearance of oversight without the substance. That's arguably worse than having no policy at all, because it can look like you knew you needed controls and chose not to implement them.

What an actual audit looks like (30 minutes, not 30 days)

I think most marketing teams overcomplicate governance because they treat it like a legal project instead of a workflow check. Here's a stripped-down version you can run this week:

1. Inventory your AI touchpoints. List every place GenAI output reaches a customer or prospect. Ad copy, email subject lines, landing page text, blog posts, social captions, chatbot responses. That's your audit scope. Internal brainstorming docs and research summaries aren't consumer-facing, so leave them out for now.

2. Check whether your policy matches your reality. If your policy says "all AI-generated content requires human review before publishing," ask whether there's a documented review step in the actual workflow. A policy without a process step is a decoration.

3. Flag the enforcement gap. For each consumer-facing AI touchpoint, answer this: does a specific person review this output before it ships? If the answer is "it goes through a general approval queue" or "the person who wrote it also approves it," most teams call that oversight. Regulators probably won't.

4. Document one thing. If you change nothing else, create a log of which content pieces used AI and who reviewed them. When (not if) a regulator or client asks "show me your AI content oversight process," having a log is the difference between "we take this seriously" and "we meant to get around to it."

I'd budget about 30 minutes for the first three steps. The fourth one is ongoing, but the setup takes maybe 10 minutes if you just add a column to whatever content tracker your team already uses.
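
Step 4 is the only one that benefits from a little tooling. If your tracker is a spreadsheet, adding the column really is the whole job. For teams that would rather script it, here's a minimal sketch of the log as an append-only CSV. Everything in it is an assumption on my part (the filename, the field names, the example values); rename them to match whatever your tracker already calls these things.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical log location and schema; adapt to your own tracker.
LOG_PATH = Path("ai_content_log.csv")
FIELDS = ["content_id", "channel", "ai_assisted", "reviewer", "review_date"]

def log_review(content_id: str, channel: str, ai_assisted: bool, reviewer: str) -> None:
    """Append one row per published piece, signed off at publish time."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # write the header once, on first use
        writer.writerow({
            "content_id": content_id,
            "channel": channel,
            "ai_assisted": "yes" if ai_assisted else "no",
            "reviewer": reviewer,
            "review_date": date.today().isoformat(),
        })

# Example: one AI-assisted ad variation, reviewed before it shipped.
log_review("ad-2026-041", "meta_ads", ai_assisted=True, reviewer="j.alvarez")
```

The format matters less than the habit: one row per piece, filled in at publish time, not reconstructed later when someone asks.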

The gap between funded and governed is where liability lives

The uncomfortable part of the ProGEO data is that these companies already made the buy-in decision. 91% are paying for AI tools. The budget conversation happened. Someone approved the spend. The question nobody seems to have followed up with is "and how do we make sure the output doesn't create legal exposure?"

A few weeks ago, we wrote about AI making marketing teams spend more, not less. The spending keeps going up. The governance infrastructure to match it keeps being next quarter's project.

From a regulatory perspective, the worst version of this is a team generating hundreds of ad variations with AI, running them across Meta and Google, and having no record of which ones were AI-generated or who reviewed them. If one of those ads makes an unsubstantiated claim or uses a synthetic likeness, the company's defense would need to be stronger than "we have a policy." It would need to show enforcement. And right now, for the roughly 56% of companies without technical controls, that evidence doesn't exist.

The teams that will handle this well probably aren't going to build some elaborate AI governance committee with quarterly reviews and executive dashboards. They're going to add three fields to their existing content tracker: "AI-assisted (yes/no)," "reviewer," and "review date." That's probably the whole difference between governed and ungoverned when someone finally asks.
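
And when someone does ask, the question reduces to a single filter over those three fields: which AI-assisted pieces have no named reviewer? Here's a sketch of that check, reusing the hypothetical column names from the log sketch above; they're assumptions, not anything prescribed by the New York law or the FTC.

```python
import csv

def enforcement_gaps(tracker_csv: str) -> list[dict]:
    """Return AI-assisted rows that never got a named reviewer."""
    with open(tracker_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    return [
        row for row in rows
        if (row.get("ai_assisted") or "").strip().lower() == "yes"
        and not (row.get("reviewer") or "").strip()
    ]

# "content_tracker.csv" is a stand-in for your team's tracker export.
gaps = enforcement_gaps("content_tracker.csv")
print(f"{len(gaps)} AI-assisted pieces shipped without a documented review")
```

If that prints zero, you're governed. If it prints three hundred, at least you found out before a regulator did.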