Brussels Couldn't Agree to Delay the AI Act and Marketers Got 92 Days
EU lawmakers spent 12 hours in Brussels on April 29 trying to delay the AI Act's August 2, 2026 enforcement and walked out without a deal. The original deadline still holds. Article 50 transparency obligations apply across every consumer-facing AI system, and the AI Office can hit general-purpose model providers with fines up to 3% of global turnover or €15 million, whichever is higher. Marketers have roughly 92 days.
What actually broke in Brussels
The trilogue between the Council, Parliament, and Commission ran the night of April 28 into April 29, 2026 under the Cyprus presidency. The package on the table was the Digital Omnibus on AI, which would have moved the high-risk Annex III deadline to December 2, 2027 and pushed the Annex I deadline (AI embedded in machinery, medical devices, lifts) to August 2, 2028. Parliament's committees voted 101-9-8 in favor of those new dates back in March.
It didn't pass. According to The Next Web and IAPP's read on the breakdown, the parties couldn't converge on whether AI in already-regulated products (toys, medical devices, machinery) should stay under combined AI Act and sectoral assessment, or move primarily to sectoral handling. That is a relatively small structural argument for a 12-hour standoff. Consumer groups and medical associations also opposed the exemption.
Parliamentary rapporteurs Arba Kokalari (EPP, Sweden) and Michael McNamara (Renew, Ireland) had already rejected the Commission's earlier mechanism, which would have triggered the new dates by a future Commission decision rather than fixing them on the calendar, PPC Land reported. So the path forward narrowed to a hard reschedule. A follow-up trilogue is set for around May 13. If that one stalls and there is no deal by early June, the original deadline doesn't get touched at all.
What hits on August 2 if nothing changes
Three things switch on under the original Regulation (EU) 2024/1689 timeline.
Article 50 transparency obligations. Providers and deployers of AI systems that interact with people have to disclose that the user is dealing with AI. Synthetic content (image, video, audio, text) has to be marked, in machine-readable form, as artificially generated. Deepfakes need labels. Emotion recognition and biometric categorization need notification. Chatbots have to identify themselves. Generated text published to inform the public on matters of public interest has to be marked as AI-generated. The full Article 50 text spells out the deployer-versus-provider split.
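To make the chatbot self-identification piece concrete, here's a minimal sketch of what the disclosure could look like in a deployer's own code. Everything here (the `ChatTurn` structure, the disclosure wording, the `ai_generated` flag) is my illustration, not language from the regulation; the obligation is that users know they're talking to AI, and the UX is up to you.

```python
from dataclasses import dataclass

# Illustrative disclosure text; the regulation mandates the disclosure,
# not this particular wording.
AI_DISCLOSURE = "You are chatting with an AI assistant."

@dataclass
class ChatTurn:
    text: str
    ai_generated: bool  # machine-readable flag carried alongside the content

def open_session(greeting: str) -> list[ChatTurn]:
    """Start a chat session with the disclosure shown before any AI output."""
    return [
        ChatTurn(text=AI_DISCLOSURE, ai_generated=True),
        ChatTurn(text=greeting, ai_generated=True),
    ]

session = open_session("Hi! How can I help with your order?")
```

The point of the flag is that the "machine-readable" part travels with the content, not just the UI copy: anything downstream that republishes the transcript can see which turns were generated.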
This is the part the Omnibus was not trying to delay. Even if Brussels signs a delay package on May 13, Article 50 still hits in August. That distinction has been buried in most coverage, and from what I've seen, most agency planning decks are treating Article 50 as if it's part of the high-risk delay debate. It isn't.
AI Office enforcement on general-purpose models. Models that exceed the 10²³ FLOP threshold the Commission set last July, and were placed on the market after August 2, 2025, come under direct AI Office supervision. The Office gets to investigate, request source code access, appoint independent evaluators, and impose fines up to 3% of worldwide annual turnover or €15 million, whichever is higher. Those numbers are not in the small-print zone for any model provider you actually use.
Annex III high-risk obligations. The Annex III list covers employment, credit, biometric ID, education, essential services, law enforcement. Critically, an Annex III system is always classified high-risk where it performs profiling of natural persons. Profiling means automated processing of personal data to assess work performance, economic situation, preferences, behavior, location, or movement.
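A quick way to internalize that definition is to run your own feature lists against it. The category names below are my paraphrase of the profiling definition, and `profiling_signals` is a hypothetical helper; mapping your real feature store onto these buckets is the actual audit work.

```python
# Paraphrased profiling categories from the regulation's definition;
# these names are mine, chosen for readability.
PROFILING_CATEGORIES = {
    "work_performance", "economic_situation", "preferences",
    "behaviour", "location", "movement",
}

def profiling_signals(features: set[str]) -> set[str]:
    """Return which features in an automated pipeline touch profiling categories."""
    return features & PROFILING_CATEGORIES

# A lookalike-audience pipeline scoring purchase intent from browsing data:
pipeline_features = {"behaviour", "economic_situation", "device_model"}
hits = profiling_signals(pipeline_features)
```

If `hits` is non-empty and the use case sits in an Annex III area, that pipeline is in high-risk territory; if it's empty, you still check the use case, but the profiling trigger doesn't fire.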
The Code of Practice on AI-generated content, which covers labeling and watermarking specs, is expected to land May or June. So the final guidance arrives after the deadline, not before. Plan accordingly.
Why this lands hardest on adtech and martech
Read the profiling definition again. Most of the modern marketing stack qualifies. Audience segmentation tools that build behavioral profiles, dynamic creative optimization that tailors output to user signals, automated bidding systems that score individual conversion likelihood, attribution models that infer economic situation from intent: all of those are profiling under the regulation. Whether they fall into Annex III high-risk depends on the use case, but the surface area is bigger than most agencies have mapped.
The Belgian DPA noticed early. Its 2026-2028 strategic plan names large-scale advertising technology platforms, cross-border data sharing among data brokers, and large-scale profiling systems as priority enforcement targets. They've also said they're moving from a complaint-driven model to proactive inspections. With around 90 staff and a hiring freeze through 2029, that means strategic case-picking. Adtech is the strategic pick.
This is the same pattern Norway's DPA ran on Schibsted's privacy-fee model, where the regulator labeled the 39-krone opt-out a "privacy as a luxury" problem. EU regulators are picking adtech specifically and going at it with whichever regulation gives them the biggest fines. The AI Act gives them more room than GDPR did.
Four pieces of compliance you can wire this quarter
None of this is legal advice. It's the prep work agencies and in-house teams should already be doing.
- Inventory every consumer-facing AI system. Ad copy generators, chat support agents, AI-personalized landing variants, generated visuals on social, voice features, dynamic email subject lines. Article 50 deployer obligations attach to whoever operates the system, not whoever built it. If you deploy a chatbot, you're a deployer.
- Mark generated content now, not in August. Get watermarking and provenance metadata on AI image and video output. Label deepfakes, including the "as a [celebrity]" creative trick some teams still run. Add chatbot self-identification. The technical specs will land late, but the disclosure UX you can ship this month.
- Map your DCO and segmentation pipelines for profiling exposure. Anywhere your stack scores individuals on economic situation, employment status, health proxies, or essential-service access, you're closer to Annex III than you think. Lookalike-audience pipelines built on third-party data brokers tend to be the most exposed.
- Get your upstream model provider's classification in writing. OpenAI, Anthropic, Google, Mistral all have to declare themselves general-purpose model providers, with or without systemic risk. Your downstream obligations as a deployer depend on what they classified themselves as. Send the email this week.
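On the second item, here's a sketch of a machine-readable provenance record you could attach to generated assets while the Code of Practice specs are pending. The IPTC "trainedAlgorithmicMedia" digital source type is a real vocabulary term; the sidecar file layout and field names are my stopgap assumptions, to be replaced once the final labeling spec lands.

```python
import json
import tempfile
from pathlib import Path

# Real IPTC NewsCodes term for content created by a generative model.
IPTC_AI_SOURCE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def write_provenance_sidecar(asset: Path, generator: str) -> Path:
    """Write a JSON sidecar marking the asset as AI-generated (illustrative layout)."""
    sidecar = asset.with_name(asset.name + ".provenance.json")
    sidecar.write_text(json.dumps({
        "asset": asset.name,
        "digital_source_type": IPTC_AI_SOURCE,  # machine-readable "AI-generated"
        "generator": generator,
        "ai_generated": True,
    }, indent=2))
    return sidecar

# Usage against a stand-in file in a temporary directory:
with tempfile.TemporaryDirectory() as tmp:
    asset = Path(tmp) / "hero_banner.png"
    asset.write_bytes(b"")  # stand-in for a generated image
    sidecar = write_provenance_sidecar(asset, generator="image-gen-v2")
    record = json.loads(sidecar.read_text())
```

A sidecar won't survive every distribution path the way embedded metadata would, but it gets the disclosure habit and the audit trail in place now, and swapping the output format later is cheap.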
René Judak, quoted in the PPC Land coverage, put it bluntly: "the real risk is not preparing too early...but discovering too late that you do not even have control." Forrester's Enza Iannopollo, talking to Computerworld, said the same thing in fewer words: "waiting is not an option."
What May 13 means in practice
The next trilogue is in roughly two weeks. There are three plausible outcomes.
If a deal lands on May 13, Annex III high-risk obligations slip to December 2027, and Annex I to August 2028. Article 50 transparency still triggers August 2. The AI Office still gets enforcement power on general-purpose models August 2. So even the optimistic scenario doesn't buy as much time as the headlines suggest.
If May 13 stalls again, the next realistic window is June. At some point the calendar runs out. By late June or early July there isn't enough legislative runway left to amend before August 2. The deadline holds in full.
If Parliament and Council both hold their March positions, the most likely middle ground is a partial deal: perhaps an Annex I exemption for sectoral products, with Annex III still slipping a year. That still leaves Article 50 in place. It still leaves the AI Office armed.
I don't think the most useful read here is panic. It's that the planning horizon for AI compliance just got considerably shorter than the lobby was hoping for, and the parts of the regulation marketers actually touch every day were never the parts being delayed in the first place.
By Notice Me Senpai Editorial