Air Canada's C$812 Chatbot Loss Sets the Real Marketing AI Liability Floor
The British Columbia Civil Resolution Tribunal ordered Air Canada to pay C$812 in February 2024 for misinformation its chatbot gave a customer, rejecting the airline's defense that the bot was a "separate legal entity." Marketing teams treating AI legal risk as a 2027 EU AI Act problem are watching the wrong calendar. The real exposure is chatbot statements, derivative-work claims, and FTC AI-washing actions, all live in 2026.
I've been watching how marketing leadership talks about AI compliance, and the pattern is pretty consistent. The EU AI Act keeps getting cited as the milestone, the Code of Practice is on the calendar, and the assumption is that real exposure starts when the general-purpose AI obligations bind. That framing skips past a much more boring problem: the cases hitting marketing teams right now are negligent misrepresentation, copyright, and FTC substantiation. None of those needed a new statute.
The C$812 number nobody is taking seriously enough
Moffatt v. Air Canada is small in dollars and large in precedent. Jake Moffatt asked the airline's chatbot about bereavement fares, the bot quoted incorrect policy, and Air Canada argued in tribunal that the chatbot was "a separate legal entity that is responsible for its own actions." Foster & Company's writeup of the negligent-misrepresentation ruling is the version legal teams keep emailing each other. The tribunal flatly rejected the defense and held the airline responsible for "all the information on its website," chatbot or otherwise.
Read the opinion and the rule is simple. If your customer-facing AI generates a statement, that statement is yours. Dentons' analysis frames it as negligent misrepresentation, which means you don't need bad intent. You just need the chatbot to say something a reasonable person would rely on. That covers product capability claims, return policy answers, pricing quotes, eligibility statements, and the bulk of what marketing teams now ship into Intercom, Drift, and the dozen lightly-supervised support flows running on top of ChatGPT.
The C$812 award is not the warning. The warning is that the precedent is now sitting on the desk of every plaintiff lawyer in North America with a client whose AI chatbot promised something the company won't honor.
The Copyright Office already said your AI ad copy isn't yours
In January 2025 the U.S. Copyright Office published Part 2 of its Copyright and Artificial Intelligence report, and the operative line is direct: "prompts alone do not provide sufficient human control to make users of an AI system the authors of the output." Two months later the D.C. Circuit affirmed the same logic in Thaler v. Perlmutter. The Copyright Office's registration guidance now requires applicants to disclose AI-generated content and exclude anything more than de minimis from the claim.
For most marketing teams, prompt-only is the whole workflow. You feed the brief into ChatGPT, edit lightly, ship. That output is unregistrable, which sounds abstract until you try to enforce against a competitor lifting your ad copy line-for-line.
The other half of the same problem sits on the input side. The New York Times v. OpenAI case is now scheduled for summary judgment in April 2026, with Judge Stein letting the main copyright claims move forward and a January order forcing OpenAI to produce 20 million de-identified chat logs in discovery. Trademark dilution is part of the Times complaint, too. From what I've seen, that's where the marketing exposure actually lives. The model can absorb a competitor's brand voice with no operator intent, and the output ends up in your campaign before anyone checks.
FTC enforcement is the live wire, not the EU AI Act
Operation AI Comply hit a dozen cases in 2025 and kept going through the administration change. The agency went after agentic AI productivity claims specifically, and the substantiation standard is the same one applied to any other product representation. If you say your AI does X, you need verifiable evidence X is what it does.
For marketing buyers, this lands in two places. The first is the language you use in your own case studies and tool reviews. "AI-powered attribution," "AI-driven creative scaling," and "machine learning bid optimization" all need to map to something the underlying tool actually does, not a category badge the vendor stuck on. The second is the language clients and stakeholders write off the back of your work. If your dashboard quotes a 35% lift "from our AI workflow" and the underlying lift is from a manual segment change, that claim becomes the agency's problem the minute it gets reused in pitch decks.
The workflow change that actually matters
Search Engine Land's piece on the safest AI workflow walks through seven steps, but the one that resets a marketing team's day-to-day is the risk-tiered lane structure. Green lane is brainstorming and internal drafts. Yellow lane is external drafts that go through human review. Red lane is anything customer-facing, anything making a factual or compliance claim, anything that ends up in legal-adjacent surfaces like product pages, refund policy bots, and lead-gen quotes.
The practical move is to label every prompt template you have, this week, by lane. Most teams will find their marketing-operations tooling has quietly let red-lane work slide into yellow-lane review without anyone noticing. The audit usually takes about three hours for a mid-size team.
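A minimal sketch of what that labeling pass can look like, assuming you keep a simple list of templates with a surface type and a named reviewer. Every template name, field, and surface category here is hypothetical, invented for illustration, not a real schema:

```python
# Hypothetical prompt-template audit: assign each template a risk lane,
# then flag red-lane templates with no named human reviewer.
RED_SURFACES = {"chatbot", "product_page", "refund_bot", "lead_gen_quote"}

templates = [
    {"name": "blog-brainstorm", "surface": "internal", "reviewer": None},
    {"name": "support-chatbot-reply", "surface": "chatbot", "reviewer": None},
    {"name": "newsletter-draft", "surface": "email_draft", "reviewer": "editor"},
]

def lane_for(template):
    """Red: customer-facing surfaces. Yellow: external drafts.
    Green: internal-only work."""
    if template["surface"] in RED_SURFACES:
        return "red"
    if template["surface"] != "internal":
        return "yellow"
    return "green"

for t in templates:
    t["lane"] = lane_for(t)

# Red-lane templates with no reviewer are the liability gap the audit exists to find.
gaps = [t["name"] for t in templates if t["lane"] == "red" and not t["reviewer"]]
print(gaps)  # → ['support-chatbot-reply']
```

The point of the sketch is the structure, not the tooling: the same three-lane tagging works in a spreadsheet, as long as every red-lane entry has a named reviewer attached.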
From there, the rule that holds up is the one that flips the human role: humans draft the prompt and the brief, the AI generates options, humans select and edit. Not the other way around. That keeps a defensible human-authorship claim on the work, which is what the Copyright Office is looking for, and it keeps the customer-facing surface in the loop of the people responsible for substantiation.
How this connects to the state-law wave
The federal landscape is one half. The state landscape is the other, and it moves faster than most marketing teams plan for. Colorado's SB 26-189 originally exempted marketing AI from its consequential-decision rules, then the pricing clause pulled a chunk of it back in. We covered the mechanics earlier this week in the Colorado breakdown. The pattern is going to repeat. Each state's "AI exemption for marketing" is going to ship with a carve-back, and the place to watch is whatever clause talks about pricing, financial offers, or personalized eligibility. The carve-backs are where marketing actually operates.
Three audits to run before Friday
First, list every AI surface your marketing function ships to customers. Chatbots, support flows, search summaries on your site, AI-generated email subject lines if you're using ESP-side generation, lead-routing bots, anything that talks to a user without a human in the loop. That list is your red-lane inventory. On most mid-size teams it ends up longer than the AI tool inventory the IT side maintains, which is usually the first sign something's off.
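One way to make that gap visible is a plain set comparison between the surfaces marketing ships and the tools IT has on record. Both inventories below are invented for illustration; the technique is just a set difference:

```python
# Hypothetical inventories: AI surfaces the marketing function ships to
# customers vs. AI tools the IT side tracks. Names are illustrative only.
marketing_surfaces = {
    "support_chatbot",
    "site_search_summary",
    "esp_subject_lines",
    "lead_routing_bot",
    "pricing_quote_bot",
}
it_tool_inventory = {"support_chatbot", "esp_subject_lines"}

# Surfaces marketing runs that IT never logged: the untracked red lane.
untracked = sorted(marketing_surfaces - it_tool_inventory)
print(untracked)
# → ['lead_routing_bot', 'pricing_quote_bot', 'site_search_summary']
```

If the `untracked` list is non-empty, that delta is your red-lane inventory problem in one line of output.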
Second, write a one-page policy that says where each AI tool sits in the green/yellow/red structure, who reviews each lane, and what the escalation path is when the AI generates a customer-facing statement that turns out to be wrong. Skadden's analysis of the Copyright Office report has good language to lift for the human-authorship section if you want a starting point.
Third, the boring one. If you make AI claims in your own marketing, audit them against what the tool measurably does. Pull the receipts before someone else does. The FTC's Operation AI Comply pattern suggests the agency is going through verticals methodically, and marketing automation has been a recurring target.
I don't think the marketing teams that end up on the wrong side of one of these cases will be the ones doing something wild. From what I've seen, it'll be a chatbot quote that survived a vendor migration, a piece of unedited generative copy that drifted into a brand-trademarked phrase, or a tool review that overstated what an integration does. Boring, all of it. Which is part of why the EU AI Act framing is misleading. The risk isn't statutory, and it isn't 2027. It's already on the docket.
Notice Me Senpai Editorial