Mastercard's Verifiable Intent Just Made Brand Signals a Cryptography Problem

Mastercard open-sourced Verifiable Intent on March 5 and aligned it to Google's AP2 protocol. The trust layer is also the brand layer.

Mastercard open-sourced its Verifiable Intent specification on March 5, 2026, co-developed with Google and aligned to Google's Agent Payments Protocol (AP2). The framework creates a tamper-resistant cryptographic record of what a cardholder authorized when an AI agent buys on their behalf, and it will be integrated into Mastercard Agent Pay's intent APIs over the coming months. Brands without machine-readable signals risk getting filtered out of the agent's choice set entirely.

What Mastercard actually shipped on March 5

The spec lives on GitHub. Google, Fiserv, IBM, Checkout.com, Basis Theory, and Getnet committed to it on day one. Fiserv has already pushed Mastercard Agent Pay into its merchant platform. Verifiable Intent uses a technique called Selective Disclosure, which shares only the minimum data needed with each party to verify authorization or resolve a dispute, and nothing else.

The protocol's core innovation is two artifacts: an Intent Mandate (the cryptographically signed instructions the user gave the agent) and a Payment Mandate (a separate signal flagging whether the user was actually present when the transaction fired). Together they give merchants and issuers a verifiable answer to the question that has stalled agentic commerce since GPT-3.5: did a human authorize this, and what specifically did they authorize?

AP2 itself sits on top of Google's Agent2Agent (A2A) protocol and Anthropic's Model Context Protocol, with around 60 partners now signed on. American Express, PayPal, Coinbase, Adyen, JCB, ServiceNow, and Worldpay are all on the list. Every payments rail and every cloud is racing to become the verification layer for transactions where no human ever clicks a buy button.

The marketing read is cleaner than the press release suggests. The trust layer is also the brand layer. Whatever standard wins the spec wins the right to decide which brand signals an agent will treat as authoritative, and which it will quietly ignore.

The cardholder is no longer the audience that matters

For three decades, brand teams optimized for the cardholder. Loyalty cashback. Discount codes timed to weekend baskets. "Spend $50, get 15% off." Promo emails at 6am Tuesday because that is when she opens email. All of it human-readable. None of it survives the moment software is doing the buying.

When the buyer is an AI agent, the only inputs that count are the ones the agent can verify cryptographically. The query is system-to-system. The browsing path is an API call. The "ad creative" is structured data the agent can parse, weight, and trust.

Mastercard's EVP of marketing for the Americas, Rustom Dastoor, called this "technical branding" at CES 2026: getting a brand differentiated at the algorithm level, not just the visual one. The subtext is that Mastercard processes roughly 150 billion transactions a year, which gives it aggregated spending signal an agent can weight. A JPEG ad creative is, to the same agent, essentially noise.

This is the part most marketers I have talked to are still underestimating. Anthropic ran 186 agent-to-agent project deals where the losing brands could not even tell they had lost. There was no impression log to pull. There was no SERP they ranked third on. The agent just did not pick them. Verifiable Intent is the cleaner version of that world: a record of what the user authorized, what the agent was allowed to spend, and which merchant fulfilled it. If your product feed is a pile of PDFs and your deals sit inside an email creative, you are not in that record.

Technical branding, decoded

Strip the buzzword. Technical branding is three things, in this order.

One. Structured product data with cryptographic provenance. Schema.org Product markup, Merchant Center feeds, and increasingly signed product attestations. The agent has to verify that "this is the real Nike running shoe at the real Nike-authorized price," not a spoofed listing on a marketplace clone.
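To make that concrete, here is a minimal sketch of the JSON-LD payload an agent would parse from a product page. Field names follow schema.org's Product vocabulary; the product, brand, and values are invented for illustration, and real signed attestations would sit on top of this, not inside it.

```python
import json

# Hypothetical product data. Keys follow schema.org Product; values are
# illustrative only, not a real listing.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Runner 3",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "2314",
    },
}

def render_jsonld(data: dict) -> str:
    """Serialize the payload that goes inside <script type="application/ld+json">."""
    return json.dumps(data, indent=2)

print(render_jsonld(product_jsonld))
```

Everything an agent needs to weight the listing (price, availability, rating, brand entity) is a typed field here, not a sentence on the page.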

Two. Machine-readable claims, not adjectives. "Best-selling running shoe" is a slogan. "Returned at 4.1% versus category median 9%" is a claim an agent can compare across SKUs. The closer your product page reads to a structured comparison, the more often an agent surfaces you.
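The difference is easiest to see in code. A toy sketch of how an agent might rank two SKUs once the claim is a numeric field rather than a slogan; every figure here is invented for illustration, and real agents weigh many more signals.

```python
# Invented figures: two SKUs with a machine-comparable return-rate claim.
skus = [
    {"sku": "brand-a-shoe", "return_rate": 0.041, "price": 129.00},
    {"sku": "brand-b-shoe", "return_rate": 0.090, "price": 119.00},
]

CATEGORY_MEDIAN_RETURN = 0.09  # assumed category benchmark

def claim_score(item: dict) -> float:
    """A lower return rate relative to the category median scores higher."""
    return CATEGORY_MEDIAN_RETURN - item["return_rate"]

best = max(skus, key=claim_score)
print(best["sku"])  # the SKU with the comparable claim wins the slot
```

"Best-selling" gives the scoring function nothing to compute with; "4.1% versus a 9% median" does.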

Three. Agent-callable APIs for inventory, pricing, and authorization. Per the AP2 specification, agents exchange Verifiable Digital Credentials that include an Intent Mandate (what the user authorized) and a Payment Mandate (whether a human is present). If your stack cannot respond to those calls in real time, an agent will route around you to a competitor that can.
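To show the shape of the check your stack would need to answer, here is a deliberately simplified sketch. Real AP2 mandates are verifiable digital credentials signed with public-key cryptography; this uses a shared-secret HMAC purely to illustrate how tampering with an authorized intent breaks verification. The field names and key are hypothetical, not the AP2 wire format.

```python
import hashlib
import hmac
import json

SECRET = b"demo-shared-secret"  # hypothetical key; real AP2 uses public-key credentials

def sign_mandate(mandate: dict) -> str:
    """Sign a canonical JSON serialization of the mandate."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str) -> bool:
    """Constant-time check that the mandate was not altered after signing."""
    return hmac.compare_digest(sign_mandate(mandate), signature)

# Illustrative mandates mirroring the two artifacts described above.
intent_mandate = {
    "type": "intent",
    "user_id": "u-123",
    "instruction": "running shoes under $150, size 10",
    "spend_cap_usd": 150,
}
payment_mandate = {"type": "payment", "human_present": False}

sig = sign_mandate(intent_mandate)
print(verify_mandate(intent_mandate, sig))                      # untouched mandate verifies
tampered = dict(intent_mandate, spend_cap_usd=5000)
print(verify_mandate(tampered, sig))                            # raised spend cap fails
```

The point of the sketch: "did a human authorize this, and what exactly" becomes a signature check, not a dispute email thread.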

Honestly, the order matters more than the items. Most teams want to skip to step three and ship an "agent integration" without ever cleaning up step one. From what I have seen across e-commerce stacks, the schema layer is where most agent reads silently fail today.

The schema audit you can run before your team meeting

Three things, under thirty minutes total.

Pull your top ten product or service pages and run them through Google's Rich Results Test. Count how many return Product schema with price, availability, aggregateRating, review, and brand fields populated. The benchmark from what I have seen: most mid-size DTC sites land at 40 to 60 percent coverage. Anything under that, and an agent will struggle to weight your listing against competitors.
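If you want to script that count instead of eyeballing the Rich Results Test, a stdlib-only sketch like the one below works on saved page HTML. The five required fields match the list above, checked as top-level keys for simplicity (a stricter audit would also look inside `offers` for price and availability); the sample page is invented.

```python
import json
from html.parser import HTMLParser

# Fields the audit above asks for, checked as top-level Product keys.
REQUIRED = {"name", "brand", "offers", "aggregateRating", "review"}

class JsonLdExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(data)

def product_field_coverage(html: str) -> float:
    """Fraction (0.0-1.0) of REQUIRED fields present on the page's Product schema."""
    parser = JsonLdExtractor()
    parser.feed(html)
    found = set()
    for block in parser.blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        for item in data if isinstance(data, list) else [data]:
            if isinstance(item, dict) and item.get("@type") == "Product":
                found |= REQUIRED & item.keys()
    return len(found) / len(REQUIRED)

# Hypothetical page with only three of the five fields populated.
sample_page = (
    '<html><head><script type="application/ld+json">'
    '{"@type": "Product", "name": "Trail Runner 3", "brand": "ExampleBrand",'
    ' "offers": {"price": "129.00"}}'
    "</script></head></html>"
)
print(f"coverage: {product_field_coverage(sample_page):.0%}")
```

Run it over your top ten pages and you have the 40-to-60-percent number for your own site, not the category average.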

Audit your robots.txt and your AI crawler policy. If you are blocking GPTBot, ClaudeBot, Google-Extended, or Anthropic's user agent across the board, you are also blocking the agents that will eventually act on the user's behalf. Most marketing teams I have seen quietly blocked everything in 2024 and never revisited. The trade-off is real (training data leakage versus agent visibility), but the default of blanket-blocking looks worse every quarter. PayPal just made 400 million verified buyers free to target via an ad ID precisely because deterministic, machine-readable signal is now the asset.
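That audit is scriptable too. The sketch below uses Python's stdlib `urllib.robotparser` against a robots.txt body; the sample file mimics the blanket-block default described above, and the user-agent strings are the commonly published crawler tokens, so verify them against each vendor's current documentation.

```python
from urllib.robotparser import RobotFileParser

# Commonly published AI crawler tokens; confirm against vendor docs before relying on them.
AI_AGENTS = ["GPTBot", "ClaudeBot", "Google-Extended", "anthropic-ai"]

def blocked_agents(robots_txt: str, url: str = "https://example.com/") -> list:
    """Return which AI crawlers this robots.txt blocks from fetching the given URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in AI_AGENTS if not parser.can_fetch(ua, url)]

# Hypothetical robots.txt in the 2024-era "quietly block everything" style.
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

print(blocked_agents(sample))  # the two explicitly disallowed crawlers
```

If the list it prints for your own robots.txt is every agent, that is the decision worth revisiting, not a line worth leaving in place by inertia.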

Check whether your merchant feed flows into an agent-readable spec. AP2 is open and Verifiable Intent is open-sourced. If your engineering team has not at least pulled the docs, you are running on the assumption that 2026 commerce looks like 2022 commerce. It does not.

Where this lands in 18 months

My read: by Q3 2027, large checkout platforms will quietly default to AP2 or a Verifiable Intent variant, the way they defaulted to PCI compliance years ago. Agentic transaction share, which is currently a rounding error, will probably push past 5% of US e-commerce on platforms that integrate first. Brands with clean structured data and signed product feeds will see their unit economics improve, because the agent removes the comparison-shopping shopper from the funnel and just transacts. Brands that did nothing will see flat revenue and no obvious culprit.

The other shift, the quieter one, is that the brand team and the engineering team are about to share an audit. The product detail page is no longer just a marketing surface. It is a verifiable claim. Schema markup, price object, availability flag, aggregate rating, brand entity reference. All of it becomes underwriting data for an agent's decision, the same way a cardholder used to weigh "is this brand on Instagram trustworthy" in their head.

I do not think the winners here are the ones with the biggest paid-social budgets. They are probably the ones who treated their product schema like brand work for the first time.

The shift is quieter than the AI Overviews story, but it is the same pattern. Once a layer above the consumer starts mediating decisions, your job is to be readable to that layer, not just to the consumer. Mastercard just wrote the first credible spec for that layer. The product page your team ships next quarter is the brand asset that will or will not show up inside it.

Notice Me Senpai Editorial