Last-Click Buries the 10.21% Converter. AI Strips 70.6% of Its Referrers.
Dan Taylor argued in MarTech on May 12 that last-click attribution rewards the wrong work in an AI-first world. The number that makes his case concrete is the one his piece left out: 70.6% of AI-driven sessions arrive with no referrer header, which dumps the highest-converting traffic on the open web into the Direct bucket where last-click never sees it. That dark cohort converts at 10.21% versus 2.46% for everything else. The model is not just unhelpful for budget calls. It is directionally wrong.
Taylor is right. His argument is also incomplete.
Taylor is head of technical SEO at SALT.agency and has spent the last year writing about how AI search breaks the GA4 reports most marketing teams still treat as truth. His MarTech piece spells out the funnel logic cleanly: last-click "creates a strong and persistent bias toward channels and tactics closest to the moment of purchase," which means branded search and retargeting hoover up the credit while content, brand, and partnerships starve at the top.
That argument is right. What it skips is the size of the measurement break, which has widened fast over the past 12 months.
I think the reason this debate keeps recycling (and I include myself in this) is that smart marketers keep treating it as a philosophical disagreement about funnel logic. It is not. It is a plumbing failure. The HTTP referrer header that last-click depends on is being stripped at scale by the AI tools your prospects are actually using to decide what to buy.
The 70.6% strip is the part nobody is writing about
Loamly's analysis of 446,405 AI-driven sessions found that 70.6% landed on a site with no referrer attached. Most of those get bucketed as Direct in GA4. Across industries, ChatGPT alone accounts for roughly 87.4% of all AI referral traffic, which means the bulk of the dark traffic problem is just one behavior: people copying URLs out of a ChatGPT answer and pasting them into a browser instead of clicking the citation.
The same dataset shows the visitors hiding in Direct convert at 10.21% compared to 2.46% for non-AI sessions. A roughly 4x conversion gap. Last-click sees a Direct session land on a product page and credits Direct, or worse, hands the credit to the branded search or remarketing click that came later in the same session. The upper-funnel work that earned the AI citation in the first place gets nothing.
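If you want to sanity-check the scale, the arithmetic fits in a few lines. One assumption of mine is baked in below: that every no-referrer session lands in Direct, when in practice GA4 routes most, but not all, of them there.

```python
# Back-of-envelope math on the Loamly numbers cited above.
# Assumption (mine, not the study's): all no-referrer sessions land in Direct.

total_ai_sessions = 446_405   # AI-driven sessions in the Loamly analysis
dark_share = 0.706            # share arriving with no referrer header
dark_cvr = 0.1021             # conversion rate of the hidden cohort
baseline_cvr = 0.0246         # conversion rate of non-AI sessions

dark_sessions = total_ai_sessions * dark_share
hidden_conversions = dark_sessions * dark_cvr

print(f"Sessions buried in Direct: {dark_sessions:,.0f}")
print(f"Conversions last-click misattributes: {hidden_conversions:,.0f}")
print(f"Conversion gap: {dark_cvr / baseline_cvr:.1f}x")
```

On those figures, roughly 315,000 sessions and 32,000 conversions sit in a bucket last-click cannot interrogate.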
For context on the volume: ChatGPT alone processes about 5.4 billion global monthly visits as of January 2026, even after slipping from 86.7% to 64.5% of Gen AI traffic share year over year. The pipeline is not shrinking. Other AI assistants are just taking some of it.
Google AI Overviews make the picture worse on purpose. AI Overview clicks pass no distinct referrer and arrive as ordinary organic Google sessions. So your "branded organic" lift in 2026 is partly a discovery channel you cannot see, and last-click cannot tell the difference between a brand search that came from autocomplete and one that came from a citation inside an AI Overview. We covered the operational fix for the referrer side of this in our earlier piece on the AI traffic denominator problem, but the attribution model issue is the other half of the same break.
21.5% still defend last-click. The other 78.5% have not shipped the fix.
This is where the survey data gets uncomfortable. Measured's industry tracking shows only 21.5% of marketers believe last-click provides an accurate read on long-term platform impact, and nearly 75% say they are actively moving off it. But shipping tells a different story. MMM adoption has tripled since 2023, yet 46.9% of US marketers say they plan to invest more in MMM "next year," which is mostly the polite version of "we have not actually started."
Among the teams that do run MMM, eMarketer's November 2025 measurement survey reports 27.6% now cite it as their most reliable methodology, and 61% of US retail decision-makers use it for incrementality reads. Retail moved first because retail media exposed the problem fastest. A Sponsored Products click that converts inside two minutes is almost never the cause of the sale.
The gap between "we know last-click is broken" and "we shipped a replacement" is where most analytics budgets are sitting right now. Senior marketers will tell you in private that their attribution layer is fiction. They keep it in the deck because the CFO wants a single ROAS number and nobody wants to be the one who admits the number is wrong.
From what I have seen at agencies and in-house teams looking at this seriously, the honest move is to keep last-click for one job only: tactical decisions inside a single channel, like which Meta creative ran best in the same campaign window. Stop treating it as a strategic budget tool. Use it like a thermostat, not a compass.
The three-model stack that actually holds up in 2026
Taylor proposes three alternatives in his piece. None of them are new ideas. What is new is that all three are now buyable off the shelf, and the entry cost has dropped enough that mid-market teams can actually run them in parallel.
Incrementality testing. Run geo holdouts or audience-level controlled experiments. Triple Whale, Measured, and Haus all offer this for paid social and retail media. The eMarketer/TransUnion July 2025 survey found 27.6% of US brand and agency marketers now cite expanding incrementality testing as a top measurement priority. Retail has moved fastest here because incrementality is the only way to defend a Sponsored Products line item to a category manager.
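None of those vendors publish their exact models, and the sketch below is not any of their implementations. It is the minimal difference-in-differences read underneath every geo holdout, with hypothetical region counts standing in for real data:

```python
# Minimal geo-holdout lift read: compare conversion change in test
# markets (ads on) vs. holdout markets (ads off) over the same window.
# All numbers are hypothetical, for illustration only.

test = {"pre": 1200, "post": 1560}     # conversions in test geos
holdout = {"pre": 1180, "post": 1250}  # conversions in holdout geos

# Holdout growth is the organic baseline; anything above it is incremental.
baseline_growth = holdout["post"] / holdout["pre"]
expected_post = test["pre"] * baseline_growth   # what test geos would have done anyway
incremental = test["post"] - expected_post
lift = incremental / expected_post

print(f"Incremental conversions: {incremental:.0f}")
print(f"Lift over baseline: {lift:.1%}")
```

The vendor platforms add market matching, significance testing, and spend calibration on top, but the budget argument they produce is this number.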
Marketing mix modeling. Google open-sourced Meridian in 2024, which dropped the entry cost for MMM from "hire a consultant for $100K" to "two analysts can pilot it in a quarter." MMM is the only model that natively accounts for upper-funnel work, brand spend, and AI-driven dark traffic, because it does not care about user-level pathways. It runs on aggregate spend and outcome data, which means the referrer strip and the AI Overview problem do not break it.
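Meridian itself is a full Bayesian framework; the toy below is my own stripped-down stand-in, with entirely synthetic data, meant only to show why aggregate modeling shrugs off the referrer strip. There are no user-level paths anywhere in it to break.

```python
# Toy MMM: regress weekly revenue on adstocked channel spend.
# A simplified stand-in for what Meridian does with full Bayesian
# machinery; spend, decay, and revenue here are all synthetic.

import numpy as np

def adstock(spend, decay=0.5):
    """Carry a share of each week's spend into the following weeks."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

weeks = 52
rng = np.random.default_rng(7)
spend = {ch: rng.uniform(10, 100, weeks) for ch in ["search", "social", "content"]}
X = np.column_stack([adstock(s) for s in spend.values()])
revenue = X @ np.array([2.0, 3.5, 5.0]) + rng.normal(0, 40, weeks) + 500

# Ordinary least squares on aggregate weekly data: stripped referrers
# and AI Overview clicks never enter the model, so they cannot break it.
X1 = np.column_stack([np.ones(weeks), X])
coefs, *_ = np.linalg.lstsq(X1, revenue, rcond=None)
for ch, beta in zip(spend, coefs[1:]):
    print(f"{ch}: ~{beta:.2f} revenue per adstocked spend unit")
```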
Channel role assignment. Tag each channel as awareness, consideration, or conversion and hold each one to a different metric. Branded search is not allowed to take credit for new demand. Display is not allowed to be judged on last-click ROAS. This part requires no new tooling. It requires saying no to the platform's default report, which is harder than it sounds when the platform AE is on the QBR call.
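In practice this can live in a one-page config rather than a platform. A sketch, with channel names, roles, and KPIs that are examples rather than a recommendation:

```python
# Channel role assignment as a config, not a tool. The taxonomy below
# is illustrative; swap in your own channels and metrics.

ROLES = {
    "branded_search": {"role": "conversion",    "kpi": "CPA on existing demand"},
    "retargeting":    {"role": "conversion",    "kpi": "incremental CPA"},
    "paid_social":    {"role": "consideration", "kpi": "engaged-visit rate"},
    "content":        {"role": "awareness",     "kpi": "AI citations / branded lift"},
    "display":        {"role": "awareness",     "kpi": "incremental reach"},
}

def metric_allowed(channel: str, metric: str) -> bool:
    """Refuse last-click ROAS for anything that is not a conversion channel."""
    return not (metric == "last_click_roas" and ROLES[channel]["role"] != "conversion")

print(metric_allowed("display", "last_click_roas"))         # False
print(metric_allowed("branded_search", "last_click_roas"))  # True
```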
Run the three together and reconcile with judgment. Measured calls this "unified measurement." The Wunderkind 2026 performance guide calls it "post-last-click." Same idea. No single model survives AI-first journeys alone.
What to actually queue before Q3 planning
If you have one analytics ticket to write before the next planning cycle, make it this one. Add a custom GA4 channel grouping that classifies traffic from ChatGPT, Perplexity, Gemini, Copilot, and Claude as their own dedicated channel. Do not try to fix the referrer strip itself. Build the channel definition so that when you do get a referrer (which is about 29.4% of the time, per the Loamly data), you are not letting it route into Organic Search or Direct by default. Even that partial signal usually changes how a team budgets within a month.
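GA4 channel groups are regex conditions on session source configured in the Admin UI, not code, but it is worth testing the pattern before you ship it. Here is a sketch of the matching logic; the hostnames are my assumptions based on each assistant's public domains, so verify them against the sources actually showing up in your reports:

```python
# Mirror of a GA4 custom channel-group condition, for testing the
# regex before pasting it into the Admin UI. Hostnames are assumptions;
# confirm against your own source/medium report.

import re

AI_SOURCES = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com"
    r"|copilot\.microsoft\.com|claude\.ai)",
    re.IGNORECASE,
)

def classify(source: str) -> str:
    """AI sources get their own bucket; everything else falls through."""
    return "AI Assistants" if AI_SOURCES.search(source) else "Default channel grouping"

print(classify("chatgpt.com / referral"))  # AI Assistants
print(classify("google / organic"))        # Default channel grouping
```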
If you have two tickets, the second is a small geo holdout on one top-of-funnel channel. Pick the channel that gets the least credit in last-click and the most defense from your CMO. That is exactly where the incrementality gap lives, and it is the cheapest test to run with the highest political payoff inside the team.
The deeper bet is a Q3 MMM pilot scoped to one product line. Three months of historical media data plus one quarter of forward measurement is usually enough to get a defensible read against the platform-reported ROAS your CFO has been quoting all year.
The CFO conversation that is actually coming
CFOs are not going to keep funding measurement they cannot defend. The CFO in 2026 is reading the same eMarketer reports the CMO is, and seeing that last-click attribution buries 4x-converting traffic in a bucket called Direct. The next budget cycle is when this stops being a marketing question and becomes a finance one.
If your team is still walking the board through quarterly decks built on last-click ROAS, expect the question. The team that already piloted incrementality and channel-role tagging gets to answer it with a graph. The team that did not gets to answer it from defensive notes about why the number is wrong but probably directionally correct.
Honestly, I do not think this gets fixed evenly. Some teams will move now and quietly grow share because they are not panicking about a 30% Direct traffic bucket they cannot explain. Other teams will wait until the AI referral share crosses some psychological threshold (10%? 15%?) and then panic-buy a measurement vendor at peak vendor pricing. The first group is going to look prescient by Q4, and the truth will be that they read the Loamly study six months earlier and just acted on it.