TikTok's Attribution Portfolio Is the Defendant Running Its Own Trial

TikTok's Attribution Portfolio reports the same 35%-to-23x undervaluation gap that independent MMM vendors have been reporting for two years. The instrument and the defendant are now the same party.

On May 13, 2026, TikTok launched its Attribution Portfolio, a four-tool suite inside Ads Manager that reports on TikTok's role in conversions where TikTok was not the last click. The headline finding: more than 1 in 4 TikTok-attributed conversions happen after a user sees an ad and then navigates directly to the site within the same day. TikTok built the measurement, sourced the numbers, and wrote the verdict.

Read that sentence again. The platform that benefits most from disputing last-click attribution just shipped a measurement portfolio to dispute last-click attribution. The product is well built. The data is plausible. The position is also impossible.

Inside the portfolio

The portfolio bundles four tools under Attribution Analytics in Ads Manager, three of which are generally available globally for web campaigns. Only the Overview tab is still in Alpha, and it is limited to web purchase events.

Performance Comparison lets you swap click-through and view-through windows side by side, so you can see how the conversion count and cost per acquisition shift across View Content, Add to Cart, Initiate Checkout, and Purchase events. Useful for sanity-checking what your current window choice is doing to your CPA before anyone in finance asks.

Time to Conversion displays elapsed time between ad interaction and conversion. TikTok's own data inside the tool reports 5.1 days as the average and says 80% of conversions land within 9 days. That is a longer tail than most performance teams plan for, which matters when you are rotating creative weekly and judging a winner after 72 hours.
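To see why the tail matters, here is a minimal sketch of the window math. The lag values are synthetic, not TikTok's distribution; the point is that a 72-hour judging window structurally misses late conversions.

```python
# Given per-conversion lags (days between ad touch and purchase), how much
# of the eventual volume does a short judging window actually see?
# The lags below are illustrative, not TikTok data.

lags_days = [0.2, 0.5, 1, 1, 2, 3, 4, 5, 6, 8, 9, 12]

def share_within(lags, window_days):
    """Fraction of conversions that land inside the judging window."""
    return sum(1 for d in lags if d <= window_days) / len(lags)

for w in (3, 7, 9):
    print(f"{w}-day window captures {share_within(lags_days, w):.0%} of conversions")
```

With this toy distribution, a 3-day window sees only half the conversions that a 9-day window would credit, which is exactly the gap that makes a weekly creative rotation look worse than it is.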

Touchpoints to Conversion is the path-analysis view. It surfaces which campaign combinations correlate with conversion, and where there is lift from running awareness and lower-funnel placements together. Per the TikTok help documentation, the paths are limited to ad interactions inside TikTok itself.

Assisted Conversion is the headline feature. It reports paths where a user engaged with a TikTok ad and then converted on another channel, with TikTok claiming the assist. Per TikTok's own documentation, the biggest category being recovered is the same-day view-then-direct path. There is also a new Google Analytics 4 connector that feeds GA4 conversion signals back into TikTok bidding. TikTok's early 2025 internal testing reported a 54% lift in conversions and a 27% drop in cost per action through that connector.

The 23x problem

Every third-party measurement vendor that has run media mix modeling on TikTok in the last 18 months has reported a version of the same finding: last-click underestimates TikTok, by a lot. Kochava's September 2025 study on North American Android and iOS apps in Q1 2025 found 35% higher incremental impact under MMM than under last-touch. Ovative Group has separately reported TikTok ROI at roughly 23 times what last-click models show.

If you accept those studies at face value, TikTok has a credible grievance. The platform sits near the top of a lot of funnels and gets thanked by the bottom.

The problem is what TikTok did with the grievance. They built their own courtroom.

Why measuring yourself is still measuring yourself

The Attribution Portfolio runs on TikTok's pixel, TikTok's conversion logic, TikTok's match key, and TikTok's definition of what an assist actually is. The Adweek piece covering the TikTok Shop off-site pixel made the point earlier this year: when the platform owns the measurement layer, the platform decides what counts.

Assisted Conversions credits TikTok for any view-then-direct path that lands within the window TikTok chose. That is a defensible methodological choice. It is not an independent one. The same underlying data, run by a different vendor with a different match key and a different lookback rule, would produce a different number. Possibly higher. Probably different.

None of this means the tool is dishonest. The point is the structure. A platform whose ad spend depends on being valued cannot also be the auditor that decides whether it was valued correctly. The pattern is too familiar. Meta's own engaged-view window changes earlier this year were a quieter version of the same move. Pixels are scoring instruments. The platform that makes the instrument tends to win the test it administers.

What independent measurement actually looks like

If you genuinely want to know whether last-click is undercrediting TikTok in your account, the honest tests are not inside Ads Manager. They are designed around it.

The cleanest test is geo-lift. Pick two matched markets, run TikTok in one and not the other, hold everything else steady, measure total demand at the market level. TikTok itself ships a Geo Lift methodology, which is the version that runs on TikTok's data. The methodology travels, though: an independent MMM vendor can run the same design with cleaner controls.
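The core readout of that design is a difference-in-differences: use the control market's trend to project what the test market would have done without TikTok, then compare. A minimal sketch with made-up weekly revenue figures:

```python
# Two-market geo-lift readout (difference-in-differences).
# All figures are illustrative, not real campaign data.

def geo_lift(test_pre, test_during, control_pre, control_during):
    """Incremental revenue in the test market, using the control
    market to account for the baseline trend both markets share."""
    # What the test market would have earned if it had simply
    # followed the control market's trend.
    trend = control_during / control_pre
    expected = test_pre * trend
    incremental = test_during - expected
    return incremental, incremental / expected

inc, lift_pct = geo_lift(
    test_pre=100_000, test_during=126_500,       # TikTok on during test
    control_pre=98_000, control_during=107_800,  # TikTok never on
)
print(f"incremental revenue: {inc:,.0f} ({lift_pct:.1%} lift)")
```

The matching of markets and the control of confounders is where the real work lives; the arithmetic itself is deliberately this simple, which is why a third party can replicate it.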

The next-best test is a holdout cell inside your existing buying program. Turn TikTok off for 10% of your geo or audience for four weeks. Measure total brand revenue, not channel-attributed revenue. If revenue holds, the assists were assists in name only. If it drops noticeably, you have an independent confirmation of what the portfolio is reporting.
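The holdout readout can be sketched the same way: compare revenue per user between the exposed and held-out cells, against a materiality threshold you set in advance. The numbers and the 2% threshold below are illustrative assumptions, not a recommendation.

```python
# Sketch of the 10% holdout readout. Compare total revenue per user in
# the held-out cell against the exposed cell. Figures are synthetic.

def holdout_readout(exposed_rev, exposed_users, holdout_rev, holdout_users,
                    materiality=0.02):
    """Relative revenue-per-user drop in the holdout, plus a verdict."""
    exposed_rpu = exposed_rev / exposed_users
    holdout_rpu = holdout_rev / holdout_users
    drop = (exposed_rpu - holdout_rpu) / exposed_rpu
    verdict = "incremental" if drop > materiality else "assists in name only"
    return drop, verdict

drop, verdict = holdout_readout(
    exposed_rev=900_000, exposed_users=90_000,  # TikTok running
    holdout_rev=95_000, holdout_users=10_000,   # TikTok dark for 4 weeks
)
print(f"revenue-per-user drop in holdout: {drop:.1%} -> {verdict}")
```

Note the metric: total brand revenue per user, not channel-attributed revenue, so the walled garden never touches the scorecard.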

The same logic applies on the GA4 side. The new connector pushes GA4 conversions into TikTok bidding, which is genuinely useful. It is also a closed loop. If TikTok is reading TikTok's view of your GA4 numbers and using those numbers to optimize TikTok spend, you want a control on what its optimization is actually optimizing toward. Pull your own GA4 export at least monthly and compare to what shows in Ads Manager. From what I have seen, anything beyond a 15% delta is worth a conversation with your account team.
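The monthly reconciliation is a one-liner worth automating. A sketch, with the 15% threshold from above as a tunable default and hypothetical conversion counts:

```python
# Reconcile your own GA4 export against the Ads Manager figure for the
# same conversion event. Counts below are hypothetical.

def reconcile(ga4_conversions, ads_manager_conversions, threshold=0.15):
    """Relative delta vs your GA4 export, and whether it crosses the
    threshold worth a conversation with the account team."""
    delta = abs(ads_manager_conversions - ga4_conversions) / ga4_conversions
    return delta, delta > threshold

delta, escalate = reconcile(ga4_conversions=4_200,
                            ads_manager_conversions=5_100)
print(f"delta: {delta:.1%}, escalate: {escalate}")
```

The delta is computed against your own export, not the platform's number, for the same reason the rest of this section exists: the baseline should live outside the walls.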

Where this fits in the broader attribution mess

We already wrote about how AI tools strip roughly 70.6% of referrer signal, leaving last-click reports increasingly fictional in any AI-influenced buying journey. TikTok's portfolio is responding to a real measurement crisis in the right general direction. Anything that pushes practitioners off pure last-click is good for the discipline.

The trap is treating walled-garden assist data like incrementality data. It is not. Assists are correlation, dressed in a UI that looks like causation. Incrementality is a holdout test you ran on purpose. The portfolio gives you the first thing and lets you mistake it for the second.

Three moves this quarter

First, turn the portfolio on and pull a baseline. The Performance Comparison view in particular is useful for stress-testing your current 7-day click and 1-day view windows against alternatives. Second, do not reallocate budget based on Assisted Conversions alone. Treat it as one input. Third, run an independent test. A four-week geo holdout costs you a small revenue dip in one market and buys you a number you can defend to a CFO who has already heard the 23x claim and is not going to accept it on the vendor's word.

I do not think the answer here is that TikTok is wrong about being undervalued. The MMM vendors are saying it too, and the underlying behavior, the same-day view-then-direct path, is real enough that anyone running a Shopify store has watched it happen. The answer is that the cure for walled-garden underreporting was never going to be walled-garden self-reporting. It was always going to be a holdout test someone outside the walls actually ran.

Notice Me Senpai Editorial