Influencer Casting Is Becoming Index Fund Investing
By Notice Me Senpai Editorial
The biggest shift in influencer marketing this year has nothing to do with a new platform or a viral format. It's structural. Agencies are quietly replacing their creator casting process with AI screening tools, and the result isn't just faster discovery. It's a completely different campaign architecture.
Dentsu launched CATS (Creator & Trends Studio) in January, built on a Meta API partnership. Later has had an AI matching system running for six months. Walmart is deploying "hundreds of thousands" of creators. And agencies using these tools report working with 30 to 40 percent more influencers per campaign on average.
Those numbers are interesting on their own. But the part that actually changes how you spend money is the logic underneath them.
The stock-picking era of influencer marketing is ending
For years, the default influencer strategy looked like this: find 8 to 12 creators who feel right, negotiate rates, run a campaign, hope for the best. The selection process was part spreadsheet, part gut feel, part "my colleague follows this person." It worked well enough when the creator pool was small. It doesn't scale, and it leaves a lot of performance on the table.
What AI discovery tools actually enable is a portfolio approach. Instead of placing concentrated bets on a handful of mid-tier or macro creators, you can screen tens of thousands of candidates against your campaign brief and run with 30, 50, or 100+ creators at once. Mostly nano and micro. The per-creator cost drops. The total reach often doesn't.
I keep thinking about this as the difference between stock picking and index fund investing. Stock picking (the old way) requires deep research on each pick, carries high concentration risk, and depends heavily on the judgment of whoever's doing the picking. The index approach spreads risk across a broad base, reduces the cost of any single bad pick, and tends to outperform over time. Not always. But consistently enough that it's hard to argue against the math.
From what I've seen working with brands that have tested both approaches, the portfolio model tends to produce more stable results. You're not praying that one creator's post goes viral. You're building a base of coverage where the aggregate performance is more predictable. And honestly, some of the individual creators who end up performing best are ones no human would have picked manually. That's the part most agencies are still getting their heads around.
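To make the portfolio idea concrete, here's a minimal sketch of what screening a large candidate pool against a brief could look like. This is a toy illustration, not any vendor's actual algorithm; the `Creator` fields, the relevance measure, and the 50K follower cap are all assumptions chosen to mirror the nano/micro focus described above.

```python
from dataclasses import dataclass

@dataclass
class Creator:
    handle: str
    followers: int
    engagement_rate: float  # e.g. 0.045 means 4.5%
    topics: set[str]

def relevance(creator: Creator, brief_topics: set[str]) -> float:
    """Fraction of the brief's topics this creator covers (hypothetical measure)."""
    if not brief_topics:
        return 0.0
    return len(creator.topics & brief_topics) / len(brief_topics)

def screen(candidates: list[Creator], brief_topics: set[str],
           n: int = 50, max_followers: int = 50_000) -> list[Creator]:
    """Filter to nano/micro creators, rank by relevance x engagement, keep top n."""
    pool = [c for c in candidates if c.followers <= max_followers]
    pool.sort(key=lambda c: relevance(c, brief_topics) * c.engagement_rate,
              reverse=True)
    return pool[:n]
```

The point of the sketch is the shape of the process: a cheap scoring pass over thousands of candidates replaces hours of manual scrolling, and a human reviews the shortlist rather than the whole pool.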
Walmart's "hundreds of thousands" number is not hyperbole
According to Digiday, Walmart is now deploying "hundreds of thousands" of creators, with a focus on engagement metrics rather than follower counts. That phrasing stopped me for a second. Hundreds of thousands. That's not an influencer program. That's closer to a distributed sales force.
No human team is vetting hundreds of thousands of creators individually. The only way that number works is with AI doing the initial screening, matching, and performance modeling. And what Walmart seems to have figured out, maybe faster than most agencies, is that a nano-influencer with 2,000 genuinely engaged followers in a specific product niche can drive more measurable action than a creator with 200,000 followers and a scattered audience.
This echoes something we've been tracking with YouTube's affiliate play, where the platform dropped its shopping eligibility threshold to 500 subscribers. The direction is the same everywhere: smaller creators, tighter audiences, better signal.
In one creator program I helped structure last year for a DTC brand (roughly $15k/month influencer budget), we shifted from 8 macro creators to 45 micro-creators. Total spend stayed flat. Engagement rate went from 1.8% to 4.2%. Sales attribution improved too, though I'd hedge that number since we were also running paid amplification on the best-performing posts, which muddies things a bit.
Elizabeth Arden's results and what they actually tell you
Dentsu's CATS tool produced a case study with Elizabeth Arden that's worth looking at carefully. The campaign saw a 14.3% increase in unaided ad recall and a 41% rise in sales conversions. Those are genuinely strong numbers for an influencer program, especially the ad recall metric. Unaided recall is hard to move.
But I want to be precise about what those numbers mean. Dentsu isn't claiming the AI wrote better content or picked objectively better creators. The tool analyzes creators by subject matter, profile relevance, and trend participation. What it seems to do well is match the right creator to the right brief with more consistency than a human team scrolling through Instagram for three hours. It's a better starting point for human oversight, not a replacement for it.
And that distinction matters. The agencies getting value from these tools aren't firing their influencer managers. They're freeing them up to spend time on the top-tier, white-glove relationships that still require a human touch. Celebrity partnerships, long-term brand ambassadors, creators who need custom contracts and careful creative direction. The AI handles the long tail. Humans handle the high-touch stuff. On paper, that sounds like an upgrade. And sometimes it is.
The risk, and I don't think agencies are talking about this enough, is that AI selection can create a kind of homogeneity. If every tool optimizes for engagement rate and brand relevance, you could end up with campaigns where every creator looks and sounds similar. The same aesthetic, the same cadence, the same audience demographics. One of the points raised in the Digiday piece is that AI has the potential to reduce demographic bias in selection, which is a real benefit. But that potential is only realized if the training data and scoring models are built to prioritize diversity. It doesn't happen automatically.
How to actually restructure your next campaign around this
If you're running influencer campaigns and haven't tested a portfolio approach yet, here's a framework that doesn't require buying enterprise AI tools.
Step 1: Split your creator budget 70/30. 70% goes to a broad pool of nano and micro-creators (1K to 50K followers). 30% stays reserved for 2 to 3 proven mid-tier or macro creators. This gives you the portfolio diversification without abandoning relationships that already work.
Step 2: Use engagement rate as your primary filter, not follower count. The benchmark worth knowing: nano-influencers (under 10K followers) average 4 to 6% engagement on Instagram. Macro-influencers (100K+) average 1 to 2%. If a creator in the 5K to 15K range is hitting above 5%, they're almost certainly worth testing.
Step 3: Set a kill threshold. Not every creator will perform. That's the point of the portfolio. Give each creator one deliverable. If it hits below 2% engagement or zero trackable conversions within 7 days, don't rebook them. Move that budget to a new creator. Treat it like ad creative testing: rotate, measure, cut, reinvest.
Step 4: Track attribution at the creator level. Unique discount codes, UTM parameters, or dedicated landing pages per creator. This sounds obvious but I still see brands running 20+ creator campaigns with a single shared link. You can't optimize what you can't see.
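The four steps above can be sketched as a handful of small functions. The function names are mine and the thresholds are the ones stated in the steps; treat this as a starting template to adapt, not a finished system.

```python
from urllib.parse import urlencode

def split_budget(total: float) -> dict[str, float]:
    """Step 1: 70% to the broad nano/micro pool, 30% to proven creators."""
    return {"portfolio_pool": round(total * 0.70, 2),
            "proven_creators": round(total * 0.30, 2)}

def worth_testing(followers: int, engagement_rate: float) -> bool:
    """Step 2: the 5K-15K creator hitting above 5% engagement heuristic."""
    return 5_000 <= followers <= 15_000 and engagement_rate > 0.05

def keep_creator(engagement_rate: float, conversions: int) -> bool:
    """Step 3: kill threshold after one deliverable; rebook only if
    engagement is at least 2% and there was at least one conversion."""
    return engagement_rate >= 0.02 and conversions > 0

def tracked_link(base_url: str, creator_handle: str, campaign: str) -> str:
    """Step 4: per-creator UTM parameters so attribution stays visible."""
    params = {"utm_source": "influencer",
              "utm_medium": "social",
              "utm_campaign": campaign,
              "utm_content": creator_handle}
    return f"{base_url}?{urlencode(params)}"
```

For example, `split_budget(15_000)` allocates $10,500 to the portfolio pool and $4,500 to proven creators, and `tracked_link` gives every creator their own trackable URL so the kill-threshold decision in step 3 runs on real per-creator data.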
Later's AI system, which has been running for about six months now, does something interesting here. It models content performance using historical engagement data before you even book the creator. Personally, I think that kind of predictive scoring is where the real value sits. Not in finding creators (any decent search tool can do that) but in predicting which ones will actually convert for your specific campaign.
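Later's model is proprietary, so I can't show you what it does. But even a toy version of predictive scoring illustrates the idea: weight a creator's recent posts more heavily than older ones, since recent engagement is usually the better signal. The decay factor here is an arbitrary assumption.

```python
def predicted_engagement(history: list[float], decay: float = 0.7) -> float:
    """Exponentially weighted average of past engagement rates,
    most recent post first. decay < 1 downweights older posts."""
    if not history:
        return 0.0
    weights = [decay ** i for i in range(len(history))]
    return sum(h * w for h, w in zip(history, weights)) / sum(weights)
```

A creator trending up (recent 6%, older 2%) scores higher than one trending down (recent 2%, older 6%), even though their simple averages are identical. That directional sensitivity is the kind of thing a flat engagement-rate filter misses.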
The part nobody's measuring yet
There's a second-order effect of the portfolio approach that I think will matter more than the immediate performance gains. When you work with 50 or 100 creators instead of 10, you generate 50 or 100 pieces of content. Even if half of those posts underperform organically, you now have a library of authentic, creator-generated content that can be repurposed for paid social, email, product pages, even out-of-home.
I've seen brands where the influencer content outperformed their in-house creative in paid amplification by 2 to 3x on cost-per-click. The influencer program wasn't even measured on that. It was a side effect. And factoring it in changes the ROI math considerably.
The agencies investing in AI discovery tools aren't just solving a casting problem. They're building a content supply chain. And the brands that figure out how to operationalize that supply chain, connecting the influencer program to the paid media program to the organic content calendar, are probably going to have a meaningful edge over the next couple of years.
I don't think this kills the big-name influencer deal. Those still work for awareness and cultural positioning. But the mid-tier space (creators with 50K to 300K followers charging premium rates for inconsistent performance) is the segment that gets squeezed. When AI can reliably identify 30 nano-creators who collectively outperform one mid-tier creator at a fraction of the cost, the mid-tier value proposition gets genuinely hard to justify.
The smart move probably isn't to wait for your agency to adopt one of these platforms. It's to start testing the portfolio structure now, even manually. Run your next campaign with three times the number of creators at one-third the individual budget. See what the aggregate data looks like. I suspect most teams will be surprised by how much more predictable it is than the old way of doing things.