Your B2B Ad Algorithm Thinks Sales Team Vacations Are Targeting Problems

When your closer goes on holiday, the algorithm blames your targeting. That is the whole problem.

If you run B2B paid media with sales cycles longer than 30 days, there is a good chance your ad algorithm is solving the wrong problem. Not because the platform is broken. Because you are feeding it data that confuses its picture of what "good" looks like.

A recent piece in Search Engine Land made an argument that I think more B2B advertisers need to hear, even if they will not like it: stop optimizing your campaigns to final sale. Optimize to lead submission instead, with proper value assignment. The rest is noise.

Your algorithm cannot tell the difference between bad targeting and Dave being on holiday

The core issue is deceptively simple. In long sales cycles, the gap between when a lead arrives and when it converts (or does not) can be weeks or months. During that gap, a lot of things happen that have nothing to do with your ad targeting or creative quality. Your best closer goes on vacation. A junior rep covers unfamiliar accounts. The finance team delays approvals because it is quarter-end. A prospect ghosts for three weeks because their boss changed priorities.

When the conversion rate drops because of any of these factors, the algorithm sees it as a targeting problem. It does not know that Dave is at a conference and the person covering his pipeline has half his close rate. It just sees fewer conversions from the same lead profile and starts adjusting who it shows your ads to.

As the Search Engine Land piece puts it: "when the conversion rate drops because Dave is away and a junior team member is covering his accounts, the algorithm sees it as a targeting problem rather than a staffing issue." The algorithm then starts chasing a different audience profile, one that correlates with the (artificially lower) conversion rate during that period. By the time Dave gets back and starts closing again, your targeting has drifted.

This is not a hypothetical. If you have ever looked at your Google or Meta campaign data and noticed performance mysteriously dipping and recovering without any changes on the ad side, operational factors downstream are probably the explanation.
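To see why the platform cannot tell these apart, here is a minimal simulation. The lead quality never changes; only the close rate drops while the senior rep is away. All numbers (lead quality, close rates, weekly volume) are invented for illustration, and the ad platform only ever sees the conversion counts at the bottom:

```python
import random

random.seed(42)

# Hypothetical parameters: identical lead quality every week, but the
# close rate halves while the senior rep is away on holiday.
TRUE_LEAD_QUALITY = 0.30      # probability a lead is genuinely good
NORMAL_CLOSE_RATE = 0.40      # senior rep closes 40% of good leads
COVERAGE_CLOSE_RATE = 0.20    # junior cover closes 20% of good leads

def weekly_conversions(n_leads, close_rate):
    """Count the closed deals the ad platform observes in one week."""
    return sum(
        1 for _ in range(n_leads)
        if random.random() < TRUE_LEAD_QUALITY and random.random() < close_rate
    )

weeks = ["w1", "w2 (Dave away)", "w3 (Dave away)", "w4"]
rates = [NORMAL_CLOSE_RATE, COVERAGE_CLOSE_RATE, COVERAGE_CLOSE_RATE, NORMAL_CLOSE_RATE]
for week, rate in zip(weeks, rates):
    print(week, weekly_conversions(200, rate))
```

The conversion counts dip in weeks two and three even though the leads are drawn from exactly the same distribution. An algorithm optimizing on those counts has no variable for "staffing" to attribute the dip to, so targeting absorbs the blame.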

The "Santa Claus Rally" that breaks your attribution

The most vivid example from the article comes from financial services. In December, conversion rates spike dramatically, sometimes by as much as 150% in the third week, because year-end bonus pressure pushes prospects to make decisions they have been sitting on. Then rates crater during the holidays when offices close.

The leads have not changed. The targeting has not changed. The only thing that changed is urgency on the buyer side and availability on the seller side. But the algorithm does not see "year-end pressure" or "office is closed." It sees conversion rate surging and then collapsing, and it tries to optimize around patterns that are entirely seasonal and operational.

If you are optimizing to final sale in a business like this, your algorithm is essentially trying to predict Dave's vacation schedule and your prospect's fiscal year-end. It is not equipped to do that, and the harder it tries, the more your targeting drifts from the people who are actually good leads.

The counterintuitive fix: optimize earlier, not deeper

The solution is not better attribution or more data points downstream. It is the opposite. Move your optimization point earlier in the funnel, to the moment of lead submission, and assign values to those leads based on historical close rates.

The framework looks like this:

  1. Pull 12 months of historical data on which leads actually closed and at what deal size
  2. Group leads by characteristics that correlate with close likelihood (company size, title, industry, source, form completeness, whatever your CRM tracks)
  3. Assign expected revenue values to each group. High-likelihood leads from enterprise accounts might get $850. Mid-range from SMB might get $420. Lower-likelihood informational requests might get $120.
  4. Use value-based bidding in Google or Meta to optimize for estimated lead quality rather than downstream conversion
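Steps 1 through 3 can be sketched in a few lines. This is an illustrative toy, not the article's implementation: the CRM rows, segment names, and deal sizes below are invented, and a real version would segment on several attributes (title, industry, source, form completeness) rather than one:

```python
from collections import defaultdict

# Stand-in for a 12-month CRM export: (lead_segment, revenue_if_closed_else_0)
history = [
    ("enterprise", 12000), ("enterprise", 0), ("enterprise", 8000),
    ("smb", 3000), ("smb", 0), ("smb", 0), ("smb", 4200),
    ("info_request", 0), ("info_request", 1200), ("info_request", 0),
]

def segment_values(rows):
    """Expected revenue per lead = total closed revenue / total leads, per segment.

    This single number already folds together close likelihood and deal
    size, which is what the value-based bid needs.
    """
    revenue, counts = defaultdict(float), defaultdict(int)
    for segment, deal in rows:
        revenue[segment] += deal
        counts[segment] += 1
    return {s: round(revenue[s] / counts[s]) for s in counts}

values = segment_values(history)
print(values)
```

At lead-submission time you look up the new lead's segment in this table and report that dollar value as the conversion value, so the platform optimizes toward expected revenue rather than raw lead count.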

The important distinction here is that you are not ignoring downstream data. You absolutely should continue measuring what closes and what does not. But you stop letting the algorithm optimize against it, because the signal-to-noise ratio in downstream conversion data is too low for the algorithm to learn from reliably.

As the article puts it: "You absolutely should continue measuring... but it just should not be what you are optimizing to."

Why most B2B teams resist this

I think there are two reasons this approach gets pushback, and both are understandable.

First, it feels like you are giving up control. If you optimize to final sale, you can point at the campaign and say "this generated $X in closed revenue." If you optimize to lead submission with estimated values, you are working with projections. The CFO wants to see actual revenue, not modeled revenue. And fair enough, that is a reasonable thing to want. But the alternative, letting the algorithm chase downstream signals it cannot properly interpret, produces worse actual revenue. The reporting just looks more precise.

Second, the value assignment requires work. You need clean CRM data going back at least 12 months. You need someone who can segment leads by close likelihood with enough granularity to be useful. Most B2B marketing teams do not have that analysis readily available, and building it takes time. It is not a 10-minute fix.

But if your sales cycle is 60+ days and you are optimizing to final sale, you are essentially asking your ad platform to learn from outcomes that are two months old and contaminated by operational factors it cannot see. That is like trying to teach someone to cook by only telling them whether dinner guests enjoyed the meal two months later, without mentioning that the oven was broken for three of those weeks.

The specific test to run

If you are spending at least $10K per month on B2B lead generation and your sales cycle exceeds 30 days, run this test for 60 days:

Campaign A: Your current setup, optimizing to whatever downstream conversion event you are using now.

Campaign B: Same targeting, same creative, same budget. But optimize to lead submission with value-based bidding using your historical close rate data to assign lead values.

After 60 days, compare not just cost per lead, but revenue per dollar spent, using actual closed deals from both campaigns. The 60-day window matters. You need enough time for the downstream data to materialize so the comparison is fair.
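The end-of-test comparison reduces to one number per campaign. A minimal sketch, with invented spends and closed-deal values purely for illustration:

```python
def revenue_per_dollar(spend: float, closed_deal_values: list) -> float:
    """Actual closed revenue attributed to a campaign, per dollar of ad spend."""
    return sum(closed_deal_values) / spend

# Hypothetical 60-day results: same $20K budget for each arm.
campaign_a = revenue_per_dollar(20_000, [12_000, 8_000, 5_000])        # downstream-optimized
campaign_b = revenue_per_dollar(20_000, [9_000, 7_500, 6_000, 8_500])  # lead-submission + values

print(f"A: {campaign_a:.2f}  B: {campaign_b:.2f}")
```

Note that cost per lead alone could easily point the other way: the lead-submission campaign may buy cheaper leads, more expensive leads, or the same leads at a different mix, and none of that settles the question. Revenue per dollar from actual closed deals is the metric the test exists to compare.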

We wrote recently about how brands are growing their experimental ad budgets as returns decline on established channels. This kind of structural test, where you are changing what the algorithm optimizes against rather than tweaking creative or audiences, is exactly where experimental budget should go. It is not a creative test. It is an infrastructure test, and the results tend to be durable if the hypothesis is correct.

Measure everything, optimize selectively

The broader principle here extends beyond B2B, honestly. Any advertiser whose conversion event is significantly delayed or influenced by factors the ad platform cannot observe should think hard about where in the funnel they set their optimization target.

Google's value-based bidding documentation explains the mechanics, but it does not address the strategic question of when downstream optimization hurts more than it helps. That is a judgment call, and most of the default recommendations from platform reps assume short, clean conversion paths. If your path is neither short nor clean, the defaults are probably making things worse.

I do not think most B2B marketing teams are going to change their optimization strategy based on one article. These habits are deeply embedded, and the reporting infrastructure is built around downstream metrics. But the next time your campaigns mysteriously underperform for two weeks and then recover without any changes on your side, consider that the algorithm might be reacting to something happening in your sales team's calendar rather than anything wrong with your ads. That realization, more than any framework, is what usually gets people to rethink where their optimization point should sit.