You Fed Google Ads More Conversion Signals. It Optimized for the Easiest One.

A 76-to-1 noise ratio in your conversion setup means Smart Bidding optimizes for pageviews, not purchases.

Google's own documentation encourages you to feed Smart Bidding more conversion signals. More data points, better optimization, smarter algorithm. And for a while, the CPA in your dashboard seems to prove the point. It drops. Sometimes dramatically. A 40% CPA reduction looks great in a Monday morning report.

Then someone pulls the revenue numbers, and the room gets quiet.

Sarah Stemen laid out the problem in Search Engine Land this week, and it is worth reading in full because the mechanism she describes is something I suspect a lot of accounts are experiencing without realizing it. When you set micro-conversions (pageviews, add-to-carts, newsletter signups, scroll depth events) as Primary conversion actions, Smart Bidding optimizes toward them. Not alongside your real conversions. Toward them, specifically, because they are easier to generate.

The algorithm is doing exactly what you told it to do. The problem is that what you told it to do is wrong.

The 76-to-1 Noise Ratio Nobody Talks About

Stemen uses an example that made me actually stop and recalculate: a typical over-tracked account recording 500 pageviews, 200 add-to-carts, 50 form starts, and 10 purchases per period. That is 760 recorded conversion events for every 10 real sales, 750 of them micro-conversions. A 76-to-1 noise ratio.
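The arithmetic is worth making explicit, because it is easy to underestimate how lopsided the signal mix is. A minimal sketch, using Stemen's illustrative event counts (not thresholds from Google):

```python
# Illustrative event counts from Stemen's over-tracked account example
signals = {
    "pageview": 500,
    "add_to_cart": 200,
    "form_start": 50,
    "purchase": 10,
}

purchases = signals["purchase"]
total_events = sum(signals.values())       # 760 recorded conversion events
micro_events = total_events - purchases    # 750 of them are micro-conversions

noise_ratio = total_events / purchases     # events per real sale
print(f"{micro_events} micro-conversions, {noise_ratio:.0f}:1 noise ratio")
# 750 micro-conversions, 76:1 noise ratio
```

Swap in your own 30-day event counts and the same three lines tell you how diluted your Primary signal actually is.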

Smart Bidding processes all of those signals with roughly equal weight when they are marked as Primary actions. And because a pageview is infinitely easier to generate than a purchase, the algorithm naturally gravitates toward the path of least resistance. It finds users who will view pages. It finds users who will start forms. It does not find users who will buy, because you gave it 75 easier targets for every 1 hard one.

The CPA drops because you are paying less per micro-conversion. The dashboard looks excellent. Revenue stays flat because the algorithm was never optimizing for revenue in the first place.

I think a lot of PPC managers have experienced this without connecting it to their conversion setup. The campaigns "perform well" on paper. The client or CFO keeps asking why sales have not moved. And the answer is sitting in the conversion actions tab, not the campaign settings.

Google Recommends This. That Is Part of the Problem.

This is the tension nobody wants to name directly. Google's own documentation on Primary vs. Secondary conversion actions pushes you toward adding more signals, especially for accounts below the 30-50 monthly conversion threshold where Smart Bidding struggles to find patterns. The logic seems reasonable: if you only get 15 purchases a month, the algorithm does not have enough data to optimize. So add some higher-funnel events to increase the signal volume.

The problem is the exit ramp. Google tells you when to add micro-conversions (when volume is low). It does not tell you when to remove them (once your volume has grown past the threshold). So accounts that started with 15 monthly purchases and correctly added micro-conversions are now running at 80 monthly purchases while still feeding the algorithm 600+ micro-conversion signals alongside them. The crutch became the default, and nobody went back to clean it up.

Navah Hopkins at Optmyzr put it well: you should only mark high-intent micro-conversions as Primary, and only when you have correlation data proving those micro-conversions actually predict purchases. "Add-to-cart" with a 40% purchase rate is a useful signal. "Page scroll" with no measurable purchase correlation is noise.

The Volume Thresholds That Actually Matter

The practical framework from the research breaks down like this:

Below 30 real conversions per month: Micro-conversions as Primary actions are probably justified. The algorithm genuinely does not have enough data, and higher-funnel signals help it learn. But even here, be selective. Add-to-cart or qualified lead submission, not pageviews or time-on-site.

30 to 60 real conversions per month: Start reducing your reliance on micro-conversions. Move the softest ones (pageviews, scroll events) to Secondary. Keep only the highest-intent ones as Primary while the algorithm builds more confidence on real conversion data.

Above 60 real conversions per month: You should probably have zero micro-conversions set as Primary. The algorithm has enough real data. Every micro-conversion signal at this point is dilution, not addition. As Search Engine Land noted in a separate analysis, soft conversions at scale "actively train the algorithm in the wrong direction."

There is an important caveat here: removing micro-conversions from Primary means the algorithm resets its learning to some degree. Julie Friedman Bacchini, one of the sharpest PPC practitioners I follow, pointed out that removing Primary actions later means "starting over to a large extent on system learning." So do not rip them all out at once. Phase them out over 2-3 weeks, starting with the lowest-intent signals, and give the algorithm time to recalibrate.
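The tiers above reduce to a small decision helper. The 30 and 60 cutoffs are the framework's; the advice strings are my paraphrase of it, so treat this as a sketch rather than an official rule:

```python
def micro_conversion_guidance(monthly_real_conversions: int) -> str:
    """Map monthly real-conversion volume to the tiered framework above."""
    if monthly_real_conversions < 30:
        # Low volume: micro-conversions as Primary are probably justified,
        # but only high-intent ones (add-to-cart, qualified lead).
        return ("Primary micro-conversions justified; be selective: "
                "add-to-cart or qualified lead, not pageviews.")
    if monthly_real_conversions <= 60:
        # Transition zone: start demoting the softest signals.
        return ("Phase out: move pageviews and scroll events to Secondary; "
                "keep only highest-intent micro-conversions as Primary.")
    # Enough real data: every extra micro-signal is dilution.
    return ("Zero micro-conversions as Primary; extra signals are "
            "dilution, not addition.")

print(micro_conversion_guidance(15))   # low-volume account
print(micro_conversion_guidance(80))   # past the threshold
```

Pair it with the phase-out caveat: even when the helper says "zero," demote actions over 2-3 weeks rather than all at once.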

The Safety Discount Nobody Uses

If you are in the 30-60 conversion range and need to keep some micro-conversions as Primary, there is a valuation approach that PPC Land covered in their Smart Bidding analysis that seems to work reasonably well.

Calculate the baseline value of each micro-conversion: multiply its conversion-to-sale rate by your average order value. Then apply a 25% discount. If an add-to-cart converts to purchase at 35% and your AOV is $120, the baseline value is $42. With the safety discount, you assign it $31.50.
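As a formula, the valuation is two multiplications. A minimal sketch, with the 25% discount as the default:

```python
def micro_conversion_value(sale_rate: float, aov: float,
                           safety_discount: float = 0.25) -> float:
    """Baseline value = conversion-to-sale rate * AOV, then shave off a
    safety margin so the signal is undervalued rather than overvalued."""
    baseline = sale_rate * aov
    return round(baseline * (1 - safety_discount), 2)

# Add-to-cart that converts to purchase 35% of the time, $120 AOV
print(micro_conversion_value(0.35, 120.0))  # 31.5
```

The number this returns is what you would enter as the conversion value for that action in Google Ads; anything higher than the baseline is where the noise problem starts.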

The logic: undervaluing micro-conversions may slightly slow learning, but it does not distort bidding. Overvaluing them is what creates the noise problem in the first place. Err on the side of undervaluing. The algorithm can tolerate conservative values. It cannot tolerate inflated ones without eventually optimizing toward them.

The Diagnostic Metric Worth Adding

Jordan Brunelle suggested monitoring what he calls the micro-conversion-to-outcome ratio as a diagnostic metric. It is exactly what it sounds like: the number of micro-conversions generated per actual sale. Track this weekly. If the ratio is climbing (more micro-conversions per sale over time), the algorithm is drifting toward easier targets. If it is stable or declining, your signal quality is holding.

I would add one layer to this. Compare that ratio across campaigns. If Campaign A shows a 12:1 micro-to-sale ratio and Campaign B shows 45:1, Campaign B's conversion setup is almost certainly polluted. The fix is not campaign-level. It is in the conversion action configuration.
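The cross-campaign comparison is easy to automate from a weekly rollup. The campaign numbers below are hypothetical (chosen to match the 12:1 vs 45:1 example), and the "flag anything above 2x the account's best ratio" heuristic is mine, not Brunelle's:

```python
# Hypothetical weekly rollup: campaign -> (micro_conversions, sales)
weekly = {"Campaign A": (240, 20), "Campaign B": (450, 10)}

ratios = {name: micro / sales for name, (micro, sales) in weekly.items()}
best = min(ratios.values())  # tightest micro-to-sale ratio in the account

for name, ratio in ratios.items():
    # Flag campaigns drifting far above the account's best ratio
    flag = "  <- check conversion action setup" if ratio > 2 * best else ""
    print(f"{name}: {ratio:.0f}:1{flag}")
```

Run it weekly and the drift Brunelle describes shows up as a rising number, not a gut feeling.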

This connects to something we wrote about with B2B lead valuation recently: the algorithm optimizes for whatever signal you give it the most of. If you give it more noise than signal, it will optimize for noise and report great numbers while doing it. That is not a bug. That is the system working as designed, with bad inputs.

A 15-Minute Audit Before Your Next Reporting Cycle

Open Google Ads. Go to Goals, then Conversions, then Summary. Sort by conversion action. Look at two things:

First, how many conversion actions are set to Primary? If the answer is more than 3, you almost certainly have a signal pollution problem. The tightest accounts I have seen run 1-2 Primary actions, rarely more.

Second, what is the volume ratio? Count your micro-conversion events versus your actual sale or lead events over the last 30 days. If micro-conversions outnumber real conversions by more than 10:1, you are in noise territory regardless of what your CPA says.
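Both checks can be scripted against an export of your conversion-action data. The data shape below is a hypothetical export format (Google Ads does not hand you tuples); the thresholds (more than 3 Primary actions, 10:1 micro-to-real) are the ones above:

```python
# Hypothetical 30-day export: (action, is_primary, events, is_real_conversion)
actions = [
    ("purchase",    True,    80, True),
    ("add_to_cart", True,   600, False),
    ("pageview",    True,  2400, False),
    ("newsletter",  True,   150, False),
]

primary_count = sum(1 for _, primary, _, _ in actions if primary)
real = sum(n for _, _, n, is_real in actions if is_real)
micro = sum(n for _, _, n, is_real in actions if not is_real)

if primary_count > 3:
    print(f"{primary_count} Primary actions: likely signal pollution")
if real and micro / real > 10:
    print(f"micro:real = {micro / real:.0f}:1 -- noise territory")
```

An account like this one fails both checks, which is exactly the profile the audit is meant to catch.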

The fix is not complicated. Move low-intent actions to Secondary (they still report in your dashboards, they just stop influencing bids). Phase out the softest signals first. Give the algorithm 2-3 weeks to recalibrate. Your CPA will probably go up in the short term. Your revenue should follow within a billing cycle, and that is the metric that actually pays salaries.

The slightly uncomfortable conclusion here is that a lot of PPC accounts look efficient on their own terms while underperforming on the metrics that matter to the business. The dashboard is not lying. It is just answering the wrong question. And the fix starts with asking what, exactly, you told the algorithm to optimize for.