Bing Is Shrinking Copilot's Citation Click Target to a Single Mark

Bing's Copilot test leaves only the small numbered citation clickable. Source: Search Engine Roundtable.

Microsoft Bing is testing Copilot Search results that strip the inline link off the cited answer text and leave only the small superscript citation number clickable. The change shrinks each citation from a full-sentence target to one character. Bing's own AI Performance report, which launched in February, still does not show click data, so publishers cannot directly measure what the test costs them.

Barry Schwartz at Search Engine Roundtable flagged the test on April 27, 2026, with screenshots showing Copilot Search results where the link no longer wraps the cited passage. Only the numbered citation mark at the end is hyperlinked. Until now, Bing's behavior matched Google's AI Overviews and ChatGPT inline citations: hover or tap anywhere in the cited sentence and you get the source. In the test variant, you have to find the number.

Microsoft is calling this a test, not a rollout. Tests in this surface area tend to ship.

Why a smaller click target costs more than you'd think

Fitts's Law is one of the oldest results in interaction design: the time and accuracy cost of hitting a target scales with the ratio of travel distance to target size. Shrink a click target by an order of magnitude and you do not lose 10% of your clicks. You lose a lot more, and the loss is non-linear.
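
For reference, the Shannon formulation most HCI work uses, where MT is movement time, D is the distance to the target, W is the target's width, and a and b are empirically fitted device constants:

```latex
% Shannon formulation of Fitts's Law
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

The logarithm is the index of difficulty, measured in bits; the pixel numbers below feed straight into it.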

A line of cited text in a Copilot answer is roughly 400 to 600 pixels wide on desktop. A superscript citation mark is maybe 12 pixels of glyph plus a few pixels of padding. Conservatively that is a 30x reduction in surface area. On mobile, where most Copilot interactions actually happen, the gap is worse, because thumbs are not precision instruments and the surrounding text becomes a much larger competitor for taps. From what I have seen on accidental-click data in other ad surfaces, when the surrounding chrome is text-heavy, taps land everywhere except the intended target.
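
To put rough numbers on that, a minimal sketch using the index of difficulty from the formula above. The 300 px travel distance is my assumption for illustration, paired with the paragraph's roughly 500 px line and 16 px mark:

```python
import math

def index_of_difficulty(distance_px: float, width_px: float) -> float:
    """Fitts's index of difficulty in bits: log2(D / W + 1)."""
    return math.log2(distance_px / width_px + 1)

# Assumed geometry, for illustration only.
DISTANCE = 300.0  # px from where the pointer rests to the target
for label, width in [("full cited line", 500.0), ("citation mark", 16.0)]:
    print(f"{label:>16}: ID = {index_of_difficulty(DISTANCE, width):.2f} bits")

# Output:
#  full cited line: ID = 0.68 bits
#    citation mark: ID = 4.30 bits
```

Going from roughly 0.7 bits to 4.3 bits of difficulty is the Fitts's Law way of saying the tap goes from nearly free to genuinely effortful, before mobile thumbs enter the picture.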

The kicker is that this lands at a moment when Copilot was supposed to be the more publisher-friendly option. Microsoft's November 2025 announcement bringing AI Search into Copilot specifically called out "more prominent citations" as the differentiator from chat-only competitors. Search Engine Land's coverage took Microsoft at its word. Five months later, the citations are still there. The clickable area around them is what is shrinking.

The Pedowitz Group's recent breakdown of how Bing Copilot sources answers argued that Copilot embeds citations more aggressively per query than Google's AI Mode does, but converting those citations into clickthroughs has been the open problem. This test makes the open problem worse on purpose.

Microsoft's own reporting will not show the damage

Bing launched its AI Performance report inside Webmaster Tools on February 9, 2026. The dashboard shows five things, per Microsoft's own announcement: total citations, average cited pages, grounding queries, page-level citation activity, and a visibility trend line.

What it does not show: clicks. Or click-through rate. Or sessions. Or anything that connects "Copilot cited me" to "a person ended up on my site."

Search Engine Land called this out at launch. So did Search Engine Journal. The framing in the trade press at the time was that Microsoft was at least giving publishers something Google's Search Console barely does, which is true. The catch is that the something is incomplete in exactly the dimension that matters once a click-target test like this one ships.

Microsoft can run the test, watch internal engagement metrics, and decide it does not hurt user satisfaction. Publishers cannot verify the trade-off because the data they would need was never released.

Most SEO dashboards will miss this entirely

Bing has been a rounding error in most paid and organic dashboards for so long that the muscle for monitoring it has atrophied. Most teams I have watched check their Bing AI-citation data once a quarter, if that. Some not at all.

That is the problem. If Bing's test cuts publisher click-through by, say, another 25 to 30 percent over the next two months, the only signal in your GA4 will be a small dip in bing.com / referral sessions. If you do not have a baseline from before the test, you will not see the dip. You will just notice in Q3 that Bing-driven sessions are lower than they used to be, and you will quietly attribute it to "AI search erosion" without ever knowing which platform or which UI change actually did it.

This is the same shape as the AI Overviews CTR rebound we covered earlier this month. The rebound looked real until you noticed it tracked Google's impressions bug, not a behavior shift. Same template here: a UX or measurement change that sounds minor, in a system where publishers cannot independently verify the traffic impact, gets quietly absorbed because nobody had a clean before-and-after.

The 15-minute baseline you should pull today

This is the action. Do it before the test rolls out wider. From what I have seen, this kind of A/B usually expands within four to eight weeks if internal metrics hold.

  1. Open the AI Performance report in Bing Webmaster Tools (Performance and reports, then AI Performance). Export the last 90 days of total citations and page-level citation activity to CSV. This is your "before" Bing surface area on the citation side.
  2. In GA4, build a segment where the session source or referring hostname contains bing or copilot. Export the last 90 days of sessions, engaged sessions, and conversions. Save it as your Bing baseline.
  3. Run the same export from server-side logs if you have them, because GA4 will collapse some Copilot referrals into (other). Server logs see the actual Referer header, so the two numbers will not match exactly, and the gap is informative on its own. A minimal parsing sketch follows this list.
  4. Set a calendar reminder for May 27 and June 27. Re-pull both files. Compare to baseline. Any drop greater than your normal Bing variance window is signal that the test is hitting your traffic.
  5. Bonus, if you have a publisher-side log pipeline: pull the ratio of bingbot and Copilot user-agent crawl hits to bing.com sessions. The crawl-to-session ratio is a leading indicator. If Bing keeps citing you at the same rate but sessions fall, that is a click-target compression effect, not a deindex or a ranking move.
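
For steps 3 and 5, a minimal sketch assuming a combined-format access log. The log path, regex, referrer hostnames, and crawler user-agent substrings are all my assumptions, not anything Microsoft documents; adjust them to your stack:

```python
import re
from collections import Counter

# Assumptions: combined log format (request, status, bytes, "referer",
# "user-agent"), and these hostname / UA substrings.
LOG_PATH = "access.log"
LINE_RE = re.compile(r'"[^"]*" \d{3} \S+ "(?P<referer>[^"]*)" "(?P<ua>[^"]*)"')
REFERRER_HINTS = ("bing.com", "copilot.microsoft.com")
CRAWLER_HINTS = ("bingbot", "BingPreview", "Copilot")

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = LINE_RE.search(line)
        if not m:
            continue
        referer, ua = m.group("referer"), m.group("ua")
        if any(hint in ua for hint in CRAWLER_HINTS):
            counts["crawl_hits"] += 1        # step 5 numerator
        elif any(hint in referer for hint in REFERRER_HINTS):
            counts["referred_hits"] += 1     # step 3 export, step 5 denominator

ratio = counts["crawl_hits"] / max(counts["referred_hits"], 1)
print(f"Bing/Copilot crawl hits:    {counts['crawl_hits']}")
print(f"Bing/Copilot referred hits: {counts['referred_hits']}")
print(f"Crawl-to-referral ratio:    {ratio:.2f}")
```

It counts referred hits rather than stitched sessions on purpose. For a ratio you only ever compare against your own baseline, a simple, stable definition beats a clever one.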

The baseline is the entire point. Bing has shipped the test. Microsoft will not retroactively give you click-through data. Your CSV from today is the only proof of what was lost.

The quieter pattern across all three AI surfaces

There is a thing playing out across Google AI Overviews, Bing Copilot, and ChatGPT search where citation visibility goes up while click-through quietly goes down. Liz Reid's "bounce clicks" defense at Google was the same pattern: citations are fine, and the problem is that you cannot prove what they are worth.

Each platform is making the citation more decorative and the click less likely. On paper, that sounds like a UX preference. And sometimes it is. But the cumulative math gets brutal: if Copilot, AI Overviews, and ChatGPT each cut publisher click-through by 20 to 30 percent over twelve months, the surviving traffic looks like a different internet, and the publishers funding the content the AIs cite are doing it for less and less return.
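
A back-of-envelope version of that math, with an invented traffic mix; every number below is an illustrative assumption, not a measurement:

```python
# Illustrative only: assumed monthly AI-surface referral sessions and the
# per-surface click-through cuts the paragraph above posits.
baseline = {"ai_overviews": 40_000, "copilot": 8_000, "chatgpt": 12_000}
cuts = {"ai_overviews": 0.20, "copilot": 0.30, "chatgpt": 0.25}

after = {k: v * (1 - cuts[k]) for k, v in baseline.items()}
total_before, total_after = sum(baseline.values()), sum(after.values())
print(f"AI-surface sessions: {total_before:,} -> {total_after:,.0f} "
      f"({1 - total_after / total_before:.0%} blended loss)")

# If a similar 25% haircut repeats across three rounds of UI tightening:
survival = total_before * (1 - 0.25) ** 3
print(f"Three compounding 25% cuts leave {survival / total_before:.0%} of baseline")
```

One round of cuts reads as noise on a dashboard. Three rounds is the "different internet" in the paragraph above.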

Microsoft, to be fair, gave publishers the most visibility-side data of any AI search platform when it shipped the AI Performance report. That part is real progress. The click data is the missing half, and the click-target test is the move that turns the missing half into the part that actually matters.

The one bet I would make on this

Most teams will see "Bing test" and skip the post. Bing is small enough that it does not feel urgent. In six months, the post-mortem will say "we should have measured." The teams who export the CSV today have a baseline that nobody else does.

Personally, I would rather be holding the boring spreadsheet on May 27 than hunting for one.

Notice Me Senpai Editorial