82 Chrome Extensions Sell 6.5M Users' Browsing Data and It's Perfectly Legal
Chrome's Limited Use policy bans this exact behavior. Eighty-two extensions are doing it anyway.

Security firm LayerX audited 6,666 Chrome Web Store privacy policies in April 2026 and found 82 extensions, including Stands AdBlocker (3M users) and Poper Blocker (2M users), explicitly reserving the right to resell user browsing data to third parties. Combined reach: more than 6.5 million users. Every one of those 82 extensions claimed legal cover through a privacy policy that 71% of Chrome Web Store extensions never publish at all.

The framing in most coverage, including the PPC Land writeup, is consumer-side: ad blockers spy on you, here's the irony, here are the names. That's the easy story. The harder one is on the buy side. Most of those 6.5M users are showing up in someone's lookalike seed, somewhere, right now, and the marketer who funded the buy has no idea the data was scraped through a tool the audience installed for the opposite reason.

The list LayerX named is short enough to memorize

The headline numbers are worth holding onto. LayerX flagged 82 extensions in total, with 6.5M+ active users between them. The five worth knowing by name:

  • Stands AdBlocker (3 million users). Sells browsing data for what its policy calls "market analytics."
  • Poper Blocker (2 million users). Collects identifiers and inferred attributes including health, religion, and sexual orientation.
  • All Block (500K users). Anonymized browsing data sold for analytics.
  • TwiBlocker (80K users). Browsing data sold to third parties.
  • Urban AdBlocker (10K users). Routes through the BiScience data broker.

Then there's the QVI network: 24 extensions under the "dogooodapp" brand, run by HideApp LLC, with about 800,000 users across Netflix, Hulu, Disney+, and Prime Video. They collect viewing history, content preferences, and demographics by matching emails against third-party identity databases. Buyers include studios, content creators, media research firms, and marketing agencies. Yes, that last one is us.

The full LayerX writeup names every extension, the data category, and the disclosed buyer category. Worth a read before you assume your enterprise allowlist hasn't quietly inherited a few of them.

How a privacy tool turns into a tracking pipeline

The mechanism is the part most people miss. Browser extensions get permission scopes the moment a user installs them, and "read and change all your data on websites you visit" is the default for almost any ad blocker, content blocker, or anti-popup tool. That permission is genuinely needed for the stated function. It also happens to be the broadest possible read on a user's session.

The data-selling layer sits behind that scope. Some extensions ship with a third-party SDK from a data broker. BiScience is the most-cited example here, and Wladimir Palant's teardown of how BiScience collects browsing history is still the cleanest technical walkthrough I've seen. Other extensions operate a backend the developer controls and sell the data themselves. Either way, the legal cover is a privacy policy line item the user "consented to" by clicking install.
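The permission grant itself is visible in any extension's manifest file. A minimal sketch of flagging the broad host access an ad blocker typically requests; the manifest structure follows Chrome's documented format, but the example extension name and the flagging heuristic are mine:

```python
import json

# Host permission patterns broad enough to read every page the user visits.
# "<all_urls>" and bare wildcard schemes are the ones worth flagging.
BROAD_PATTERNS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def broad_host_permissions(manifest: dict) -> list[str]:
    """Return any requested permissions granting read access to all sites.

    Manifest V3 moved these into "host_permissions"; V2 mixed them into
    "permissions". Check both so older extensions aren't missed.
    """
    requested = manifest.get("host_permissions", []) + manifest.get("permissions", [])
    return [p for p in requested if p in BROAD_PATTERNS]

# Hypothetical manifest resembling what a content blocker ships with.
manifest = json.loads("""
{
  "manifest_version": 3,
  "name": "Example Blocker",
  "permissions": ["storage", "declarativeNetRequest"],
  "host_permissions": ["<all_urls>"]
}
""")

print(broad_host_permissions(manifest))  # flags the all-sites grant
```

This is the whole point about the scope being dual-use: the same `<all_urls>` line that lets the extension strip ads is the line that lets an embedded SDK read the session.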

What makes it legal, more or less, is that the disclosure exists. What makes it lousy in practice is that 71% of Chrome Web Store extensions never bother publishing a privacy policy at all, and the policies that do exist are long documents nobody opens. The LayerX team put it more cleanly than I can: the gap is not between what is legal and what happens. The gap is between what is disclosed and what is read.

It's also worth saying out loud: Chrome's own Limited Use policy explicitly bans transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers. Eighty-two extensions are doing it anyway. Enforcement is the missing variable, not policy.

Why this matters on the buy side

This is where it stops being a cybersecurity story and becomes a media-buying story.

The data being sold isn't anonymous in any practical sense. BiScience's clickstream feeds are tied to persistent device identifiers; QVI's matching is done against email databases. So when a streaming research firm or a marketing agency buys this data and pipes it into a custom audience or a measurement panel, they're buying something that walks and talks like first-party traffic. There's no flag that says "this user thought they were blocking ads."
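The email matching QVI-style networks rely on is usually a hashed-email join, a standard identity-resolution pattern. A sketch of how it works; the normalization steps, field names, and sample records here are my assumptions, not anything LayerX published:

```python
import hashlib

def email_key(raw: str) -> str:
    """Normalize then hash an email the way identity graphs commonly do:
    strip whitespace, lowercase, SHA-256. Both sides hash the same way,
    so the join succeeds without either side exchanging plaintext emails."""
    normalized = raw.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

# Hypothetical extension-side records and a broker's identity database.
extension_users = {email_key("Jane.Doe@example.com"): {"watched": ["show-a"]}}
identity_db = {email_key("jane.doe@example.com "): {"demo": "25-34"}}

# The intersection of hashed keys is the "match" that turns anonymous
# viewing history into a demographically labeled profile.
matched = extension_users.keys() & identity_db.keys()
print(len(matched))  # 1 -> the same person, joined without plaintext
```

This is why "anonymized" is doing so little work in these policies: a hash is a stable join key, and a stable join key is an identity.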

A few practical implications:

  1. Lookalikes get poisoned in a quiet way. If your seed audience pulled even a few percent of its signal from one of these brokers, you're modeling against a population that explicitly opted out of advertising surveillance. That doesn't mean the seed is unusable. It means your incrementality test is reading a different distribution than your conversion model thinks it is.
  2. Attribution panels are likely contaminated. A measurement vendor selling "panel of X million users" should be able to tell you how those users were recruited. From what I've seen, the answer is usually a paragraph of vague language about "publisher partnerships" and "consented data partners." That's where the laundering shows up.
  3. Brand-safety models trained on this data get weird. A model trained on browsing histories sourced through ad-blocker users is going to over-index on websites that ad-blocker users actually visit. Which, by definition, are websites whose ad inventory the same data is then going to help value.

I don't think any of this is catastrophic on its own. But it stacks. A few percent of distortion in lookalikes, plus a few percent in panels, plus a few percent in brand-safety scoring is probably the difference between a campaign that hits ROAS and one that misses by a hair you can't explain.
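The stacking claim is just compounding arithmetic. A sketch with made-up distortion rates; the 3% figures are illustrative, not measured:

```python
# Three independent layers each leak a few percent of signal fidelity.
# These rates are placeholders for the sake of the arithmetic.
distortions = {
    "lookalike_seed": 0.03,
    "attribution_panel": 0.03,
    "brand_safety_scoring": 0.03,
}

# If each layer independently preserves (1 - d) of the signal,
# the layers multiply rather than add.
signal_retained = 1.0
for d in distortions.values():
    signal_retained *= 1 - d

print(f"{1 - signal_retained:.1%}")  # ~8.7% combined, from 3% per layer
```

Three small leaks compound to nearly nine percent, which is exactly the "miss by a hair you can't explain" territory.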

The 30-minute audit worth running this week

It is not a heroic engineering lift. It is a checklist:

  1. Pull your DSP's audience source list and your DMP's data partner list. Look for any vendor whose recruitment language includes "browser extension," "browser panel," or "publisher partnerships" without specifics. Ask, in writing, whether their panel includes data collected through extensions categorized as ad blockers, VPNs, or content blockers. Save the response.
  2. Pull your enterprise extension allowlist (your IT team has one). Cross-reference against the LayerX 2026 report, which publishes the full list of 82 by name. If any are allowed, route them to your security team for a permission review. This is also the cleanest internal pitch for getting an extension governance policy in place.
  3. If you run lookalike modeling, isolate any seed audience sourced from a third-party data partner versus one built from your own first-party event stream. Run a holdout test with first-party-only seeds for a week. The delta won't be dramatic, but it'll tell you how much of your modeling is leaning on broker data you don't fully trust.
  4. If you run a measurement panel, ask the vendor for the SOC 2 disclosure on data sources. Reputable panels will hand it over within a day. The ones that won't are the ones to worry about.
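Step 2 of the checklist is a set intersection once you have both lists in hand. A sketch with placeholder data; the extension IDs below are invented, and the real cross-reference should key on Chrome Web Store extension IDs rather than display names, since names aren't unique:

```python
# Placeholder data: your IT allowlist and the flagged list from the report.
enterprise_allowlist = {
    "ext_id_adblock_a": "Stands AdBlocker",
    "ext_id_pdf_tool": "Some PDF Tool",
    "ext_id_popup_b": "Poper Blocker",
}
flagged_by_report = {"ext_id_adblock_a", "ext_id_popup_b", "ext_id_urban_c"}

# Anything in both sets goes to the security team for a permission review.
needs_review = sorted(set(enterprise_allowlist) & flagged_by_report)
for ext_id in needs_review:
    print(ext_id, "->", enterprise_allowlist[ext_id])
```

Ten lines, and it's also the skeleton of the recurring check an extension governance policy would formalize.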

We've covered the broader extension surveillance issue from the LinkedIn angle before. The platform's hidden scan of 6,278 extensions including Apollo and Lusha ran on the same logic: nobody was reading the permission grants. That story was about extensions scraping platforms. This one is the inverse, extensions scraping users and reselling the result. Same plumbing, different direction.

What probably happens next

Google's enforcement options here are limited by their own incentives. The Chrome Web Store generates almost no direct revenue, and removing 82 extensions with 6.5M combined users would produce loud headlines without changing the rules; the policy already prohibits the behavior. The teams that should care more are the data buyers, who face direct exposure to enforcement from the FTC, state AGs, or EU regulators applying GDPR consent rules. That exposure doesn't depend on Google doing anything.

So my best guess is the next move comes from the buy side. A holding company audits a panel vendor, finds extension-sourced data, asks for a discount, and the discount becomes the new market price for that segment. That's how ad-tech taxes usually get re-priced: not with a press release, but with a renegotiated contract.

Until that happens, just assume part of your custom-audience strategy is leaning on data nobody at the source thought they were giving you. Worth knowing, even if the fix takes a few quarters.

The strangest part of reading the LayerX list isn't the scale. It's how mundane the names are. These aren't sketchy crypto wallets or random PDF converters. They're tools with millions of users who installed them specifically to opt out of being tracked, and the trade was that the tracking just moved one layer down the stack. I don't have a clean takeaway for that. I just keep thinking about it.

Notice Me Senpai Editorial