Security researchers catch "privacy" browser extensions siphoning AI chats and selling them via a data broker

Privacy-Focused Browser Extensions Caught Harvesting and Monetizing AI Conversations

Security researchers have uncovered a brazen scheme involving browser extensions marketed as privacy protectors, which instead secretly capture users’ AI chats and funnel them to data brokers for sale. These extensions, often bearing names that evoke trust and anonymity, betray their promises by exfiltrating sensitive conversation data from platforms like ChatGPT, Claude, and Gemini. The discovery highlights the growing risks in the browser extension ecosystem, where malicious actors exploit users’ desire for enhanced privacy to conduct large-scale data harvesting.

The investigation, led by researchers at Grip Security, revealed six Chrome Web Store extensions responsible for the activity: “PrivacyGPT,” “Web Chat Privacy,” “AI Chat Privacy,” “Privacy AI,” “Private AI Chat,” and “AI Privacy Chat.” Collectively, these extensions had more than 100,000 installations among users seeking safeguards for their AI interactions. Rather than shielding data, the extensions injected scripts into AI web interfaces to monitor and record every keystroke, prompt, and response in real time.

Technical Mechanics of the Breach

The malicious behavior hinges on content scripts, a core feature of browser extensions that allows them to interact with web pages. Upon installation, these extensions request broad permissions, including access to tabs, storage, and activeTab for seemingly legitimate purposes like enhancing AI usability or blocking trackers. In practice, they deploy JavaScript payloads that hook into the Document Object Model (DOM) of AI chat interfaces.
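As a hypothetical sketch (the extension name, script filename, and match patterns are invented for illustration), a Manifest V3 file requesting the kind of broad access described might look like:

```json
{
  "manifest_version": 3,
  "name": "Example Privacy Helper",
  "version": "1.0",
  "permissions": ["tabs", "storage", "activeTab"],
  "host_permissions": [
    "https://chat.openai.com/*",
    "https://claude.ai/*",
    "https://gemini.google.com/*"
  ],
  "content_scripts": [
    {
      "matches": [
        "https://chat.openai.com/*",
        "https://claude.ai/*",
        "https://gemini.google.com/*"
      ],
      "js": ["content.js"],
      "run_at": "document_idle"
    }
  ]
}
```

Each entry reads as plausible for a privacy tool, which is exactly why permission lists alone are a weak signal; it is the combination with the injected `content.js` payload that makes the extension malicious.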

For instance, on ChatGPT’s site (chat.openai.com), the extensions listen for mutations in chat elements using MutationObserver APIs. They capture user inputs via keydown and input events, scrape response text from dynamically loaded divs, and even extract metadata like session timestamps and user agents. Similar techniques target Anthropic’s Claude (claude.ai) and Google’s Gemini (gemini.google.com), adapting to each site’s unique structure through pattern matching and XPath selectors.
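To make the scraping step concrete, here is a minimal sketch of the logic in Python using the standard library's HTML parser. The real extensions do this in injected JavaScript with MutationObserver; the class name below is hypothetical, not recovered from the actual sites.

```python
from html.parser import HTMLParser

class ChatScraper(HTMLParser):
    """Toy illustration of the scraping logic described above: collect
    text from <div> elements whose class marks a chat message.
    The marker class is a made-up stand-in for the site-specific
    selectors the extensions match against."""

    TARGET_CLASS = "chat-message"  # hypothetical marker class

    def __init__(self):
        super().__init__()
        self.depth = 0        # nesting depth inside a target div
        self.messages = []    # captured message texts
        self._buf = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        # Enter a target div, or track nested divs inside one
        if tag == "div" and (self.depth or self.TARGET_CLASS in classes):
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1
            if self.depth == 0:
                self.messages.append("".join(self._buf).strip())
                self._buf = []

    def handle_data(self, data):
        if self.depth:
            self._buf.append(data)

html = '<div class="chat-message">How do I treat insomnia?</div>'
scraper = ChatScraper()
scraper.feed(html)
```

After `feed()`, `scraper.messages` holds the extracted message text, mirroring how the injected scripts harvest each rendered response as it appears.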

Captured data is bundled into JSON payloads containing full conversation threads, then transmitted via HTTPS POST requests to intermediary servers. Analysis of network traffic showed endpoints like api.chatprivacy.ai and similar domains receiving gigabytes of data daily. From there, the conversations are aggregated and forwarded to a data broker identified as “Adspend.io,” operated by a company in Cyprus. Grip Security’s researchers reverse-engineered the broker’s API, confirming that AI chats are commoditized—priced at approximately $22 per 1,000 conversations—and resold to third parties for marketing, training datasets, or behavioral profiling.
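The bundling step can be sketched as follows. The field names and thread shape are assumptions for illustration, not recovered from the actual payloads, and the revenue figure is simple arithmetic on the numbers reported in this article.

```python
import json
import time

def bundle_conversation(thread, session_id, user_agent):
    """Bundle a captured thread into the kind of JSON payload described;
    field names here are illustrative, not the extensions' actual schema."""
    return json.dumps({
        "session": session_id,
        "captured_at": int(time.time()),
        "user_agent": user_agent,
        "thread": thread,  # list of {"role": ..., "text": ...} dicts
    })

payload = bundle_conversation(
    [{"role": "user", "text": "Draft our merger announcement."}],
    session_id="abc123",
    user_agent="Mozilla/5.0",
)

# At the reported broker rate of roughly $22 per 1,000 conversations,
# the 5 million conversations cited in this article would gross about:
estimated_revenue = 5_000_000 / 1_000 * 22  # 110000.0 dollars
```

The back-of-the-envelope figure underscores the economics: even at a few cents per conversation, harvesting at scale is lucrative.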

Privacy policies on the extensions’ store listings and websites claim GDPR compliance and user consent, but they are boilerplate and misleading. No explicit opt-in for data sharing exists, and uninstalling an extension does not retroactively purge harvested data. The extensions evade detection by mimicking benign traffic patterns, using domain generation algorithms to rotate endpoints, and obfuscating their code to hinder static analysis.
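The endpoint rotation mentioned above can be illustrated with a toy domain generation algorithm. This shows the general technique, not the actual algorithm used by these extensions: both sides derive the same rotating domain list from a shared seed and the date, so blocking any single endpoint is ineffective.

```python
import hashlib

def generate_domains(seed: str, day: int, count: int = 5) -> list[str]:
    """Toy DGA: derive a deterministic, date-dependent list of callback
    domains from a shared seed. Purely illustrative; the real algorithm
    was not published."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

today = generate_domains("campaign-seed", day=20241010)
```

Because the output is deterministic, the operator registers one of the day's domains in advance, while defenders face a moving target unless they recover the seed.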

Scale and Impact

Grip Security’s telemetry indicated over 5 million AI conversations siphoned in the past six months alone, with peaks correlating to viral AI trends. The harvested data includes highly sensitive content: personal health queries, financial advice requests, proprietary business strategies, and even legal consultations. One analyzed sample set revealed prompts about mental health struggles, corporate merger plans, and custom code generation—material ripe for exploitation.

The irony is stark. Users install these extensions explicitly to circumvent OpenAI’s data retention policies or mitigate tracking on AI platforms. Instead, they amplify exposure, channeling private dialogues into opaque marketplaces. Data brokers like Adspend.io anonymize records minimally (stripping obvious PII but retaining contextual fingerprints), enabling inference attacks that could deanonymize users through cross-referencing with public datasets.

Researcher Response and Mitigation Steps

Upon validation, Grip Security notified Google, which promptly suspended the extensions from the Chrome Web Store on October 10, 2024. The researchers also coordinated with domain registrars to disrupt the command-and-control infrastructure. However, the backend servers remain online, and their operators could pivot to new extensions or direct web skimmers.

For users, immediate actions include:

  1. Auditing installed extensions via chrome://extensions/ and removing suspects.
  2. Clearing browser data and reviewing connected AI accounts for unusual activity.
  3. Enabling Enhanced Safe Browsing in Chrome settings.
  4. Opting for official AI apps or self-hosted alternatives to reduce web-based exposure.
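The auditing step can be partially automated. The sketch below scans a directory of unpacked extension folders (as found under Chrome's profile `Extensions` directory, whose exact path varies by OS) and flags any extension whose manifest requests access to AI chat sites. The host list and directory layout are simplified assumptions.

```python
import json
import tempfile
from pathlib import Path

# Host patterns that grant access to AI chat sites; "<all_urls>" covers everything.
SENSITIVE_HOSTS = ("chat.openai.com", "claude.ai", "gemini.google.com", "<all_urls>")

def flag_extensions(extensions_dir: str) -> list[str]:
    """Scan unpacked extension folders (each containing a manifest.json)
    and return the names of extensions requesting access to AI chat hosts."""
    flagged = []
    for manifest_path in Path(extensions_dir).glob("*/manifest.json"):
        manifest = json.loads(manifest_path.read_text())
        hosts = list(manifest.get("host_permissions", []))
        for script in manifest.get("content_scripts", []):
            hosts += script.get("matches", [])
        if any(s in h for h in hosts for s in SENSITIVE_HOSTS):
            flagged.append(manifest.get("name", manifest_path.parent.name))
    return flagged

# Demo against a temporary directory standing in for the profile folder:
demo = Path(tempfile.mkdtemp())
(demo / "aaaa").mkdir()
(demo / "aaaa" / "manifest.json").write_text(json.dumps({
    "name": "Suspicious Helper",
    "host_permissions": ["https://chat.openai.com/*"],
}))
flagged = flag_extensions(str(demo))
```

A flag here is only a starting point for manual review: plenty of legitimate extensions request the same hosts, which is precisely why this class of abuse is hard to catch.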

Organizations should enforce extension whitelisting via enterprise policies and monitor for anomalous network flows using tools like endpoint detection and response (EDR) systems.
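Chrome supports such whitelisting through its enterprise policies. As a sketch, a managed-policy file (on Linux, typically placed under /etc/opt/chrome/policies/managed/; on Windows, set via Group Policy) can block all extensions except explicitly approved IDs; the 32-character ID below is a placeholder:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

With this in place, users can install only the allowlisted extensions, closing the door on lookalike "privacy" tools regardless of store review outcomes.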

Broader Implications for AI Privacy

This incident underscores systemic vulnerabilities in browser extension marketplaces. Despite review processes, malicious extensions slip through due to sheer volume (more than 200,000 live on the Chrome Web Store) and reliance on self-reported permissions. AI’s conversational nature amplifies the risk: unlike traditional browsing, chats encode intent, cognition, and secrets in natural language.

Regulators may respond with stricter mandates, akin to Apple’s App Tracking Transparency or the EU’s AI Act. Meanwhile, AI providers could bolster client-side protections, such as Content Security Policy hardening and integrity checks that detect injected scripts.

The episode serves as a cautionary tale: privacy cannot be outsourced to unverified third parties. Vigilance in extension selection—scrutinizing reviews, permissions, and publisher history—remains paramount.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since integrating AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI runs entirely offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.