AI Chat Secrets Leaked: 8M Users & Browser Extensions Exposed

Phucthinh

The world of AI chatbots is rapidly evolving, offering incredible convenience and functionality. However, a recent investigation by security firm Koi has revealed a disturbing trend: browser extensions, boasting over 8 million users, are secretly harvesting and selling complete AI conversation data for marketing purposes. This breach of privacy raises serious concerns about the security of our interactions with popular AI platforms like ChatGPT, Gemini, and others. This article delves deep into the details of this data harvesting operation, the extensions involved, and what users can do to protect themselves.

The Data Harvesting Operation: How Your AI Conversations Are Being Exploited

Koi’s research uncovered eight browser extensions available on both the Google Chrome Web Store and the Microsoft Edge Add-ons store. Shockingly, seven of these extensions carried “Featured” badges – endorsements signifying that the platforms deemed them to meet quality standards. These extensions typically offer functionalities like VPN routing and ad blocking, promising enhanced online privacy. However, their privacy assurances are demonstrably false.

The core of the issue lies in the extensions’ code, which contains eight unique “executor” scripts designed to intercept data from leading AI chat platforms, including ChatGPT, Claude, Gemini, Copilot, Perplexity, DeepSeek, Grok, and Meta AI. These scripts operate by overriding the browser’s standard network request functions (fetch() and XMLHttpRequest), effectively creating a backdoor for data collection.

How the Scripts Work: Intercepting Your Data

According to Koi CTO Idan Dardikman, “By overriding the [browser APIs], the extension inserts itself into that flow and captures a copy of everything before the page even displays it.” This means that every prompt you send, every response you receive, along with timestamps, session metadata, and the specific AI model used, is intercepted and sent to the extension maker’s servers. The data is then compressed and transmitted, bypassing your browser’s security measures.
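To make the interception pattern concrete, here is a minimal sketch of how an injected script can wrap the page’s fetch() so a copy of every request and response passes through the harvester first. This is an illustration of the general technique Koi describes, not the extensions’ actual code; the URL and field names are hypothetical, and a fake fetch stands in for the browser’s native one so the sketch runs outside a page.

```javascript
// Sketch of fetch() interception -- illustrative, not the extensions' real code.
const harvested = [];

function installInterceptor() {
  const nativeFetch = globalThis.fetch;
  globalThis.fetch = async (input, init) => {
    const response = await nativeFetch(input, init);
    // clone() lets the script read the body without consuming
    // the stream the page itself will read.
    const responseBody = await response.clone().text();
    harvested.push({
      url: String(input),
      requestBody: typeof init?.body === "string" ? init.body : undefined,
      responseBody,
      timestamp: Date.now(),
    });
    // A real harvester would compress `harvested` and POST it to its own
    // server here -- before the page has even displayed the reply.
    return response;
  };
}

// Stand-in for the browser's native fetch so the sketch runs anywhere:
globalThis.fetch = async () => new Response('{"reply":"hi"}');
installInterceptor();
```

Because the wrapper returns the original Response untouched, the page behaves exactly as before, which is why users see no sign of the collection.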

Crucially, this data collection continues even when core functionalities like VPN networking or ad blocking are disabled. The only way to definitively stop the harvesting is to disable or uninstall the extension entirely.

The Extensions Involved: A List of Culprits

The investigation began with the discovery of data harvesting in Urban VPN Proxy, a VPN routing extension. The data collection commenced in early July with the release of version 5.5.0. Following this initial finding, Koi identified seven additional extensions employing the same tactics. Here’s a breakdown of the extensions and their user numbers (as of late 2024):

  • Chrome Web Store
    • Urban VPN Proxy: 6 million users
    • 1ClickVPN Proxy: 600,000 users
    • Urban Browser Guard: 40,000 users
    • Urban Ad Blocker: 10,000 users
  • Edge Add-ons
    • Urban VPN Proxy: 1.32 million users
    • 1ClickVPN Proxy: 36,459 users
    • Urban Browser Guard: 12,624 users
    • Urban Ad Blocker: 6,476 users

What Data is Being Collected? A Comprehensive Overview

The harvested data is incredibly comprehensive, encompassing:

  • Every prompt a user sends to the AI
  • Every response received from the AI
  • Conversation identifiers and timestamps
  • Session metadata
  • The specific AI platform and model used

This data includes potentially sensitive information such as medical questions, financial details, proprietary code, and personal dilemmas – all of which are being sold for “marketing analytics purposes.”

Conflicting Messaging and Hidden Disclosures

The extensions present a deceptive facade, advertising features like “AI protection” while simultaneously harvesting AI conversation data. For example, Urban VPN Proxy claims to “check prompts for personal data” and “display a warning before click or submit your prompt.” However, their privacy policy reveals a different story.

While the extensions display a consent prompt mentioning the processing of “ChatAI communication,” “pages you visit,” and “security signals,” it frames this processing as necessary for providing core functionalities. The explicit disclosure of AI conversation harvesting is buried deep within lengthy and complex privacy policies – such as the 6,000-word policy for Urban VPN Proxy.

This policy states that the extension will “collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable” and “disclose the AI prompts for marketing analytics purposes.”

The Companies Behind the Extensions: Urban Cyber Security, BiScience, and B.I Science

All eight extensions and their associated privacy policies are developed and managed by Urban Cyber Security, a company claiming to have 100 million users across its apps and extensions. Their policies reveal that they share “Web Browsing Data” with affiliated companies, BiScience and B.I Science.

BiScience describes itself as a company that “transforms enormous volumes of digital signals into clear, actionable market intelligence.” This suggests that the harvested AI conversation data is being used to create detailed user profiles and generate targeted marketing insights.

The Role of Google and Microsoft: A Failure of Oversight?

It’s perplexing that both Google and Microsoft allowed these extensions onto their platforms, especially considering that seven of them were awarded “Featured” badges. Neither company has responded to inquiries regarding their vetting process for extensions, their plans to address this issue, or the clarity of their privacy policy requirements.

Attempts to contact the extension developers and Urban Cyber Security have also been unsuccessful. BiScience provides no email contact, and calls to its New York office were redirected to Israel.

The Broader Implications: A Cautionary Tale for the Age of AI

Koi’s discovery serves as a stark reminder of the growing risks associated with online activity. Trusting sensitive information to AI chatbots, which lack the privacy protections of traditional communication channels (like HIPAA assurances or attorney-client privilege), is inherently risky. The ease with which free apps and extensions can access and exploit this data further exacerbates the problem.

The rush to install convenient tools, particularly from unknown developers, without carefully reviewing their privacy policies is a recipe for disaster. This incident highlights the urgent need for greater transparency, stricter oversight, and enhanced user awareness regarding data privacy in the age of AI. As reported by GearTech, this is a developing story and users should remain vigilant.

Protecting Yourself: What You Can Do

Here are some steps you can take to protect your AI conversations:

  • Disable or Uninstall Suspicious Extensions: Immediately disable or uninstall any of the extensions listed above, or any extension you suspect of questionable behavior.
  • Review Extension Permissions: Carefully review the permissions requested by any browser extension before installing it.
  • Read Privacy Policies: Take the time to read the privacy policies of extensions and apps, paying close attention to how your data is collected, used, and shared.
  • Use Privacy-Focused Browsers and Extensions: Consider using browsers and extensions that prioritize privacy and data security.
  • Be Mindful of What You Share: Exercise caution when sharing sensitive information with AI chatbots.
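When reviewing an extension’s permissions, the key red flag is broad host access combined with scripts that run on every page. As a hypothetical illustration (this is not any listed extension’s actual manifest), a Manifest V3 extension with entries like these can read and rewrite traffic on every site you visit, including AI chat platforms:

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["inject.js"],
      "run_at": "document_start"
    }
  ]
}
```

An ad blocker or VPN may legitimately need wide host access, so such permissions are not proof of abuse on their own – but they mean you are trusting the developer with everything you do in the browser.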

The incident underscores the importance of proactive data protection and responsible AI usage. By taking these steps, you can mitigate the risk of having your AI conversations exploited and safeguard your privacy in the digital age.