AI Chat App Leaks 300M Messages
The Chat & Ask AI app, which has over 50 million downloads across major app stores, suffered a large-scale data exposure caused by a misconfigured Firebase backend. The Chat Ask AI leak permitted unauthorized reads of full conversation logs, timestamps, custom AI names, and model preferences such as ChatGPT or Claude.
This wrapper app’s failure highlights confidentiality risks in third-party AI interfaces, potentially enabling misuse of sensitive user inputs across personal and professional contexts.
Breach Scope
Researchers accessed data from more than 25 million users, totaling roughly 300 million messages. A sample of 60,000 users and one million messages included queries about illegal drug production, app hacking, and suicide methods. Complete chat histories were stored without encryption.
Firebase Misconfiguration
The app's Firebase security rules accepted any authenticated session, including self-registered anonymous sign-ins, effectively bypassing access controls. Misconfigurations like this leave Realtime Database contents publicly readable, even though the platform provides adequate security features when rules are configured correctly.
No CVE assigned to this configuration error.
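The app's actual rules have not been published, but a common failure mode in this class of incident is a ruleset that grants access to any authenticated user, a check that anonymous sign-in renders meaningless. A sketch of the weak pattern alongside a per-user alternative (the `conversations` path name is illustrative):

```json
{
  "rules": {
    "conversations": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

By contrast, a top-level `".read": "auth != null"` rule lets anyone who completes an anonymous sign-in read every user's data, which matches the self-designated-authentication weakness described above.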
Data Contents
The exposed records captured user-AI interactions in full detail:
| Data Type | Description |
|---|---|
| Conversation History | Full chat logs |
| Timestamps | Interaction times |
| AI Configurations | Model selections, custom names |
Implications
The incident underscores wrapper apps as weak points: they store unencrypted sensitive data even while relying on secure upstream models. Unlike credentials, which can be revoked, personal disclosures in AI chats remain sensitive indefinitely once leaked.
Users face elevated privacy threats from exposed queries revealing vulnerabilities or intents. Firebase’s ease contributes to repeated misconfigurations across apps.
The Chat Ask AI leak compromises conversation confidentiality for a vast user base. No patch details have been published; the emphasis instead falls on developers hardening their Firebase rules. Ongoing scans reveal similar issues in more than 100 iOS apps, amplifying sector-wide exposure risks.
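Scans for this class of issue typically attempt an unauthenticated read of a database's REST endpoint and classify the response. A minimal sketch, assuming a Realtime Database-style `/.json` endpoint (the helper names and classification thresholds are illustrative, not from the researchers' tooling):

```python
import json
import urllib.error
import urllib.request


def classify_firebase_response(status: int, body: str) -> str:
    """Classify the result of an unauthenticated read of /.json.

    Returns "open" when data came back without auth, "locked" when
    the rules denied the read, and "unknown" otherwise.
    """
    if status == 200:
        try:
            payload = json.loads(body)
        except ValueError:
            return "unknown"
        # A rule-protected or empty database can still return 200 with "null".
        return "locked" if payload is None else "open"
    if status in (401, 403):
        return "locked"
    return "unknown"


def probe(db_url: str) -> str:
    """Attempt one unauthenticated read of the database root (hypothetical URL)."""
    req = urllib.request.Request(db_url.rstrip("/") + "/.json")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify_firebase_response(resp.status, resp.read().decode())
    except urllib.error.HTTPError as err:
        return classify_firebase_response(err.code, "")
```

An "open" result here corresponds to the exposure described in this article: the database answers full reads with no credentials at all.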