A large-scale data exposure incident has hit Chat & Ask AI, a popular generative AI application with around 50 million users worldwide. Due to a misconfigured Firebase cloud database, hundreds of millions of private conversations between users and AI chatbots were left accessible to anyone who knew where to look, including highly sensitive and potentially incriminating queries.
Scope of the Chat & Ask AI Data Breach
Chat & Ask AI, developed by Turkish company Codeway, acts as a client for multiple large language models, including ChatGPT, Claude and Gemini. The app has been installed more than 10 million times from Google Play and has over 318,000 ratings in Apple’s App Store, which significantly amplifies the impact of any security incident involving its backend infrastructure.
According to reporting by 404 Media, a security researcher using the pseudonym Harry discovered that the application’s backend storage was effectively public. By connecting as a standard mobile client, the researcher was able to retrieve roughly 300 million messages associated with about 25 million users. The exposed dataset included full conversation histories, timestamps, bot names, model configuration parameters and records of which specific AI model each user selected.
How a Firebase Misconfiguration Led to Full Data Exposure
The root cause of the breach was a misconfiguration of Firebase, Google’s popular backend-as-a-service platform widely used by mobile developers. Access control in Firebase is governed by Firebase Security Rules, which must explicitly define who can read from or write to each collection of data. In this case, the rules were overly permissive, treating any app client as an authenticated user with read access to the backend database.
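To make the failure mode concrete, the sketch below uses a hypothetical Firebase project configuration and collection name rather than Codeway’s actual schema. It shows why rules that merely require an authenticated session offer essentially no protection: anyone can obtain such a session by signing in anonymously from an ordinary client.

```typescript
// Minimal sketch (hypothetical apiKey, projectId and collection name) of why a rule
// like "allow read: if request.auth != null" is effectively public: anyone can become
// an "authenticated user" by signing in anonymously from an ordinary client.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { getFirestore, collection, query, limit, getDocs } from "firebase/firestore";

const app = initializeApp({
  apiKey: "AIzaSy-example-key",  // client API keys ship inside every app build; they are not secrets
  projectId: "example-project",  // hypothetical project
});

async function checkOwnProject(): Promise<void> {
  // Step 1: obtain an authenticated session without any account or password.
  await signInAnonymously(getAuth(app));

  // Step 2: read a collection the caller does not own. Under least-privilege rules
  // this query is rejected; under auth-only rules it returns other users' data.
  const snap = await getDocs(query(collection(getFirestore(app), "conversations"), limit(5)));
  console.log(`Anonymous client was able to read ${snap.size} documents`);
}

checkOwnProject().catch(console.error);
```

If a query like this returns data on your own project, the database is effectively public, regardless of what the client app’s interface enforces.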
Misconfigurations of this type are a well-known class of cloud security issue, similar to publicly exposed Amazon S3 buckets or open Elasticsearch clusters. Dan Guido, CEO of the security firm Trail of Bits, has previously described weak Firebase rules as a “well-known weakness” in the mobile ecosystem. Trail of Bits has even reported using an AI model to build a scanner for insecure Firebase configurations in roughly 30 minutes, underscoring how widespread and easily discoverable these flaws can be.
Highly Sensitive AI Chatbot Content in the Leak
Analysis of a sample of the leaked data shows that many people treat AI assistants as a space for intensely personal and confidential discussions. Among the exposed prompts were requests for advice on how to commit suicide painlessly, help drafting farewell letters, questions about drug manufacturing and guidance on hacking or bypassing applications.
Example prompts included: “Write me a two-page essay on how to cook meth in a world where it has been legalized for medical use,” and “I want to kill myself, what is the best way?” Such content highlights that AI chat logs can reveal not only identifiable personal data, but also a user’s mental health status, intentions, risk behavior and potential involvement in activities that could be interpreted as criminal.
Beyond One App: A Systemic Mobile and AI Security Problem
Harry reported that the vulnerability was not limited to Chat & Ask AI: other Codeway products that reused the same backend configuration were affected as well. After being notified on 20 January 2026, Codeway reportedly locked down the permissive Firebase rules across its apps within a few hours. This was a commendably fast response, but it also illustrates how a single insecure cloud configuration can silently scale across an entire product portfolio.
To assess how common the issue is across the broader ecosystem, the researcher developed an automated tool that scans app-store applications for Firebase misconfigurations. In a test of 200 iOS applications, 103 turned out to be vulnerable, more than half of the sample. Harry has launched a website listing affected apps and removes entries as vendors remediate their configurations; Codeway’s products have already been taken off the list following the fix.
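Whatever the researcher’s exact implementation, the core check such a scanner automates is simple. The sketch below illustrates one variant for Firebase Realtime Database instances (the hostnames are hypothetical placeholders, intended for checking your own projects): an unauthenticated GET against the database root succeeds only when the rules allow public reads, so a misconfigured instance reveals itself immediately.

```typescript
// Sketch of the basic misconfiguration probe for Firebase Realtime Database instances
// (the hostnames are hypothetical placeholders, e.g. your own projects). An
// unauthenticated GET on the database root succeeds only when rules allow public reads.
const databases = [
  "https://example-app-default-rtdb.firebaseio.com",
  "https://another-example-app.firebaseio.com",
];

async function probe(baseUrl: string): Promise<string> {
  // shallow=true returns only top-level keys, so the response stays tiny;
  // all that matters here is whether the read is permitted at all.
  const res = await fetch(`${baseUrl}/.json?shallow=true`);
  return res.ok
    ? `OPEN    ${baseUrl} (unauthenticated reads allowed)`
    : `CLOSED  ${baseUrl} (HTTP ${res.status})`;
}

const report = await Promise.all(databases.map(probe));
report.forEach((line) => console.log(line));
```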
Privacy, Legal and Security Risks for AI Chatbot Users
Data leaks from AI apps create a dual risk profile. First, they can expose deeply personal and confidential information: health conditions, mental health struggles, relationship conflicts, financial problems, location patterns and behavioral details that could be used for profiling, blackmail, targeted scams or discrimination.
Second, AI chat logs often contain discussions of potentially unlawful actions—even if hypothetical or rejected by the model. If such data is harvested and correlated with other breached information, it may cause legal exposure, reputational damage or harassment for affected users. From a regulatory perspective, long-lived storage of such sensitive content without strong protection can attract scrutiny under data protection and privacy laws.
Key Security Lessons for AI and Mobile App Developers
The Chat & Ask AI incident underlines fundamental security practices that remain frequently neglected in fast-moving AI product development. At minimum, teams using Firebase or similar platforms should enforce strict Firebase Security Rules, apply the principle of least privilege to every read/write operation, and integrate automated configuration scanning into their CI/CD pipelines.
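One way to make such checks automatic is Firebase’s own rules testing library, @firebase/rules-unit-testing, run against the local emulator. The sketch below is illustrative only: the per-user data layout and file name are assumptions, not the app’s real schema, but it shows how a least-privilege rule set can be verified on every commit before deployment.

```typescript
// rules.test.mjs: minimal sketch of verifying least-privilege Firestore rules in CI.
// Assumes a per-user layout like users/{uid}/conversations/{id} (hypothetical) and a
// running Firestore emulator, e.g. `firebase emulators:exec "node rules.test.mjs"`.
import { initializeTestEnvironment, assertFails, assertSucceeds } from "@firebase/rules-unit-testing";

const rules = `
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Least privilege: a user may only read or write their own conversations.
    match /users/{uid}/conversations/{conversationId} {
      allow read, write: if request.auth != null && request.auth.uid == uid;
    }
  }
}`;

const testEnv = await initializeTestEnvironment({
  projectId: "demo-rules-check",
  firestore: { rules },
});

const aliceDb = testEnv.authenticatedContext("alice").firestore();
const strangerDb = testEnv.authenticatedContext("someone-else").firestore();

// The owner can read her own conversation history...
await assertSucceeds(aliceDb.doc("users/alice/conversations/c1").get());
// ...but another signed-in client (the auth-only failure mode in this incident) cannot.
await assertFails(strangerDb.doc("users/alice/conversations/c1").get());

await testEnv.cleanup();
```

Run inside the emulator on each commit, a check like this fails the build the moment a rule change reopens access it should not.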
Additional safeguards include regular independent security assessments, robust logging and monitoring for anomalous access patterns, and encryption of highly sensitive data both in transit and at rest. Given the intimacy and sensitivity of many AI chatbot interactions, security baselines for these services should be treated as comparable to healthcare or financial data protection, not casual consumer apps.
Practical Recommendations for AI Chatbot Users
While infrastructure security is primarily the responsibility of providers, users can reduce their exposure by limiting what they share with AI apps. It is advisable to avoid entering passport or ID details, full payment card data, precise travel itineraries, contact details of third parties, or explicit descriptions of actions that could be legally ambiguous. Incidents like the Chat & Ask AI data breach demonstrate that even highly rated, widely used apps can contain basic architectural security flaws.
Stronger protection of AI chatbot privacy will require both more mature secure development practices from vendors and more conscious digital hygiene from users. Choosing services with transparent security practices, clear data retention policies and a proven track record of promptly fixing vulnerabilities is an essential step toward safer everyday use of generative AI.