AI-powered toys promise personalized learning and engaging conversations, but the recent Bondu AI toy data breach shows how quickly that convenience can turn into a serious privacy incident. Because of a simple yet critical access-control error in Bondu’s web portal, anyone with a Google account could access tens of thousands of children’s conversations with the toy, along with related personal details.
How the Bondu AI Toy Exposed Children’s Conversations
The issue was discovered by security researchers Joseph Thacker and Joel Margolis after a neighbor asked them to review the Bondu toy her children were using. Within minutes, the researchers identified an open Bondu web console where Google sign‑in was accepted as sufficient authentication, but authorization and tenant isolation were not correctly enforced.
This is a classic case of broken access control: once logged in with any Google account, a user could see not only their own records but also a complete list of all conversations from other Bondu customers. In security terms, the system failed to separate users and families, effectively turning an internal support portal into a public data repository.
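To make the failure concrete, here is a minimal sketch in Python of the broken pattern versus a properly tenant-scoped one. The code is hypothetical, not Bondu’s implementation; the point is that the missing authorization step is often a single absent filter.

```python
# Hypothetical sketch (not Bondu's actual code) of broken access control:
# authentication proves who the caller is, but without a tenant filter
# every authenticated caller sees every family's data.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    family_id: str  # the "tenant" a family's records belong to

# Toy in-memory stand-in for the portal's conversation store.
CONVERSATIONS = [
    {"family_id": "fam-1", "transcript": "favorite snack is ..."},
    {"family_id": "fam-2", "transcript": "birthday party on ..."},
]

def list_conversations_broken(caller: User) -> list[dict]:
    # BROKEN: the caller is authenticated, but the result is never
    # scoped to them -- authorization is simply missing.
    return CONVERSATIONS

def list_conversations_fixed(caller: User) -> list[dict]:
    # FIXED: enforce tenant isolation by filtering on the caller's family.
    return [c for c in CONVERSATIONS if c["family_id"] == caller.family_id]

if __name__ == "__main__":
    stranger = User("any-google-account", "fam-999")
    print(len(list_conversations_broken(stranger)))  # 2: everyone's chats leak
    print(len(list_conversations_fixed(stranger)))   # 0: nothing to see
```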
What Personal Data About Children Was Exposed
Through this vulnerable portal, an attacker could access far more than harmless chatbot banter. The exposed information included:
- children’s names and dates of birth;
- names of family members;
- “developmental goals” defined by parents in the app;
- full text transcripts of every conversation with the Bondu AI toy.
According to the company, the breach involved more than 50,000 chat sessions, representing essentially the entire interaction history, except for chats manually deleted by parents. As the researchers noted, these transcripts often revealed what children call the toy, their favorite games and snacks, family events, and other intimate details of home life.
Security and Safety Risks Stemming from Children’s Chat Data
Children tend to overshare with devices they perceive as friendly. As a result, chat logs from AI toys and smart gadgets can become highly detailed profiles of a child and their family: daily routines, hobbies, emotional state, relationships at home, and sometimes even addresses or landmarks near school and home.
From a cybersecurity perspective, such data is extremely valuable for social engineering and targeted attacks against children. A malicious actor armed with these transcripts could convincingly impersonate a “friend,” reference personal details to gain trust, or use sensitive information for coercion and manipulation. Past incidents, such as the VTech breach that exposed millions of children’s accounts in 2015, demonstrate how dangerous large-scale leaks of children’s data can be.
Bondu’s Response and the Limits of “No Evidence of Misuse”
After being alerted, Bondu quickly disabled access to the console and relaunched the portal the next day with corrected authentication and authorization controls. The company’s CEO, Fatin Anam Rafid, stated that the fix took only a few hours and was followed by an additional security audit.
Bondu claims that it found no evidence that attackers had exploited the vulnerability. However, the absence of clear indicators does not guarantee that no unauthorized access occurred. Many consumer portals have limited or short‑lived logging, which means successful intrusions may leave little to no trace. For systems processing children’s data, robust logging, long retention, and continuous monitoring should be considered mandatory controls.
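As a rough illustration, an append-only audit trail can be as simple as one structured log line per data access; the file name and fields below are assumptions, not a description of Bondu’s systems.

```python
# Illustrative append-only audit trail: if every read of a child's record
# leaves a durable trace, "no evidence of misuse" becomes a checkable claim
# rather than an absence of logs.
import json
import time

AUDIT_LOG = "access_audit.jsonl"  # production: write-once storage, long retention

def log_access(actor_id: str, resource: str, action: str) -> None:
    entry = {
        "ts": time.time(),     # when the access happened
        "actor": actor_id,     # which authenticated account did it
        "resource": resource,  # what record was touched
        "action": action,      # read / export / delete ...
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def read_conversation(actor_id: str, conversation_id: str) -> dict:
    # Log before serving the data, so even failed or partial reads are visible.
    log_access(actor_id, f"conversation/{conversation_id}", "read")
    return {"id": conversation_id, "transcript": "..."}  # placeholder fetch
```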
Systemic Privacy Risks of AI Toys and Smart Gadgets
The Bondu case highlights a broader issue: most AI toys and connected devices for children retain detailed interaction histories. These logs are used to improve language models and personalize interactions, but they also create large, sensitive datasets that must be protected throughout their entire lifecycle.
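One concrete lifecycle control is a default retention window after which transcripts expire unless a parent deliberately keeps them. The sketch below assumes a hypothetical 90-day default; Bondu has not published such a policy.

```python
# Sketch of a lifecycle control for chat transcripts. The 90-day window is
# an assumed default, not a documented Bondu policy; the point is that
# stored histories should expire unless a parent explicitly keeps them.
import time

RETENTION_SECONDS = 90 * 24 * 3600  # assumed default retention window

def purge_expired(transcripts: list[dict], now: float | None = None) -> list[dict]:
    now = time.time() if now is None else now
    return [
        t for t in transcripts
        if t.get("parent_kept") or now - t["created_at"] < RETENTION_SECONDS
    ]
```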
Third‑Party AI Models and the Data Supply Chain
Bondu relies on external AI platforms, including Google Gemini and, according to company statements, future models such as GPT‑5 from OpenAI, to generate responses. Even when providers operate in “enterprise” modes where prompts are not used for training, children’s data still flows through multiple vendors.
This data supply chain raises regulatory and compliance questions, especially under frameworks like the EU’s GDPR and the U.S. Children’s Online Privacy Protection Act (COPPA), which impose stricter rules on processing children’s data. Each additional processor must be vetted, contracts must define data use and retention, and data minimization should be enforced to reduce exposure.
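In practice, data minimization can start with redacting direct identifiers before a prompt ever leaves the toy vendor’s boundary. The sketch below is illustrative only; real PII detection requires far more than a name match and a date regex.

```python
# Sketch of data minimization before a prompt is sent to a third-party
# model. The fields and patterns here are illustrative assumptions.
import re

def minimize_prompt(child_name: str, message: str) -> str:
    # Replace the child's real name with a neutral pseudonym.
    redacted = re.sub(re.escape(child_name), "the child", message,
                      flags=re.IGNORECASE)
    # Strip anything shaped like a date of birth (YYYY-MM-DD).
    return re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[date removed]", redacted)

print(minimize_prompt("Mia", "Mia was born on 2019-04-02 and loves trains."))
# -> "the child was born on [date removed] and loves trains."
```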
AI-Assisted Development and Insecure Web Portals
The researchers believe the Bondu portal was built with heavy use of AI coding assistants and so‑called “vibe coding” — relying on generated code snippets without thorough security review or testing. When this approach is applied to authentication, session management, and access control, critical vulnerabilities can be introduced silently.
As generative AI accelerates software delivery, organizations must integrate secure development practices: threat modeling, code review focused on access control, automated security testing, and periodic penetration tests. Broken access control consistently appears at the top of the OWASP Top 10 list of web application risks, and consumer IoT and toy platforms are not exempt.
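Such testing does not have to be elaborate. A cheap regression test, written here against the hypothetical tenant-scoped handler sketched earlier, can catch exactly the class of bug that exposed Bondu’s portal:

```python
# Regression test for tenant isolation: a caller from one family must
# never receive another family's records. Runs under pytest or directly.
def list_conversations(caller_family_id: str, store: list[dict]) -> list[dict]:
    return [c for c in store if c["family_id"] == caller_family_id]

def test_cross_tenant_access_is_denied():
    store = [
        {"family_id": "fam-1", "transcript": "hello"},
        {"family_id": "fam-2", "transcript": "secret"},
    ]
    results = list_conversations("fam-1", store)
    assert all(c["family_id"] == "fam-1" for c in results)
    assert "secret" not in [c["transcript"] for c in results]

if __name__ == "__main__":
    test_cross_tenant_access_is_denied()
    print("tenant isolation test passed")
```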
The Bondu incident is a reminder that every “smart” toy should be treated as a potential data leak source. Before introducing AI toys into the home, parents should review what data is collected, how long chats are stored, whether encryption is used, and whether the vendor offers clear privacy controls and deletion options. Wherever possible, limit the personal information children share with devices, regularly review account settings, and purge older chat histories.

For manufacturers, privacy‑by‑design, least‑privilege access, and independent security audits are no longer optional extras but essential prerequisites for earning families’ trust in the age of AI toys.