Researchers from Miggo Security have demonstrated a novel prompt injection technique against the Google Gemini AI assistant that abuses standard Google Calendar invitations. By embedding carefully crafted instructions into the description field of a meeting invite, attackers were able to trigger data leakage from a user’s calendar without deploying any malware or compromising the endpoint itself.
New Google Gemini vulnerability: prompt injection through calendar invites
The attack exploits how Google Gemini integrates with a user’s calendar. When a user asks the assistant to “show my meetings for today” or performs similar queries, Gemini automatically reads and processes all events in the linked Google Calendar, including meetings received via external invitations.
This legitimate behavior turns calendar content into a powerful input channel. If an attacker can insert hidden instructions into an event description, those instructions may be interpreted by the model as commands rather than mere data, leading to an effective prompt injection attack.
How the Google Calendar prompt injection attack works
Hidden instructions in calendar event descriptions
In Miggo Security’s proof of concept, the attacker sends a standard meeting invitation to the victim. The only malicious component is the event description, which contains natural-language instructions disguised as notes, agenda items, or comments.
When Gemini processes the user’s schedule, it reads this description and, under certain conditions, treats it as instructions to execute. The researchers showed that the injected prompt can be designed to make Gemini:
— collect details of all meetings for a given day, including private and sensitive appointments;
— automatically create a new calendar event and insert the harvested information into its description field;
— return a benign-looking response to the user, without indicating that any hidden instructions were followed.
In practice, this turns the description of a single calendar event into a control channel for the AI assistant and a mechanism for covert data exfiltration through newly created events.
Why Google Gemini’s security controls failed in this scenario
According to Miggo Security, Google uses a separate, isolated model to filter harmful prompts and block direct prompt injection attempts. However, in this case, the protection was bypassed because the injected instructions appeared to be legitimate calendar operations: summarizing the day’s meetings, creating an event, and replying to the user.
Limits of syntax-based prompt injection detection
Traditional filters often rely on detecting syntactic indicators of malicious activity, such as explicit commands to exfiltrate data or disable safeguards. Here, the text simply asked Gemini to “collect and structure meetings for the day” and “create an event” — instructions that are semantically risky but linguistically neutral.
This issue is common across large language model (LLM) systems. The model simultaneously acts as a command executor and a data interpreter. When user-controlled data (like a calendar description) contains embedded instructions, the line between “content” and “command” blurs. OWASP’s “LLM Top 10” explicitly lists prompt injection as a primary risk for AI-enabled applications, precisely because of this ambiguity.
Corporate impact: data leakage through shared calendars
The demonstrated attack is especially concerning for enterprises that rely heavily on shared calendars for project planning, negotiations, and strategy discussions. In many organizations, event descriptions are visible to all invitees and often contain sensitive details: meeting topics, internal project codes, deal terms, or confidential participants.
If Gemini follows hidden instructions to create a new event containing these details, the resulting calendar entry may become visible to additional users or groups. An attacker who can insert a single malicious invitation into a shared calendar may indirectly gain access to confidential corporate information without ever breaching the core infrastructure.
This is not an isolated case. In August 2025, researchers from SafeBreach showed that malicious calendar invites could be used to remotely influence Gemini agents on a victim’s device and steal user data, pointing to a systemic class of risks around AI integrations with productivity tools.
Google’s response and evolving AI security best practices
Google states that it applies a multi-layered security strategy to mitigate prompt injection, including filters and additional checks. One key control is requiring explicit user confirmation before Gemini creates new calendar events, reducing the likelihood of fully automated execution of hidden instructions.
However, Miggo Security argues that the industry needs to move from purely syntax-based detection to context-aware defensive models. Effective AI assistant security should evaluate:
— the source of the data (e.g., external invite, shared document, email);
— the context and scope of requested operations (such as broad access to private meetings or files);
— the potential impact of executing the instruction (including exposure of confidential records or contacts).
This aligns with emerging guidance such as the NIST AI Risk Management Framework, which emphasizes contextual risk assessment and continuous monitoring for AI systems integrated into business workflows.
How organizations can secure AI assistants and LLM agents
Enterprises deploying AI assistants and LLM-based agents should reassess their data-access models and internal security policies. Practical defensive measures include:
— Restricting default permissions for AI agents to create, modify, or share calendar events and other records;
— Separating work and personal data sources (calendars, task lists, document stores) to limit blast radius;
— Implementing detailed logging and auditing of AI-agent actions, including what data is read and what objects (events, files) are created;
— Training employees on prompt injection risks, highlighting that everyday tools like email, calendars, and documents can serve as attack vectors.
The Gemini–Google Calendar case demonstrates that in the age of AI, any external input can become an attack surface, even something as mundane as a meeting invite. Organizations that proactively implement context-aware defenses, tightly scoped permissions for AI agents, and regular testing for prompt injection (similar to traditional penetration testing) will be better positioned to manage these emerging risks. Now is the time to integrate AI-specific threat scenarios into risk management programs and security awareness training, before such attacks move from research labs into mainstream cybercriminal toolkits.