In June 2025, administrators of the Python Package Index (PyPI) encountered what initially appeared to be a massive malicious campaign but turned out to be an unusual case of defensive security measures gone wrong. The incident involved over 250 new account registrations and more than 1,500 package publications, triggering immediate security protocols and highlighting emerging threats in the AI-driven development landscape.
Initial Security Response to Suspicious Activity
Upon detecting anomalous activity linked to email addresses from the inbox.ru domain, PyPI administrators took immediate protective measures: all new registrations and email additions from the domain were blocked, a response that seemed justified given the potential scale of the threat to the Python ecosystem.
The suspicious packages shared concerning characteristics: while they contained no executable code, they featured entry points mimicking popular libraries. This pattern represents a classic namespace squatting attempt, a tactic frequently employed in software supply chain attacks where malicious actors reserve package names to intercept future installations.
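To make that pattern concrete, the sketch below shows what the build metadata of such a placeholder package might look like. All names here are hypothetical illustrations of the technique, not the actual packages involved in the incident.

```python
# Hypothetical setup.py for a placeholder package of the kind described:
# it ships no importable code, yet declares a console entry point whose
# name shadows a familiar tool. Every name below is illustrative.
from setuptools import setup

setup(
    name="reqests-toolbelt",   # hypothetical look-alike of a popular library
    version="0.0.1",
    description="Placeholder package",
    py_modules=[],             # no executable code is included
    entry_points={
        "console_scripts": [
            # the entry-point name mimics a well-known command; the target
            # module does not exist, so the script would fail if invoked
            "requests-cli = reqests_toolbelt.cli:main",
        ],
    },
)
```

A package like this does nothing when installed, but by occupying the name it prevents (or, in an attacker's hands, hijacks) any future project that developers might expect to find there.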
Understanding Slopsquatting: The New AI-Era Threat
The incident gained particular significance within the context of a newly identified attack vector called slopsquatting. This term, coined in 2025, describes the creation of malicious packages with names frequently “hallucinated” by large language models when generating code examples.
Recent research reveals alarming statistics about AI-generated security risks: approximately 20% of AI model responses recommend non-existent packages when generating Python and JavaScript code examples. An analysis of 576,000 code samples identified over 200,000 unique names of non-existent packages, and 43% of those names recurred consistently across similar queries. That consistency is precisely what makes slopsquatting practical: an attacker can predict which phantom names a model will suggest and register them in advance.
The Growing Risk of AI Hallucinations in Development
This emerging threat vector represents a significant shift in the cybersecurity landscape. As developers increasingly rely on AI coding assistants, the risk of installing non-existent packages suggested by these tools creates new opportunities for malicious actors to exploit the gap between AI recommendations and actual package availability.
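One simple mitigation is to confirm that an AI-suggested dependency actually exists before installing it. The minimal sketch below queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json); the second candidate name is a made-up example of the kind of hallucinated package an assistant might suggest.

```python
# Minimal sketch: verify that a suggested package exists on PyPI before
# running `pip install`. Uses only the standard library.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: the project does not exist

for candidate in ["requests", "fastjson-helpers"]:  # second name is made up
    status = "exists" if exists_on_pypi(candidate) else "NOT on PyPI"
    print(f"{candidate}: {status}")
```

Existence alone is a weak signal, of course: a slopsquatted package exists by design, which is why checks like this need to be combined with scrutiny of the package itself.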
The Unexpected Resolution
The situation took an unexpected turn when representatives from VK, the owner of the inbox.ru domain, contacted PyPI administrators to clarify the circumstances. The revelation was surprising: the suspicious activity originated not from cybercriminals but from VK’s internal security team.
VK security specialists confirmed that the mass account registration and package creation were part of a proactive security measure. Their objective was to “reserve” potentially vulnerable package names, preventing their exploitation by threat actors targeting the company’s internal systems. This defensive approach, while well-intentioned, inadvertently triggered PyPI’s security protocols.
Lessons Learned and Future Implications
Following the clarification, VK’s team committed to revising its security practices. The company agreed to discontinue mass package reservation and to develop alternative methods for detecting and preventing abuse attempts. This commitment demonstrates the value of coordinated security efforts between major technology companies and open-source communities.
PyPI administrators subsequently lifted all restrictions on the inbox.ru domain, restoring registration for users of the email service. However, the incident highlighted critical coordination gaps between large technology companies and open-source development communities.
This case underscores the increasing complexity of cybersecurity in the AI-driven development era. Organizations must balance proactive security measures with community standards and communication protocols. For developers, the incident serves as a crucial reminder to verify package names carefully and avoid blind reliance on AI tool recommendations, which may contain inaccuracies or hallucinations. Implementing robust package verification processes and maintaining awareness of emerging threats like slopsquatting will be essential for maintaining secure development practices in an increasingly AI-integrated landscape.
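As one example of such a verification step, the sketch below extends the existence check with a simple age heuristic: a package whose first upload to PyPI happened only days ago deserves extra scrutiny before being trusted. The 30-day threshold is an arbitrary illustration, not an established standard, and the `upload_time_iso_8601` field comes from PyPI's JSON API.

```python
# Sketch of a lightweight pre-install heuristic (Python 3.10+): flag
# packages whose first upload to PyPI is very recent.
import json
import urllib.request
from datetime import datetime, timezone

def first_upload_date(name: str) -> datetime | None:
    """Return the earliest upload timestamp for a PyPI project, if any."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    timestamps = [
        # normalize the trailing "Z" so fromisoformat accepts it on 3.10
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return min(timestamps) if timestamps else None

oldest = first_upload_date("requests")
if oldest is not None:
    age_days = (datetime.now(timezone.utc) - oldest).days
    if age_days < 30:  # arbitrary illustrative threshold
        print(f"Warning: package is only {age_days} days old")
    else:
        print(f"Package has been on PyPI for {age_days} days")
```

Heuristics like this are no substitute for code review or curated internal mirrors, but they catch the most common slopsquatting profile: a freshly registered package occupying a plausible-sounding name.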