Criminal LLMs: How WormGPT 4 and KawaiiGPT Turn Generative AI into a Cybercrime Tool

CyberSecureFox 🦊

Generative AI is no longer just a productivity tool for developers and enterprises. In parallel to legitimate platforms, an underground ecosystem of large language models (LLMs) purposely trained and configured for cybercrime is rapidly emerging. Recent research by Palo Alto Networks into two such models — the commercial WormGPT 4 and the free KawaiiGPT — shows how offensive AI already automates phishing, malware creation and ransomware operations at scale.

Underground LLM market: “no‑rules AI” for cybercriminals

Public services such as ChatGPT and similar LLMs enforce strict safety mechanisms: they filter prompts, block instructions related to malware development and restrict high‑risk scenarios. In contrast, on closed Telegram channels and criminal forums, a different class of systems is being promoted — “AI without rules”, explicitly advertised for phishing, intrusion and bypassing security controls.

These criminal LLMs dramatically lower the barrier to entry into cybercrime. Tasks that previously required solid programming skills and understanding of operating systems or networks — for example, writing ransomware or polymorphic malware — can now be partially automated by anyone with basic technical literacy and some prompt‑engineering ability. This trend reinforces the model of Cybercrime‑as‑a‑Service (CaaS), where attack components are offered as ready‑made, easy‑to‑use services.

WormGPT 4: commercial “no‑limits” offensive AI

WormGPT first appeared in 2023; WormGPT 4 is its current version. Its operators market it as a “key to AI without boundaries”. It is distributed through a Telegram channel with 571 subscribers at the time of the research and through hacking forums such as DarknetArmy. Access is monetised via subscriptions: roughly USD 50 per month or USD 220 for lifetime access, the latter including source code.

The developers do not disclose the LLM architecture or training data. It is unclear whether WormGPT 4 is an illicitly fine‑tuned copy of a major foundation model or a deeply jailbroken variant of an existing engine. In practice, the model operates without embedded safety or ethical controls, responding directly to requests that mainstream platforms would block.

Experiment: AI‑assisted Windows ransomware generation

Palo Alto Networks researchers asked WormGPT 4 to create a ransomware program targeting PDF files on Windows. The model produced a functional PowerShell script in seconds and wrapped it in an aggressive message emphasising “fast and silent digital destruction” — a clear indication of the absence of content filters.

According to the analysis, the generated script used AES‑256 encryption, created a ransom note with a 72‑hour payment deadline and even suggested data exfiltration via the Tor network. Although the code still required manual refinement to bypass modern endpoint protection and detection systems, the model effectively delivered a ready‑made “skeleton” that could significantly accelerate ransomware development for less experienced actors.
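The offensive PowerShell itself is deliberately not reproduced here. As a defensive counterpoint, the minimal Python sketch below implements a canary-file monitor, a common behaviour-based control: it plants decoy documents and alerts the moment anything rewrites or deletes them, which is exactly what an AES-encrypting payload does regardless of how its code was generated. The decoy directory, file names, polling interval and print-based alerting are illustrative assumptions, not details from the Palo Alto Networks research.

    import hashlib
    import os
    import time

    # Illustrative decoy location and poll rate; tune for a real deployment.
    CANARY_DIR = os.path.join(os.path.expanduser("~"), "Documents", "_canary")
    POLL_SECONDS = 5

    def _digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def plant_canaries(count: int = 3) -> dict[str, str]:
        """Create decoy PDF files and record a baseline hash for each."""
        os.makedirs(CANARY_DIR, exist_ok=True)
        baseline = {}
        for i in range(count):
            path = os.path.join(CANARY_DIR, f"invoice_{i}.pdf")
            with open(path, "wb") as f:
                f.write(b"%PDF-1.4 canary\n")  # minimal PDF-looking bait content
            baseline[path] = _digest(path)
        return baseline

    def watch(baseline: dict[str, str]) -> None:
        """Alert when any canary is modified or removed (ransomware-like)."""
        while True:
            for path, expected in baseline.items():
                try:
                    changed = _digest(path) != expected
                except FileNotFoundError:
                    print(f"ALERT: canary deleted: {path}")  # wire to real alerting
                    return
                if changed:
                    print(f"ALERT: canary modified: {path}")
                    return
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        watch(plant_canaries())

In production this logic usually lives inside an EDR agent or file-integrity monitoring tool rather than a standalone script, but the signal is the same.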

KawaiiGPT: free “waifu” interface, real offensive capabilities

KawaiiGPT, launched in July 2025, follows a different distribution model. It is available for free on GitHub and, according to its operators, already has more than 500 registered users, with several hundred active weekly. Marketed as a “sadistic waifu for pentesting”, it uses anime‑style branding and humour to attract users and normalise its offensive purpose.

Behind this playful façade lies a toolset that, in the assessment of multiple experts, is fully capable of supporting real‑world cyberattacks, not just simulated penetration testing. The model is designed to comply with prompts that mainstream LLMs would classify as abusive or high risk.

From phishing templates to lateral movement scripts

In one test, researchers asked KawaiiGPT to write a phishing email impersonating a bank with the subject line “Urgent: verify your account information”. The model generated a convincing message closely resembling genuine banking communication and included a description of a fake website designed to steal payment card data, dates of birth and account credentials.
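Rather than reproduce the lure, a defensive illustration is more useful here. The minimal Python sketch below scores an email for the two signals this experiment highlights: urgency wording, and links whose visible text names one domain while the actual target points somewhere else. The keyword list, weighting and flagging threshold are illustrative assumptions, not a production detector.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    # Illustrative urgency markers; real filters use far richer feature sets.
    URGENCY = ("urgent", "verify your account", "suspended", "immediately")

    class LinkExtractor(HTMLParser):
        """Collect (href, visible text) pairs from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links, self._href = [], None
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href", "")
        def handle_data(self, data):
            if self._href is not None:
                self.links.append((self._href, data.strip()))
                self._href = None

    def phishing_score(subject: str, html_body: str) -> int:
        """Crude heuristic: urgency keywords plus display/target domain mismatch."""
        text = (subject + " " + html_body).lower()
        score = sum(kw in text for kw in URGENCY)
        parser = LinkExtractor()
        parser.feed(html_body)
        for href, visible in parser.links:
            target = urlparse(href).netloc
            # The anchor text looks like a domain but the link goes elsewhere.
            if "." in visible and target and visible.lower() not in target.lower():
                score += 2
        return score  # e.g. queue for review when score >= 3

    print(phishing_score(
        "Urgent: verify your account information",
        '<a href="http://evil.example/login">mybank.com</a>',
    ))

Real mail-filtering stacks combine dozens of such features with sender reputation and ML classifiers; the point is simply that AI-crafted lures still leave structural traces.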

KawaiiGPT also produced a Python‑based script for lateral movement across Linux infrastructure using the SSH library paramiko, code for extracting EML email files on Windows and another variant of a ransomware note. Collectively, these outputs demonstrate the operators’ intention to provide a unified tool for automating multiple attack stages: reconnaissance, access, lateral movement and monetisation.
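On the defensive side of the same technique: scripted SSH lateral movement, whether hand-written or generated, tends to leave a fan-out pattern in authentication logs, with one source address logging into unusually many accounts in a short period. The minimal Python sketch below counts accepted sshd logins per source IP in a standard Linux auth.log; the log path, regex and threshold are illustrative assumptions.

    import re
    from collections import defaultdict

    LOG_PATH = "/var/log/auth.log"   # typical Debian/Ubuntu sshd log location
    FANOUT_THRESHOLD = 5             # distinct accounts per source IP worth flagging

    # Matches sshd lines such as:
    #   ... sshd[1234]: Accepted password for alice from 10.0.0.7 port 51234 ssh2
    ACCEPTED = re.compile(
        r"sshd\[\d+\]: Accepted \S+ for (?P<user>\S+) from (?P<ip>[\d.]+)"
    )

    def ssh_fanout(log_path: str = LOG_PATH) -> dict[str, set[str]]:
        """Map each source IP to the set of accounts it successfully logged into."""
        fanout = defaultdict(set)
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = ACCEPTED.search(line)
                if match:
                    fanout[match["ip"]].add(match["user"])
        return fanout

    if __name__ == "__main__":
        for ip, users in ssh_fanout().items():
            if len(users) >= FANOUT_THRESHOLD:
                print(f"Possible lateral movement: {ip} -> {sorted(users)}")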

Criminal LLMs as the new baseline for cyber risk

Specialised offensive LLMs are becoming a new baseline in the digital threat landscape. The risk is not limited to the sophistication of individual models; it is the combination of powerful generative AI with broad, anonymous availability in underground communities.

Any user with minimal technical skills can now leverage such tools to:

  • generate credible phishing emails and fraudulent landing pages at scale;
  • create and mutate malware, including basic polymorphic variants designed to evade signature‑based detection;
  • automate target profiling (OSINT), service scanning and social‑engineering preparation.

This expansion of capabilities to a wider audience directly increases the number and diversity of potential attacks and complicates the work of security operations teams.

International threat reports from organisations such as ENISA, Europol and major cybersecurity vendors already note that generative AI is becoming a standard element of criminal toolkits, alongside information‑stealers, botnets and ransomware frameworks. Defensive strategies that rely solely on traditional antivirus software and basic awareness training are no longer sufficient.

To stay ahead of AI‑enabled cybercrime, organisations should combine several complementary controls:

  • continuous security awareness training, with an emphasis on spotting AI‑crafted phishing;
  • strong authentication, including multifactor authentication (MFA);
  • behaviour‑based email and network analytics;
  • strict control of scripting environments such as PowerShell (a minimal audit sketch follows below);
  • proactive monitoring for anomalous access patterns.

Recognising criminal LLMs like WormGPT 4 and KawaiiGPT as a systemic risk early, and integrating that factor into cybersecurity strategy, technology investment and incident response planning, will significantly improve resilience against the next wave of AI‑driven attacks.
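For the scripting-control item above, one concrete starting point on Windows is checking that PowerShell script block logging is actually enforced, since that telemetry is what lets defenders see generated payloads such as the WormGPT 4 ransomware skeleton in the clear. The minimal Python sketch below reads the standard machine-policy registry key; treating an absent key as "not enforced" is a simplification, and real estates would push the setting via Group Policy rather than audit it host by host.

    import winreg  # Windows-only standard library module

    # Standard machine-policy location for PowerShell script block logging.
    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"

    def script_block_logging_enabled() -> bool:
        """Return True if script block logging is enforced by machine policy."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                value, _type = winreg.QueryValueEx(key, "EnableScriptBlockLogging")
                return value == 1
        except FileNotFoundError:
            return False  # key absent: logging is not enforced via policy

    if __name__ == "__main__":
        state = "enabled" if script_block_logging_enabled() else "NOT enforced"
        print(f"PowerShell script block logging: {state}")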
