Open-source AI-driven offensive tools are rapidly moving from lab experiments into real attack chains. According to research by Team Cymru, the CyberStrikeAI platform, positioned as an AI-based security testing framework, has been actively used to automate intrusions against Fortinet FortiGate firewalls in multiple countries.
AI platform CyberStrikeAI seen in real Fortinet FortiGate intrusion campaign
Earlier reporting described a Russian-speaking threat actor who, over a five-week period, compromised more than 600 Fortinet FortiGate devices across 55 countries, relying heavily on generative AI tools for reconnaissance and exploitation. Building on this, Team Cymru analyst Will Thomas (BushidoToken) has now linked the open-source CyberStrikeAI platform to that same operation.
On one of the campaign’s servers, with IP address 212.11.64[.]250, investigators identified an instance of the CyberStrikeAI service running on port 8080. NetFlow telemetry — high-level flow data that tracks who talks to whom on a network — showed sustained traffic between this host and multiple compromised FortiGate devices, indicating the AI platform was actively involved in the attack workflow.
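The kind of NetFlow correlation described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual tooling: the flow tuples and internal addresses are invented examples, and a real pipeline would consume records from a NetFlow/IPFIX collector rather than an in-memory list.

```python
# Minimal sketch: flag hosts with repeated flows to a known-bad IP.
# Record format and sample data are hypothetical.
from collections import defaultdict

KNOWN_BAD = {"212.11.64.250"}  # the C2 host flagged in the campaign (defanged in reporting)

# (src_ip, dst_ip, dst_port, bytes) -- illustrative flow records
flows = [
    ("203.0.113.10", "212.11.64.250", 8080, 48_211),
    ("203.0.113.10", "212.11.64.250", 8080, 51_907),
    ("198.51.100.7", "93.184.216.34", 443, 1_204),
]

def suspicious_talkers(flows, min_flows=2):
    """Return source hosts with repeated flows to known-bad destinations."""
    counts = defaultdict(int)
    for src, dst, _port, _bytes in flows:
        if dst in KNOWN_BAD:
            counts[src] += 1
    return {src for src, n in counts.items() if n >= min_flows}

print(suspicious_talkers(flows))  # -> {'203.0.113.10'}
```

Even this toy version captures the core signal the analysts relied on: sustained, repeated traffic between an edge device and a single external host is far more telling than any individual flow.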
The last confirmed CyberStrikeAI activity within this infrastructure dates to 30 January 2026, allowing researchers to confidently associate the tool with the FortiGate compromise campaign.
How CyberStrikeAI works: architecture, AI agents and attack orchestration
On its GitHub page, CyberStrikeAI is described as an AI platform for security testing written in Go. The framework aggregates more than 100 well-known security and penetration-testing tools, combining them with a custom orchestrator, predefined roles, and a system of reusable “skills” that allow AI agents to perform complex attack or audit scenarios end-to-end.
The platform is tightly integrated with modern large language models (LLMs). Its decision engine supports models such as GPT, Claude, DeepSeek and others, and implements the Model Context Protocol (MCP) to coordinate AI agents and external tools. The core idea is full automation from natural-language instructions: an operator can describe a goal in plain English, and the platform translates it into reconnaissance tasks, vulnerability discovery, attack path construction, enrichment of findings, and visualization in a single interface.
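The orchestration pattern described above — a plain-English goal translated into concrete tool invocations — can be illustrated with a small sketch. Every name here is hypothetical; this is not CyberStrikeAI's actual API, and the hard-coded planning table stands in for what a real MCP-based engine would obtain by sending the goal to an LLM and parsing its structured tool-call response.

```python
# Hypothetical sketch of goal-to-task planning; NOT CyberStrikeAI's real API.
from dataclasses import dataclass

@dataclass
class ToolTask:
    tool: str        # registered tool name, e.g. "nmap"
    args: list[str]  # arguments the planner decided on

def plan_tasks(goal: str) -> list[ToolTask]:
    """Stand-in for the LLM planner: map a plain-English goal to tool tasks.
    A real engine would query a model (GPT, Claude, etc.) over MCP here."""
    if "reconnaissance" in goal.lower():
        return [
            ToolTask("nmap", ["-sV", "-p-", "10.0.0.0/24"]),
            ToolTask("gobuster", ["dir", "-u", "http://10.0.0.5"]),
        ]
    return []

tasks = plan_tasks("Run reconnaissance against the lab subnet")
for t in tasks:
    print(t.tool, " ".join(t.args))
```

The point of the sketch is the separation of concerns: the model decides *what* to run, while a deterministic executor handles *how* each registered tool is invoked and how its output is fed back for the next decision.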
CyberStrikeAI exposes a password-protected web interface with detailed activity logging, a SQLite-backed data store, and dashboards to track vulnerabilities, orchestrate tasks, and visualize attack graphs. While this makes it attractive for Red Teams and penetration testers, it also dramatically lowers the barrier for less-experienced threat actors to run sophisticated campaigns.
Offensive toolchain: from scanning to post-exploitation in one AI platform
CyberStrikeAI integrates tooling across almost every stage of the classic kill chain — a model describing the sequence of steps in an attack, from initial reconnaissance to data exfiltration.
For network scanning and reconnaissance, it leverages nmap and masscan. For web application testing, it includes sqlmap, nikto and gobuster, enabling automated discovery of SQL injection, misconfigurations, and hidden directories.
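A typical chained-recon pattern an orchestrator automates is a fast masscan sweep followed by targeted nmap service detection on whatever ports the sweep found. The sketch below only *constructs* the command lines (nothing is executed), and the target addresses and "discovered" ports are invented examples.

```python
# Sketch of the recon chaining described above: masscan sweep, then
# targeted nmap follow-up. Commands are built, not run; data is illustrative.

def masscan_cmd(cidr: str, ports: str = "1-65535", rate: int = 10_000) -> list[str]:
    """Fast full-port sweep of a CIDR range."""
    return ["masscan", cidr, "-p", ports, "--rate", str(rate)]

def nmap_cmd(host: str, ports: list[int]) -> list[str]:
    """Service/version detection (-sV) on the ports the sweep reported open."""
    return ["nmap", "-sV", "-p", ",".join(map(str, ports)), host]

sweep = masscan_cmd("192.0.2.0/24")
# Suppose the sweep reported these open ports on one host (illustrative):
followup = nmap_cmd("192.0.2.15", [443, 8080])

print(" ".join(sweep))
print(" ".join(followup))
```

In an AI-driven framework, the glue between these two steps — parsing the sweep output and deciding which hosts merit deeper probing — is exactly what the agent layer replaces, removing the manual triage a human operator would otherwise perform.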
During exploitation, the platform can call out to Metasploit and pwntools. For password cracking, it integrates hashcat and John the Ripper. Orchestrated by AI agents, these tools can chain exploits and brute-force attempts far more efficiently than a human operator working manually.
In the post-exploitation phase, CyberStrikeAI bundles mimikatz, BloodHound and impacket scripts, streamlining credential theft, privilege analysis and lateral movement inside victim networks. This combination, driven by AI logic, turns the platform into an automation engine for cyber attacks that can be effectively used even by modestly skilled operators.
Global CyberStrikeAI deployment and focus on border network devices
Between 20 January and 26 February 2026, Team Cymru identified at least 21 unique IP addresses hosting CyberStrikeAI services. Most of these systems were located in China, Singapore and Hong Kong, with additional servers observed in the United States, Japan and several European countries. Such geographic dispersion suggests a deliberate effort to build a resilient, distributed infrastructure and diversify ingress points for attacks.
Researchers stress that AI-driven offensive platforms are particularly dangerous for border network devices — firewalls, VPN gateways and routers. These appliances are usually reachable directly from the internet, and once compromised, they provide attackers with a stealthy and highly privileged foothold into internal corporate networks. Previous industry reports, including long-running datasets such as the Verizon Data Breach Investigations Report (DBIR), consistently highlight edge devices as frequent entry points in advanced intrusion campaigns.
Developer profile: Ed1s0nZ and ties to the Chinese cybersecurity ecosystem
The main developer behind CyberStrikeAI, using the handle Ed1s0nZ on GitHub, maintains a broader portfolio of offensive AI tools. Among them are PrivHunterAI, aimed at detecting local privilege escalation vulnerabilities, and InfiltrateX, described as a scanner for privilege escalation and deep system reconnaissance.
Analysis of public activity associated with this profile suggests possible connections to organizations previously linked by Western researchers to Chinese government cyber units. In December 2025, CyberStrikeAI was shared with the Starlink project of Knownsec 404, a major Chinese cybersecurity vendor. In January 2026, the profile also referenced an award from the China National Vulnerability Database (CNNVD), a state-backed vulnerability database sometimes cited in Western analyses for its ties to Chinese intelligence interests. That reference was later removed.
Security risks and practical defenses against AI-powered automated attacks
The case of CyberStrikeAI illustrates the classic dual-use problem in cybersecurity: tools created for legitimate security assessment can be repurposed into “plug-and-play” frameworks for automated AI-powered cyber attacks. As LLMs and orchestration frameworks mature, attackers gain the ability to rapidly scale campaigns against widely deployed technologies, such as Fortinet FortiGate firewalls and other network edge devices.
Organizations should operate under the assumption that similar AI platforms will proliferate. Key defensive measures include rigorous patch management for all border devices, strict limitation of access to administrative interfaces (for example, management only from dedicated internal networks or VPNs), enforced multi-factor authentication, and network segmentation to prevent a single compromised device from exposing the entire environment.
From a detection standpoint, continuous monitoring of NetFlow and other network telemetry is critical to identify anomalous patterns, such as unexpected management traffic to firewalls or previously unseen external IPs like 212.11.64[.]250. Regular penetration tests and red-team exercises should increasingly incorporate scenarios involving AI-driven automated attacks to validate resilience against this emerging class of threats.
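The baseline check suggested above — flag management-plane connections from sources never seen before — reduces to a simple set comparison. The baseline addresses, event records, and port list below are invented examples; a production version would pull both from the organization's own telemetry.

```python
# Sketch of a management-plane baseline check; all data is illustrative.

baseline = {"10.0.5.1", "10.0.5.2"}  # approved management hosts

events = [
    {"src": "10.0.5.1", "dst_port": 443},          # known admin host: fine
    {"src": "212.11.64.250", "dst_port": 8080},    # previously unseen external IP
]

MGMT_PORTS = frozenset({22, 443, 8080})  # ports treated as management-plane

def new_management_sources(events, baseline, mgmt_ports=MGMT_PORTS):
    """Return sources hitting management ports that are not in the baseline."""
    return sorted(
        e["src"] for e in events
        if e["dst_port"] in mgmt_ports and e["src"] not in baseline
    )

print(new_management_sources(events, baseline))  # -> ['212.11.64.250']
```

Simple as it is, this is the class of check that would have surfaced the campaign's C2 traffic: the anomaly was not the volume of traffic but the identity of the peer talking to the firewall's management surface.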
As AI-driven offensive tooling becomes commoditized, the organizations that will fare best are those that invest early in structured vulnerability management, robust monitoring and well-trained security teams. Proactive hardening of edge devices, combined with realistic testing against AI-enabled adversaries, is rapidly becoming a baseline requirement rather than an advanced capability.