Generative AI is rapidly shifting from a productivity tool to a powerful asset in the hands of cybercriminals. According to a report from the Google Threat Intelligence Group (GTIG), the North Korean-linked threat actor UNC2970—overlapping with the well-known Lazarus Group, Diamond Sleet and Hidden Cobra clusters—has begun actively abusing the Google Gemini model to support reconnaissance and preparation for targeted attacks.
UNC2970 and the Evolution of the “Dream Job” Social Engineering Campaigns
UNC2970 has a long history of targeting aerospace, defense and energy organizations. One of its hallmark campaigns, Operation Dream Job, relies on sophisticated social engineering: attackers pose as corporate recruiters, offering highly attractive roles to lure engineers and security specialists into opening malicious attachments or visiting weaponized websites.
GTIG notes that these operations continue to focus on defense contractors and high‑value technical staff. What has changed is that generative AI is now embedded into the tradecraft, making lures more believable, tailored and difficult to distinguish from legitimate outreach.
Using Google Gemini for OSINT and Target Profiling
Investigators observed UNC2970 using Gemini for open-source intelligence (OSINT) and detailed victim profiling. The model was tasked with:
— mapping large cybersecurity and defense companies and their business lines;
— breaking down specific technical roles, skills and responsibilities;
— analyzing salary ranges and market expectations to craft realistic job offers.
This type of AI-assisted OSINT blurs the line between legitimate labor-market analysis and hostile reconnaissance. By understanding corporate structures, compensation and motivations, threat actors can create highly convincing recruiter personas, significantly increasing the success rate of phishing and initial account compromise. Similar patterns are reflected in industry reporting such as the FBI’s annual IC3 report and Verizon’s DBIR, which consistently rank social engineering and business email compromise among the most damaging attack vectors.
HONESTCUE and COINBAIT: Generative AI in Malware and Phishing Toolchains
HONESTCUE: Fileless Malware Generation via the Gemini API
GTIG highlights one particularly notable tool: the malicious loader framework HONESTCUE. Instead of embedding a static second-stage payload, HONESTCUE calls the Google Gemini API, sends a crafted prompt, and receives C# source code for the next stage of the attack.
The generated C# is then compiled and executed directly in memory using the legitimate .NET CSharpCodeProvider class. This fileless approach minimizes disk artifacts, reducing the effectiveness of traditional signature-based antivirus solutions and illustrating how generative AI can be leveraged to produce tailored malware on demand.
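Why dynamically generated payloads defeat signature matching can be shown in a few lines: two functionally identical snippets hash to different values, so a signature derived from one never matches the other. The snippets below are harmless, hypothetical stand-ins for model-generated code, and the sketch is illustrative only.

```python
import hashlib

# Two functionally identical snippets that a code-generating model might
# emit on different runs (harmless, hypothetical stand-ins for a payload).
variant_a = "def stage_two():\n    return sum(range(10))\n"
variant_b = (
    "def stage_two():\n"
    "    total = 0\n"
    "    for i in range(10):\n"
    "        total += i\n"
    "    return total\n"
)

# A signature (hash) of one variant never matches the other...
ha = hashlib.sha256(variant_a.encode()).hexdigest()
hb = hashlib.sha256(variant_b.encode()).hexdigest()
print(ha == hb)  # False: a different artifact on every generation

# ...even though the runtime behavior is identical.
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
print(ns_a["stage_two"]() == ns_b["stage_two"]())  # True
```

This is why defenses against AI-generated payloads tend to shift from static signatures toward behavioral detection.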
COINBAIT: AI-Built Phishing Kit Masquerading as a Crypto Exchange
Another example of AI abuse is the phishing toolkit COINBAIT, created using the Lovable AI service. COINBAIT impersonates a cryptocurrency trading platform to harvest user credentials.
Google associates aspects of COINBAIT activity with the financially motivated cluster UNC5356. The emergence of “AI-assembled” phishing kits dramatically lowers the technical barrier for less-skilled criminals, allowing them to launch professional-looking campaigns at scale with minimal development effort.
ClickFix Campaigns: Exploiting Public Sharing Features of AI Services
GTIG also draws attention to ClickFix campaigns, in which attackers abuse public-sharing features of generative AI platforms. Victims are shown a seemingly legitimate “step-by-step guide to fixing a common computer problem”, but the recommended download is in fact information-stealing malware.
Such activity, observed for example in reports by Huntress in December 2025, demonstrates how user trust in AI-generated “help content” can be weaponized to deliver infostealers and remote access tools under the guise of technical support.
Model Extraction Attacks Against Gemini: Cloning Behavior via the API
Beyond operational use, GTIG reports extensive model extraction attacks targeting Gemini itself. In these attacks, adversaries submit large numbers of queries to a closed model and use the responses to train a surrogate that closely mimics the original system’s behavior.
In one case, over 100,000 prompts were sent in multiple non-English languages to replicate Gemini’s reasoning capabilities across diverse tasks. Public research, such as proof-of-concept work by Praetorian, has shown that with on the order of 1,000 API queries and several training epochs, a copied model can achieve above 80% behavioral similarity to the target.
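The extraction workflow described above can be sketched end to end on a toy problem: query a black-box model, record its answers, and train a surrogate on the harvested pairs. Everything below is a simplified illustration (a linear "target" function standing in for a closed model, and a perceptron as the surrogate), not the actual Praetorian methodology.

```python
import random

random.seed(0)

# Hypothetical black-box model: the attacker sees only its outputs.
def target(x0, x1):
    return 1 if 2.0 * x0 - 1.0 * x1 > 0.5 else 0

# Step 1: harvest query-response pairs from the "API".
queries = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
responses = [target(x0, x1) for x0, x1 in queries]

# Step 2: train a surrogate (a simple perceptron) on the harvested pairs.
w0, w1, b = 0.0, 0.0, 0.0
for _ in range(20):  # a few training epochs, echoing the PoC setup
    for (x0, x1), label in zip(queries, responses):
        pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
        err = label - pred
        w0 += 0.1 * err * x0
        w1 += 0.1 * err * x1
        b += 0.1 * err

# Step 3: measure behavioral similarity on fresh inputs.
test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
match = sum(
    (1 if w0 * x0 + w1 * x1 + b > 0 else 0) == target(x0, x1)
    for x0, x1 in test
)
print(f"surrogate agrees with target on {match / len(test):.0%} of inputs")
```

Note that the surrogate never sees the target's "weights", only its query-response behavior, which is precisely why keeping weights secret does not by itself prevent extraction.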
As researcher Farida Shafik notes, “Many organizations assume that keeping model weights secret is enough. But behavior is the model. Every query–response pair becomes a training example for a clone.” Any widely exposed AI API therefore becomes a potential target for intellectual property theft, evasion testing and downstream misuse.
Google’s Defensive Measures and the AI Arms Race in Cybersecurity
According to GTIG, threat actors regularly attempt to bypass safety controls by framing their prompts as security research or CTF exercises, seeking to coax models into generating harmful content under an “educational” pretext.
Google states that it is continuously hardening Gemini using specialized safety classifiers, stricter guardrails and behavioral monitoring. New attack techniques observed in the wild are fed back into training pipelines to improve the model’s ability to detect persona manipulation and respond safely.
These efforts are aligned with Google’s AI Cyber Defense Initiative, launched in 2024, which positions AI as a way to counter the classic “defender’s dilemma”—where attackers need to find only one weakness while defenders must secure every gap. As AI-augmented attacks increase in quality, volume and speed, defenders must adopt AI-driven detection, threat hunting and incident response capable of operating at machine scale.
Organizations should update their threat models to explicitly account for generative AI abuse: AI-crafted spear-phishing, dynamically generated malware, and model-targeted attacks over public APIs. Practical steps include:
— strict monitoring and segmentation of API keys;
— logging and analyzing access to AI services;
— red-teaming defensive tools against AI-assisted adversaries;
— training staff to recognize highly personalized social engineering.
The sooner such measures are implemented, the more likely it is that AI will become a force multiplier for defenders rather than just a new weapon for cybercriminals.