Google Warns of AI-Driven Malware Linked to North Korea Crypto Theft

- Google disables abusive accounts and tightens Gemini safeguards to counter AI threats.
- AI-enabled malware queries LLMs at runtime to rewrite code and evade detection.
- North Korean UNC1069 actors used Gemini to target crypto wallets and craft phishing content.
Google’s Threat Intelligence Group (GTIG) reports that government-backed hackers and cybercriminals now run large language models (LLMs) as part of live malware campaigns. The assessment identifies at least five malware families that connect to models such as Gemini and Qwen2.5-Coder during execution to generate, mutate, or hide malicious code.
GTIG describes this trend as a shift from traditional malware design, where developers hard-code most logic inside the binary. The new malware strains instead rely on “just-in-time code creation.” The program sends prompts to an external AI model and receives fresh payloads or obfuscated scripts in response.
Two families, known as PROMPTFLUX and PROMPTSTEAL, show how attackers weave AI directly into their operations. PROMPTFLUX runs a “Thinking Robot” component that calls Gemini’s API on an hourly schedule to rewrite its own VBScript code. GTIG links PROMPTSTEAL to Russia’s APT28. It uses a Qwen model hosted on Hugging Face to craft Windows commands on demand.
By rewriting their code through LLM prompts, these tools seek to evade signature-based defenses and frustrate static analysis. Security teams now face malware that can change structure repeatedly, without a full recompile, while still pursuing attacker-defined objectives.
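This dependence on a remote model is also a weakness defenders can use: every mutation requires an outbound call to a known AI endpoint. The following is a minimal hunting sketch; the log format, hostname list, and process allowlist are illustrative assumptions, not details from the GTIG report.

```python
# Hunt proxy logs for LLM API calls made by unexpected processes.
# The log format ("process,destination_host" per line), the hostname list,
# and the process allowlist are assumptions for this sketch.

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face hosted models
}

# Software expected to reach model APIs in this hypothetical environment.
ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe"}

def flag_llm_callers(log_lines):
    """Yield (process, host) pairs where an unexpected process hits an LLM API."""
    for line in log_lines:
        process, _, host = line.strip().partition(",")
        if host in LLM_API_HOSTS and process not in ALLOWED_PROCESSES:
            yield process, host

if __name__ == "__main__":
    sample = [
        "chrome.exe,generativelanguage.googleapis.com",   # expected caller
        "wscript.exe,generativelanguage.googleapis.com",  # VBScript host, PROMPTFLUX-like
        "svchost.exe,api-inference.huggingface.co",       # PROMPTSTEAL-like
    ]
    for proc, host in flag_llm_callers(sample):
        print(f"ALERT: {proc} contacted model endpoint {host}")
```

In practice, TLS and shared cloud infrastructure make hostname matching imperfect, so teams typically pair this with process-level telemetry, but the core signal, model API traffic from software that has no business generating text, remains useful.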
North Korean UNC1069 Group Exploits Gemini for Wallet Raids
The report also calls out a North Korean threat actor that GTIG tracks as UNC1069, or Masan. The group targets cryptocurrency through social engineering and malware. According to GTIG, the group used Gemini to research wallet data locations and generate scripts that access encrypted storage. It also used the model to write multilingual phishing messages for staff at crypto exchanges.
These prompts aimed to locate wallet application folders, identify where keys or seed phrases might reside, and automate attempts to exfiltrate that data. The phishing lures referenced IT support and account security themes in several languages.
This use of AI reduces the time required to prepare theft operations and lowers the technical barrier for some tasks. While UNC1069 still needs custom tooling to deploy malware and move stolen assets, LLM-generated content can speed up reconnaissance and phishing stages.
Related: Bitcoin Faces Quantum Threat After Google’s Willow Leap
Google Tightens Gemini Safeguards to Curb AI Attack Tools
Google states that it disabled the accounts and projects associated with the identified misuse of Gemini. The company also reports that it refined prompt filters and increased monitoring of API usage. It fed new intelligence into its classifiers to help block similar behavior.
In its wider AI security guidance, Google encourages organizations to log model access and enforce strict API key controls. It also recommends that teams treat AI endpoints as potential paths for data theft or malware control. The company notes that defenders should assume adversaries can experiment with public LLMs, and should design detection and access controls with that reality in mind.
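Those recommendations map to fairly simple gateway-side controls. The sketch below is a hedged illustration, assuming a hypothetical internal model endpoint; the key store, log format, and forwarding stub are stand-ins rather than Google's implementation.

```python
import hashlib
import json
import time

# Gateway-side checks for a hypothetical internal model endpoint.
# The key list, log destination, and forwarding stub are sketch assumptions.

VALID_KEY_HASHES = {
    # Store SHA-256 hashes of issued keys, never the raw keys themselves.
    hashlib.sha256(b"example-team-key").hexdigest(),
}

def authorize(api_key: str) -> bool:
    """Accept only API keys issued to known teams."""
    return hashlib.sha256(api_key.encode()).hexdigest() in VALID_KEY_HASHES

def audit_log(api_key: str, prompt: str) -> None:
    """Record who called the model and when; entries feed later abuse review."""
    entry = {
        "ts": time.time(),
        "key_hash": hashlib.sha256(api_key.encode()).hexdigest()[:12],
        "prompt_chars": len(prompt),  # size only; keep full text per retention policy
    }
    print(json.dumps(entry))  # stand-in for a real logging pipeline

def forward_to_model(prompt: str) -> str:
    """Placeholder for the real model call behind the gateway."""
    return f"(model response to {len(prompt)}-char prompt)"

def handle_request(api_key: str, prompt: str) -> str:
    if not authorize(api_key):
        return "403: unknown API key"  # rejected keys can also trigger alerts
    audit_log(api_key, prompt)
    return forward_to_model(prompt)

print(handle_request("example-team-key", "Summarize today's threat report."))
```

Storing key hashes rather than raw keys and logging every call gives teams an audit trail to review when a key starts issuing prompts that look like reconnaissance.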
The GTIG findings indicate that AI-enabled malware now forms part of the mainstream threat landscape rather than a set of isolated experiments. As attackers integrate LLMs into code generation, obfuscation, and phishing, security teams in the crypto sector and beyond must adapt their monitoring. The report frames this evolution as a long-term challenge for digital asset security, and GTIG plans to keep tracking AI-assisted intrusion activity.