Adaptive Malware Arrives: LameHug Uses AI to Evolve Mid-Attack

Written by Admin | Jul 18, 2025 2:40:54 PM

A newly discovered malware strain named LameHug is turning heads as the first publicly documented malware to use a large language model (LLM) to generate system commands in real time during an active attack. 

Uncovered by Ukraine’s national CERT (CERT-UA), the malware is believed to be linked to the Russian state-backed group APT28 (also known as Fancy Bear or STRONTIUM). It was distributed via phishing emails sent from compromised accounts, impersonating government officials and targeting executive bodies. The malicious emails included a ZIP file carrying one of several known LameHug loaders, including files named Attachment.pif, AI_generator_uncensored_Canvas_PRO_v0.9.exe, and image.py. 

How LameHug Uses AI in the Attack Chain 

What sets LameHug apart is its use of the Qwen2.5-Coder-32B-Instruct model, an open-source LLM created by Alibaba Cloud and accessed via Hugging Face’s inference API. This model, designed for code generation and reasoning, is used by the malware to generate shell commands dynamically from text prompts. 

Once active on a compromised Windows system, LameHug uses the AI-generated commands to perform tasks such as: 

  • Gathering system information and saving it to a text file 
  • Searching for documents across key folders like Documents, Desktop, and Downloads 
  • Exfiltrating stolen data via SFTP or HTTP POST requests 

This dynamic approach means attackers no longer need to embed hardcoded commands in their malware. Instead, they can generate tailored instructions in real time, potentially making it harder for traditional detection tools to flag suspicious behaviour. 
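One way defenders can still catch this pattern is to correlate behaviours rather than signatures: a process that both spawns a shell and reaches out to a public LLM inference endpoint is unusual. The sketch below illustrates that idea; the event schema, endpoint watchlist, and process names are illustrative assumptions standing in for real EDR telemetry, not published LameHug indicators.

```python
# Hypothetical correlation heuristic: flag processes that both spawn a
# shell and resolve a public LLM inference endpoint. The schema below is
# a made-up stand-in for real endpoint telemetry.

LLM_API_HOSTS = {"api-inference.huggingface.co"}  # assumed watchlist
SHELL_IMAGES = {"cmd.exe", "powershell.exe"}      # assumed shell binaries

def flag_llm_assisted_processes(events):
    """Return names of processes seen doing BOTH behaviours.

    `events` is a list of dicts with keys: process, action, target.
    """
    spawned_shell = set()
    called_llm_api = set()
    for ev in events:
        if ev["action"] == "child_process" and ev["target"] in SHELL_IMAGES:
            spawned_shell.add(ev["process"])
        elif ev["action"] == "dns_query" and ev["target"] in LLM_API_HOSTS:
            called_llm_api.add(ev["process"])
    # Only the intersection is suspicious: a browser hitting Hugging Face
    # or an admin tool spawning cmd.exe alone is normal.
    return sorted(spawned_shell & called_llm_api)

# Example telemetry: one benign browser, one LameHug-style loader.
sample = [
    {"process": "chrome.exe", "action": "dns_query",
     "target": "api-inference.huggingface.co"},
    {"process": "Attachment.pif", "action": "dns_query",
     "target": "api-inference.huggingface.co"},
    {"process": "Attachment.pif", "action": "child_process",
     "target": "cmd.exe"},
]
print(flag_llm_assisted_processes(sample))  # → ['Attachment.pif']
```

Real deployments would source these events from an EDR or Sysmon pipeline, but the correlation logic is the point: neither behaviour alone is a reliable signal.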

Why This Matters 

By embedding real-time command generation using AI, LameHug opens the door to more flexible, adaptive attack strategies. Threat actors could feasibly adjust their tactics mid-compromise without needing new payloads, improving both stealth and efficiency. 

Moreover, by using Hugging Face’s infrastructure for command-and-control activity, attackers may bypass traditional network monitoring tools, blending in with legitimate traffic and extending dwell time within compromised environments. 
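A pragmatic counter is to treat LLM API endpoints like any other sensitive destination and baseline which machines are actually expected to reach them. Below is a minimal triage sketch, assuming proxy logs as (host, destination) pairs and a hypothetical allow-list of ML workstations; both names and schema are assumptions for illustration.

```python
# Hypothetical triage: surface hosts calling Hugging Face that are not
# on a locally maintained allow-list of expected clients.

EXPECTED_HF_CLIENTS = {"dev-ml-01"}  # assumed allow-list of ML workstations

def unexpected_hf_clients(proxy_log):
    """Return hosts reaching huggingface.co that are not expected to."""
    hits = {
        host
        for host, dest in proxy_log
        if dest.endswith("huggingface.co") and host not in EXPECTED_HF_CLIENTS
    }
    return sorted(hits)

# Example proxy records: a known ML workstation and a finance laptop.
log = [
    ("dev-ml-01", "api-inference.huggingface.co"),
    ("finance-lt-07", "api-inference.huggingface.co"),
    ("finance-lt-07", "intranet.example.local"),
]
print(unexpected_hf_clients(log))  # → ['finance-lt-07']
```

The allow-list approach will not stop the traffic, but it turns "blends in with legitimate traffic" into "legitimate for whom?", which is a question proxy or DNS logs can answer.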

Our CTO, Juliette Hudson, shared the following:  

“LameHug represents a significant evolution in adaptive malware.  

Unlike earlier proofs of concept such as BlackMamba, which also generated malicious code with a cloud-based LLM at runtime, LameHug takes that concept operational: it calls a hosted transformer model mid-attack to generate commands and adapt its behaviour to the compromised system.  

This marks one of the first known cases of LLM-enabled malware used in active campaigns. If this trend continues, we could see threat actors increasingly leveraging on-device AI for autonomous decision-making, evasion, and even real-time planning of lateral movement or credential attacks.” 

Looking Ahead 

While it’s not yet clear how successful the LLM-generated commands were in this particular campaign, the use of AI in this context is a clear warning sign. LameHug may be an early example of how attackers are beginning to weaponise accessible AI tools to create more evasive and adaptable malware. 

At CybaVerse, we continue to help organisations stay ahead through real-time threat intelligence, expert-led services, and our cyber security platform. 

If you'd like to understand how your organisation can detect and respond to evolving threats like LameHug, you can get in touch with us here.