Attackers Hid Malware in Vector Image

Hackers behind a phishing campaign appear to have used artificial intelligence to hide malware behind a wall of overly complex and functionally useless code, Microsoft said.
The computing giant said Wednesday it ran code culled from credential phishing malware through its own AI tool. Microsoft Security Copilot pronounced the code to be “not something a human would typically write from scratch due to its complexity, verbosity and lack of practical utility.”
The attack began with phishing emails that used a self-addressing tactic: the sender and recipient addresses matched, while the actual targets were hidden in the BCC field to evade basic detection systems. The emails contained messages crafted to resemble file-sharing notifications, with attachments named “23mb – PDF- 6 pages.svg” designed to pass as legitimate PDF documents despite their SVG file extension.
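As a rough illustration only, the Python sketch below shows how a mail filter might flag that combination of traits. The logic and thresholds are assumptions made for this article, not Microsoft’s detection method.

```python
# Minimal sketch (illustrative assumptions, not Microsoft's detection logic):
# flag messages that are self-addressed and carry an SVG attachment with a
# PDF-sounding filename.
from email import message_from_bytes
from email.message import Message


def looks_suspicious(raw_message: bytes) -> bool:
    msg: Message = message_from_bytes(raw_message)

    # Self-addressing: the visible sender and recipient are the same mailbox;
    # the real targets sit in BCC and never appear in the delivered headers.
    sender = (msg.get("From") or "").strip().lower()
    recipient = (msg.get("To") or "").strip().lower()
    self_addressed = bool(sender) and sender == recipient

    # Masquerading attachment: a filename that advertises "PDF" but ends in .svg.
    pdf_lookalike_svg = any(
        (part.get_filename() or "").lower().endswith(".svg")
        and "pdf" in (part.get_filename() or "").lower()
        for part in msg.walk()
    )

    return self_addressed and pdf_lookalike_svg
```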
Vector image files offer advantages to attackers because they are text-based and scriptable, allowing embedded JavaScript and dynamic content that can deliver interactive phishing payloads while appearing benign to users and security tools. The format supports obfuscation-friendly features such as invisible elements, encoded attributes and delayed script execution that can evade static analysis and sandboxing.
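To make that concrete, the hedged Python sketch below scans raw SVG markup for the kinds of features the researchers describe; the specific patterns are illustrative heuristics, not a vetted detection rule.

```python
# Illustrative static checks for the obfuscation-friendly SVG features described
# above: embedded scripts, inline event handlers, invisible elements and delayed
# execution. The regexes are rough heuristics, not production detection logic.
import re


def svg_static_findings(svg_text: str) -> list[str]:
    findings = []
    if re.search(r"<script\b", svg_text, re.IGNORECASE):
        findings.append("embedded <script> element")
    if re.search(r"\bon(load|click|error)\s*=", svg_text, re.IGNORECASE):
        findings.append("inline event handler")
    if re.search(r'opacity\s*[:=]\s*"?0(\.0+)?"?', svg_text, re.IGNORECASE):
        findings.append("fully transparent element")
    if re.search(r"set(Timeout|Interval)\s*\(", svg_text):
        findings.append("delayed script execution")
    return findings
```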
Should a target open the SVG file, it redirected them to a webpage prompting them to complete a CAPTCHA for security verification, a social engineering tactic designed to build trust and delay suspicion. Microsoft’s visibility was limited to the initial landing page, but the researchers assessed that the campaign would likely have presented fake sign-in pages.
Microsoft’s analysis of the SVG code showed unusual obfuscation methods that distinguished this campaign from typical phishing attempts. Instead of the cryptographic obfuscation commonly employed to hide phishing content, the hackers used business-related language to disguise malicious activity through two primary techniques.
The hackers structured the beginning of the SVG code to resemble a legitimate business analytics dashboard, with elements such as chart bars and month labels. They rendered those elements invisible by setting their opacity to zero and their fill to transparent, creating a decoy designed to mislead casual inspection by making the SVG appear solely focused on visualizing business data.
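The mock-up below, written for this article rather than taken from the attackers’ file, shows what such an invisible decoy structure could look like.

```python
# Purely illustrative mock-up of the decoy described above: dashboard-style
# elements (chart bars, month labels) rendered invisible via opacity and fill.
decoy_svg = """
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">
  <rect x="10" y="40" width="20" height="120" opacity="0" fill="transparent"/>
  <rect x="40" y="60" width="20" height="100" opacity="0" fill="transparent"/>
  <text x="10" y="180" opacity="0" fill="transparent">Jan</text>
  <text x="40" y="180" opacity="0" fill="transparent">Feb</text>
</svg>
"""

# Every decoy element is invisible: a renderer shows a blank image, while a
# casual source review sees what looks like a harmless business chart.
print('invisible elements:', decoy_svg.count('opacity="0"'))
```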
The attackers concealed the payload’s functionality through what the researchers termed a “creative” use of business terminology, hiding the malicious code in a series of business-related terms such as “revenue,” “operations,” “risk” and “shares,” embedded in a concealed section of the file that users cannot see.
The embedded JavaScript processed these business-related words through transformation steps, with the hackers encoding the payload by mapping pairs or sequences of business terms to specific characters or instructions. As the script executed, it decoded the sequence and reconstructed hidden functionality from what appeared to be harmless business metadata, including browser redirection, fingerprinting and session tracking.
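The real mapping has not been published, but a hypothetical reconstruction of the decoding idea might look like the Python sketch below; the dictionary, the hidden word sequence and the decoded URL are all invented for illustration.

```python
# Hypothetical illustration of the decoding scheme described above, not the
# campaign's actual code: business terms stand in for single characters, and a
# hidden sequence of those terms decodes into an instruction.
TERM_TO_CHAR = {  # invented mapping, for illustration only
    "revenue": "h", "operations": "t", "risk": "p", "shares": "s",
    "growth": ":", "margin": "/", "quarterly": "e", "assets": "x",
    "audit": "a", "capital": "m", "pipeline": "l", "forecast": ".",
    "equity": "c", "overhead": "o",
}

# What looks like harmless business metadata tucked into an invisible element...
hidden_sequence = (
    "revenue operations operations risk shares growth margin margin "
    "quarterly assets audit capital risk pipeline quarterly forecast "
    "equity overhead capital"
)

# ...decodes, term by term, into a redirect target.
decoded = "".join(TERM_TO_CHAR[word] for word in hidden_sequence.split())
print(decoded)  # https://example.com (reserved example domain, used as a placeholder)
```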
Security Copilot determined that the code was likely synthetic and probably generated by an LLM or a tool built on one, showing complexity and verbosity usually not seen in manually written scripts.
The AI analysis tool identified five key indicators of machine-generated code. Function and variable names followed consistent patterns, combining descriptive English terms with random letter-number suffixes, a naming style typical of AI-generated code. The code structure was highly organized, with similar logic patterns repeated throughout, reflecting the systematic approach that characterizes AI output. Embedded comments were wordy and generic, using formal business language such as “advanced business intelligence data processor.” The obfuscation techniques were implemented in ways both thorough and formulaic, matching AI code generation styles. The file also included technical elements, such as strict adherence to formatting conventions, that are more common in AI-written code.
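In the same spirit, a defender could turn some of those indicators into rough heuristics, as in the hedged sketch below; the patterns and thresholds are assumptions made for this article, not Security Copilot’s method.

```python
# Loose heuristics inspired by the indicators above (assumed patterns, not
# Security Copilot's analysis): look for descriptive-name-plus-random-suffix
# identifiers and for wordy, formal comments in a script.
import re


def ai_style_signals(source: str) -> dict[str, int]:
    # Identifiers pairing descriptive English words with letter-number codes,
    # e.g. processRevenueData_x7k2 (an invented example).
    hybrid_names = re.findall(r"\b[a-zA-Z]{6,}_?[a-z0-9]{0,2}\d[a-z0-9]{1,4}\b", source)

    # Long comments written in formal, generic language (JavaScript comment syntax).
    comments = re.findall(r"//.*|/\*[\s\S]*?\*/", source)
    wordy_comments = [c for c in comments if len(c.split()) >= 8]

    return {
        "hybrid_identifiers": len(hybrid_names),
        "wordy_comments": len(wordy_comments),
    }
```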
Microsoft said it detected and blocked the campaign by analyzing signals across infrastructure, behavior and message context that were largely unaffected by the attackers’ use of AI. AI-generated attacks still follow the same basic patterns and use the same infrastructure as human-created attacks, making them detectable through existing security methods.