The Growing Threat of AI Infostealers
Update, March 21, 2025: This article, originally published on March 19, now includes insights from a new report on the AI threat landscape and a statement from Google concerning Chrome password manager vulnerabilities.
There appears to be virtually no way to halt the rise of infostealing malware. With 2.1 billion credentials already compromised by this threat, 85 million newly stolen passwords being used in ongoing attacks, and tools that can defeat browser security in seconds, the situation is alarming. Now, recent research shows that hackers can use a large language model jailbreak technique, known as an immersive world attack, to get AI to build infostealer malware for them. Here's what you need to know.
The Simple Creation of AI Password Infostealers
A threat intelligence analyst with no malware coding experience jailbroke multiple large language models and prompted the AI to create a fully functional password infostealer capable of extracting sensitive data from the Google Chrome web browser.
This unsettling revelation comes from a new threat intelligence report published by Cato Networks on March 18. Using the immersive world jailbreak technique, the researcher bypassed the built-in guardrails that large language models employ to prevent exactly this kind of misuse.
Understanding the Immersive World AI Attack
According to Cato Networks’ researchers, an immersive world attack uses “narrative engineering” to circumvent the security guardrails of large language models. The attacker constructs a detailed, entirely fictional world and assigns the LLM a role within it, effectively normalizing operations that would otherwise be restricted. The report revealed that the researcher successfully drew three different AI tools into this fictional scenario, each with its own role and challenges.
The outcome, as detailed in the Cato Networks report, was malicious code that successfully extracted credentials from the Chrome password manager. “This confirms both the Immersive World technique and the functionality of the generated code,” the researchers noted. Cato Networks attempted to notify all of the implicated AI tool vendors: DeepSeek did not respond, while Microsoft and OpenAI acknowledged receipt of the findings. Google received the report but declined to review the code.
The Current State of AI Security
New findings from Zscaler, outlined in the ThreatLabz 2025 AI Security Report released on March 20, paint a concerning picture of the AI threat landscape. Enterprise use of AI tools grew a staggering 3,000% year-over-year, prompting Zscaler to warn of the pressing need for security controls as these technologies rapidly spread across industries. Its analysis found that enterprises blocked 59.9% of all AI and machine learning transactions, out of more than 536.5 billion transactions observed between February 2024 and December 2024.
The risks driving those blocks ranged from data leakage and unauthorized access to compliance violations. “Threat actors are increasingly using AI to enhance the sophistication, speed, and impact of their attacks,” Zscaler stated, underscoring the need for both businesses and consumers to rethink their security strategies.