Seamless Phishing: New Attack Leverages Generative AI for Runtime Code Assembly
TL;DR: Researchers from Palo Alto Networks demonstrate a novel attack using generative AI to create dynamic phishing pages by generating malicious JavaScript in real time within a victim's browser. Because the code is delivered from trusted LLM domains and differs on every session, the technique evades static signatures and network-level blocking, posing a significant risk to users.
Recent research by Palo Alto Networks reveals a new attack methodology that exploits generative AI to craft effective phishing pages dynamically. The technique involves loading an innocent-looking webpage, which then makes client-side API requests to trusted large language model (LLM) services. Through cleverly engineered prompts, attackers can manipulate these models to generate malicious JavaScript code, assembling it on-the-fly in the victim’s browser. As a result, each session produces a unique and evasive phishing page, challenging conventional detection methods.
The attack begins with a benign webpage that, upon loading, communicates with popular LLMs such as DeepSeek or Google Gemini. Using prompts engineered to sidestep the models' safety guardrails, attackers retrieve tailored JavaScript snippets that execute in real time. The polymorphic nature of this malicious content defeats static, signature-based detection, while its delivery from trusted domains further complicates network analysis. Because the malicious payload is constructed and executed only at runtime, traditional security measures struggle to identify the threat.
This emerging attack model is concerning as it indicates a shift in how phishing campaigns can be conducted, leveraging advanced AI capabilities for evasion tactics. The dynamic generation of code enables attackers to personalize their approaches based on the target, enhancing the effectiveness of their efforts.
The importance of this discovery lies in its potential real-world impact on both individuals and organizations. The ability to circumvent existing defenses necessitates stronger protections. Companies should implement runtime behavioral analysis to detect and block such attacks at the execution level in the browser. Limiting the use of unsanctioned LLM services and enhancing AI safety guardrails are also crucial to mitigate these risks.
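One concrete execution-level control is a Content-Security-Policy that blocks both string-to-code execution and client-side connections to unsanctioned LLM endpoints. The policy below is a generic sketch, not taken from the research, and the allow-listed origin is a placeholder.

```
Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self' https://api.approved-llm.example
```

Omitting `'unsafe-eval'` from `script-src` makes the browser reject `eval()` and the `Function` constructor, so runtime code assembly fails even if a malicious snippet is retrieved; `connect-src` limits which origins client-side `fetch`/XHR may reach, blocking calls to LLM services that are not on the organization's approved list.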
To reduce exposure to these advanced threats, organizations should combine threat intelligence, SIEM correlation, and advanced URL filtering to monitor traffic at the network level, together with browser-level protections that inspect script behavior at the point of execution.
Currently, there are no specific IOCs identified for this attack.



