Google discovers new PROMPTFLUX virus: artificial intelligence helps cyber threats rewrite their code

17.11.2025 | Technologies

Google researchers have uncovered the PROMPTFLUX malware, which uses AI to dynamically change and hide its own code, making it difficult for antivirus programs to detect.

Photo by The Pancake of Heaven!, Wikimedia Commons (CC BY-SA 4.0)

The Google Threat Intelligence Group team announced that it has identified a new experimental type of malware, PROMPTFLUX, which uses artificial intelligence (AI) to modify its own program code in the course of an attack. For this it relies on the Gemini AI model: the malware periodically sends prompts requesting new variations of its VBScript code to improve concealment and evade static antivirus signatures.

PROMPTFLUX is the first virus to implement a "Thinking Robot" module that repeatedly consults an LLM (large language model) to change its algorithms in real time. Every hour, using an API key and a direct connection to the AI service, PROMPTFLUX automatically generates a new version of its core and hides it in the Windows startup folder to ensure the infection persists and to complicate analysis.

Google reports that several experimental versions of PROMPTFLUX have been discovered, some of which rewrite their entire code every hour, using machine-readable prompts designed to make the regenerated code as difficult as possible to analyze. The malware automatically records the AI's responses in system logs and has the potential to spread through USB devices and network shares.

PROMPTFLUX is currently in the testing phase: there are no confirmed cases of large-scale system compromise, but experts warn that the technique is well suited to creating hard-to-detect viruses and may mark a new trend in the evolution of cyberattacks.
"This is an important step towards fully autonomous, self-adapting malicious code. Each automatic regeneration of the virus complicates the task of protecting organizations," Google commented.

The Google report notes that in the future, malicious actors will move from using AI as an exception to making it a core tool of their attacks, capable of scaling malicious campaigns many times over. Experts advise organizations to raise their level of cybersecurity and implement dynamic AI analysis systems.
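For illustration, the minimal defensive sketch below periodically checks the Windows Startup folder (one of the persistence locations described in the report) for new or modified VBScript files and flags them. The folder path, polling interval, and alert output are assumptions chosen for this example and are not part of Google's published findings or tooling.

```python
# Illustrative sketch only: flag new or changed .vbs files in the Windows
# Startup folder, a persistence location mentioned in the report.
# Path, polling interval, and alerting are assumptions for demonstration.
import hashlib
import os
import time
from pathlib import Path

STARTUP_DIR = Path(os.path.expandvars(
    r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"))
POLL_SECONDS = 60  # assumed polling interval


def snapshot(folder: Path) -> dict:
    """Return {filename: sha256 hex digest} for every .vbs file in the folder."""
    return {
        path.name: hashlib.sha256(path.read_bytes()).hexdigest()
        for path in folder.glob("*.vbs")
    }


def main() -> None:
    known = snapshot(STARTUP_DIR)
    while True:
        time.sleep(POLL_SECONDS)
        current = snapshot(STARTUP_DIR)
        for name, digest in current.items():
            if known.get(name) != digest:
                # A script that keeps rewriting itself in a persistence
                # folder is worth investigating.
                print(f"ALERT: {name} is new or modified (sha256={digest})")
        known = current


if __name__ == "__main__":
    main()
```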