Google's Threat Intelligence Group has identified a significant trend in which cybercriminals are using large language models (LLMs) to build new, highly adaptive malware families. This approach enables "just-in-time" self-modification, letting the malware dynamically rewrite its code during execution to improve versatility and evade detection. One example is the experimental PromptFlux dropper, which calls Google's own Gemini model to generate fresh, obfuscated script variants that slip past detection.
Another tool, PromptSteal, demonstrates how AI can be used for data-mining operations. These AI-powered malware families, including the FruitShell reverse shell and the QuietVault credential stealer, represent a shift toward systems that can generate code on demand, search for secrets, and adapt their behavior. Google has responded by disabling API access for the malicious accounts associated with these threats.
Beyond creating malware, state-backed hacking groups from China, Iran, and North Korea are abusing AI models such as Gemini across multiple stages of their attacks. Their activities include developing exploits, crafting phishing lures, debugging malware, and even creating deepfakes. In response, Google has reinforced its AI model safeguards and terminated the abusive accounts.
Concurrently, underground forums are seeing a growing market for AI-powered cybercrime tools that lower the technical barrier to conducting sophisticated attacks. Google emphasizes that AI must be developed responsibly, with strong safety measures to prevent and disrupt such malicious use, and says it is using insights from these threats to continuously improve the security of its own AI platforms.
