Security researchers have identified what may be the earliest known malware embedded with a large language model, dubbed MalTerminal. This Windows executable uses OpenAI’s GPT-4 to dynamically generate ransomware code or reverse shells based on user input. Although there is no evidence of real-world deployment, its existence signals a growing trend of LLM-integrated malicious tools.
Discovered by SentinelOne, MalTerminal includes deprecated API endpoints suggesting it was created before November 2023. Alongside the binary, Python scripts with similar functionality were found, as well as a defensive tool named FalconShield that uses GPT-4 to analyze files for malicious intent. This represents a qualitative shift in attacker capabilities, enabling runtime generation of harmful code and complicating defense efforts.
In a related development, threat actors are using hidden prompt injections in phishing emails to deceive AI-powered security scanners. These messages contain concealed instructions that trick AI systems into classifying malicious emails as safe, allowing them to reach inboxes. When opened, attachments exploit known vulnerabilities like Follina to disable defenses and deploy additional payloads.
Furthermore, attackers are leveraging AI site builders to host fake CAPTCHA pages that redirect to credential-harvesting sites. These platforms offer free hosting and credible branding, enabling scalable phishing campaigns that evade automated detection. The convergence of AI and cybercrime is creating new challenges for enterprise security, emphasizing the need for adaptive and context-aware defensive strategies.
Read more...
