AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security
Researchers warn that polymorphic malware created with ChatGPT and other
LLMs will force a reinvention of security automation.
Researchers from HYAS Labs demonstrated the proof-of-concept attack, dubbed BlackMamba, which exploits a large language model (LLM), the technology on which ChatGPT is based, to synthesize polymorphic keylogger functionality on the fly. The attack is "truly polymorphic" in that every time BlackMamba executes, it resynthesizes its keylogging capability, the researchers wrote.
The BlackMamba attack, outlined in a blog post, demonstrates how AI can let malware dynamically modify benign code at runtime without any command-and-control (C2) infrastructure. Because current automated security systems are tuned to detect exactly that kind of C2 behavior, the attack slips past them undetected.
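The pattern described above, generating code at runtime and executing it only in memory, can be illustrated with a deliberately benign sketch. Here `synthesize_code` is a hypothetical stand-in for an LLM API call (BlackMamba's actual prompts and model interactions are not reproduced); the point is that each run yields differently named, differently shaped source text, so no two executions share a static signature and nothing is written to disk.

```python
import random
import string
import types

def synthesize_code() -> str:
    """Stand-in for an LLM call: emit a freshly generated, randomly
    named function each run, so every execution produces different
    source text (and thus a different static signature)."""
    fn = "fn_" + "".join(random.choices(string.ascii_lowercase, k=8))
    # Benign payload: uppercase the input. A real attack would
    # synthesize keylogging logic at this step instead.
    return f"def {fn}(text):\n    return text.upper()\n"

def run_in_memory(source: str):
    """Compile and exec the generated source in a throwaway
    namespace; the code exists only in memory, never on disk."""
    ns = {}
    exec(compile(source, "<generated>", "exec"), ns)
    # Return whatever function the generated code defined.
    return next(v for v in ns.values()
                if isinstance(v, types.FunctionType))

payload = run_in_memory(synthesize_code())
print(payload("hello"))  # HELLO
```

Because the malicious logic is produced fresh on each run and executed via `exec` in memory, file-based signature scanning has nothing stable to match against, which is why the researchers argue detection must shift to behavioral and runtime telemetry.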