There seems to be a new Trojan horse or phishing scam daily, yet cybersecurity and IT teams remain up to the challenge of familiarizing themselves with new threats. Polymorphic malware is on the rise, but it has existed for some time. What does artificial intelligence (AI) have to do with it, and how does it change the game for experts?
What is polymorphic malware?
Polymorphic malware alters its own code as it embeds itself into its target. It can take the form of a virus, a bot or anything in between. Metamorphic malware behaves similarly, but polymorphic variants encrypt their payload with a variable encryption key, whereas metamorphic malware does not. This constantly changing appearance and signature is what gives the attack type its name, and it is what makes the malware harder to isolate.
The anatomy of polymorphic malware is straightforward despite its ever-changing qualities. A malicious payload changes shape with each infection, accompanied by a relatively static routine that encrypts and decrypts that payload. When analysts hunt for polymorphic code, they look for this unchanging routine to locate the shifting sections.
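The idea can be illustrated with a toy sketch. Here, single-byte XOR over a harmless string stands in for the mutable encryption layer (real polymorphic engines use far stronger ciphers and obfuscation); changing the key changes the stored bytes and their hash-based signature, while the static XOR routine always recovers the same payload:

```python
import hashlib

def xor_bytes(data: bytes, key: int) -> bytes:
    """Single-byte XOR is its own inverse: the same routine encrypts and decrypts."""
    return bytes(b ^ key for b in data)

# Stand-in for the hidden logic -- a harmless string, not real malware.
payload = b"harmless demo payload"

# Each "generation" picks a different key, so the stored bytes -- and any
# signature computed over them -- change, while the recovered payload is identical.
for key in (0x11, 0x42, 0x99):
    blob = xor_bytes(payload, key)
    print(hashlib.sha256(blob).hexdigest()[:16], xor_bytes(blob, key) == payload)
```

Each generation prints a different signature prefix followed by `True`, which is exactly why signature matching alone struggles: the unchanging part is the XOR routine, not the stored bytes.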
What are some modern examples of polymorphic malware?
- VIRLOCK: Infected cloud storage and connected apps by converting affected files into polymorphic malware
- URSNIF: Found its way into machines through phishing emails and malicious links
- VOBFUS: Duplicated itself on removable peripherals connected to affected hardware
How has generative AI changed the landscape?
Generative AI is fertile ground for polymorphic malware. AI threats are a critical concern in cybersecurity, especially since tainted code can emerge from an ordinary prompt. Amateur coders and professionals alike may expose themselves to malware if they leave AI-generated code unverified.
While AI has frequently helped generate benign code, in 2023 experts discovered it was possible to create polymorphic malware within ChatGPT. Researchers found ways to prompt the generative AI without explicitly asking for malicious content. With creative inputs, they could generate many mutations of the harmful output with little resistance.
The testing revealed how simple it is to disguise exploitative and dangerous prompts by delivering them in an unusual order or asking for code in unintuitive combinations. Other researchers took this a step further and created a proof of concept called BlackMamba to show how generative AI could fuel the spread of polymorphic malware.
BlackMamba communicates with OpenAI's API in a way that obscures its intentions. At runtime, it generates a keylogger to capture the targeted user's keystrokes and exfiltrates them through an unexpected channel: Microsoft Teams. Testers chose this application to see how easily information could be extracted through a trusted messaging tool, and the attack succeeded without setting off any alerts. The proof of concept shows how cybercriminals could create an automated cycle of polymorphic malware output.
How can analysts defend against polymorphic malware?
Analysts can respond to polymorphic malware with common defensive strategies and a few more curated for this specific threat variant.
Use malware detection tools
Different software is available to help automate incident response and malware detection. These tools typically offer three analysis modes: static, dynamic and hybrid. Each has a specialty, such as scanning filenames or reverse-engineering malicious software. Many programs also offer behavior-based detection, monitoring users and their interactions to find anomalies the software deems suspicious.
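At its simplest, behavior-based detection compares observed activity against a baseline and flags whatever deviates too far. The sketch below uses hypothetical event names and a plain frequency threshold; production tools use far richer models, but the core comparison looks like this:

```python
from collections import Counter

def flag_anomalies(events, baseline, tolerance=3.0):
    """Flag event types occurring far more often than their baseline rate.

    events: list of event-type strings observed in the current window.
    baseline: dict mapping event type -> expected count per window.
    tolerance: multiplier over baseline before an event is flagged.
    """
    counts = Counter(events)
    return sorted(
        event for event, count in counts.items()
        if count > baseline.get(event, 0) * tolerance
    )

# A burst of file writes stands out against normal activity.
observed = ["file_write"] * 40 + ["login"] * 2 + ["registry_edit"]
baseline = {"file_write": 5, "login": 3, "registry_edit": 1}
print(flag_anomalies(observed, baseline))  # ['file_write']
```

Because this watches what the code does rather than what it looks like, it remains useful even when a polymorphic sample's bytes never repeat.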
Employ least-privilege and zero-trust architecture (ZTA)
ZTA often incorporates least-privilege practices to minimize access to secure networks. The ZTA principle that no user should be trusted by default is reinforced by practical strategies, such as role-based controls, which limit the data each person can reach. Business-critical systems and sensitive information must stay behind digital doors to which only a handful of people, or a single person, hold the key.
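Role-based, deny-by-default access control can be sketched in a few lines. The roles and permissions below are hypothetical placeholders; the point is that every action is checked against an explicit allowlist, and anything unlisted is refused:

```python
# Hypothetical role-to-permission mapping; each role grants only what the job needs.
ROLE_PERMISSIONS = {
    "analyst": {"read_logs"},
    "engineer": {"read_logs", "deploy_code"},
    "admin": {"read_logs", "deploy_code", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read_logs"))     # True
print(is_allowed("analyst", "manage_users"))  # False
print(is_allowed("intern", "read_logs"))      # False -- unknown role, so deny
```

The deny-by-default shape matters more than the specific roles: if polymorphic malware hijacks an account, it inherits only that account's narrow permissions.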
Leverage endpoint detection and response (EDR)
EDR analyzes specific devices, or the endpoints, in real time. EDR provides constant monitoring, identifying threat types and their frequencies. It keeps records of each attempt in secure logs so analysts can research each instance. EDR expands the team’s awareness of attack scopes and varieties.
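One property such secure logs often need is tamper evidence. A minimal sketch, assuming hash chaining (each record stores the hash of the one before it, so altering any earlier entry breaks every later hash); real EDR products layer signing, transport and storage controls on top:

```python
import hashlib
import json
import time

class SecureLog:
    """Append-only log where each record chains the previous record's hash."""

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, endpoint: str, threat_type: str):
        record = {
            "endpoint": endpoint,
            "threat_type": threat_type,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,
        }
        # Hash the record body deterministically, then attach the digest.
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._last_hash
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute every hash in order; any edit breaks the chain."""
        prev = "0" * 64
        for record in self.records:
            if record["prev_hash"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["hash"] != expected:
                return False
            prev = record["hash"]
        return True

log = SecureLog()
log.append("laptop-07", "polymorphic")
log.append("server-02", "phishing")
print(log.verify())  # True
```

If an attacker later rewrites `threat_type` in the first record, `verify()` returns `False`, which is the property that lets analysts trust the incident history they research.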
Maintain updates
Many breaches occur because hackers find backdoors and vulnerabilities in antiquated software or neglected systems. Around 91% of companies have vulnerable applications, and 57% of those issues are never fixed.
Teams should regularly update as many assets as possible so they have the best chance of staying secure against polymorphic malware. Teams can also vet third-party providers to see how they maintain their products and whether that upkeep meets the organization's security standards.
Adapting to adaptive malware
Polymorphic malware is resourceful and malleable, changing with its environment to spread as deeply into a system as possible. Analysts and organizations must be equally savvy against these threats. Especially with generative AI in the mix, they will only become sneakier and harder to recover from once they find a vulnerability. Begin implementing defensive measures today, before one of these threats arrives as a surprise.