August 26, 2019
A honeytrap for malware.
I haven’t seen the sixth Mission Impossible movie – and I don’t think I will. I sat through the fifth – in a suitably zombified state (returning home on a long-haul flight after a tough week’s business) – but only because one scene in it was shot in our shiny new modern London office. And that was one Mission Impossible installment too many really. Nope – not for me. Slap, bang, smash, crash, pow, wow. Oof. Nah, I prefer something a little more challenging, thought-provoking and just plain interesting. After all, I have precious little time as it is!
I really am giving Tom Cruise and Co. a major dissing here, aren’t I? But hold on. I have to give them their due for at least one scene done really rather well (i.e., thought-provoking and plain interesting!). It’s the one where the good guys need to get a bad guy to rat on his bad-guy colleagues, or something like that. So they set up a fake environment in a ‘hospital’ with ‘CNN’ on the ‘TV’ and have ‘CNN’ broadcast a news report about atomic Armageddon. Suitably satisfied his apocalyptic manifesto had been broadcast to the world, the baddie gives up his pals (or was it a login code?) in the deal arranged with his interrogators. Oops. Here’s the clip.
Why do I like this scene so much? Because, actually, it demonstrates really well one of the methods of detecting… unseen-before cyberattacks! There are in fact many such methods – they vary depending on area of application, effectiveness, resource use, and other parameters (I write about them regularly here) – but there is one that always seems to stand out: emulation (about which I’ve also written plenty here before).
As in the film, the emulator launches the object under investigation in an isolated, artificial environment that encourages it to reveal its maliciousness.
But there’s one serious downside to such an approach – the very fact that the environment is artificial. The emulator does its best to make that artificial environment as close as possible to a real operating-system environment, but ever-smarter malware still manages to tell it apart from the real thing. The emulator observes how the malware recognized it, regroups, improves its emulation – and on and on in a never-ending cycle that regularly opens a window of vulnerability on the protected computer. The fundamental problem is that the emulator tries its best to look like a real OS, but never quite manages to be the spitting image of one.
On the other hand, there’s another solution to the task of behavioral analysis of suspicious objects – analysis… on a real operating system – one on a virtual machine! Well why not? If the emulator never quite fully cracks it, let a real – albeit virtual – machine have a go. It would be the ideal ‘interrogation’ – conducted in a real environment, not an artificial one, but with no real negative consequences.
On hearing about this concept, some may rush to ask why it wasn’t thought of before. After all, emulation has been in the tech-mainstream since 1992 (!) already. Well, it turns out it’s not so simple.
First: analysis of suspicious objects on a virtual machine is a resource-intensive process, suited only to heavyweight enterprise-grade security solutions – where scanning needs to be super intensive so that absolutely zero maliciousness gets through the defenses. Alas, for home computers – much less smartphones – this technology’s not suitable… yet.
Second: actually, we do use this technology – internally here at the Kompany: we use it for internal investigations. But to be used as a market-ready product, we feel it’s just too early yet. Competitors have released similar products, but their effectiveness leaves a lot to be desired. As a rule such products are limited to just collecting logs and basic analysis that can only be called… a sieve!
Third: launching a file on a virtual machine is just the beginning of a very long and tricky process. After all, the aim of the exercise is to have the maliciousness of an object reveal itself, and for that you need: a smart hypervisor, logging and analysis of behavior, constant fine-tuning of the templates of dangerous actions, protection from anti-emulation tricks, execution optimization, and much more besides. And it’s here where, without false modesty, I can announce that we truly are way ahead – of the whole planet!
Recently we were granted a U.S. patent (US10339301) covering the creation of a suitable environment for a virtual machine for conducting deep, rapid analysis of suspicious objects. This is how it works:
- Virtual machines are created (for different types of objects) with pre-installed settings that ensure both their optimal execution and a maximally high detection rate.
- The hypervisor of a virtual machine works in tandem with system logging of an object’s behavior and system analysis thereof, helped out by updatable databases of templates of suspicious behavior, heuristics, the logic of reactions to actions, and more.
- Should suspicious actions be detected, the analysis system makes on-the-fly changes to the process of execution of the object on the virtual machine in order to encourage the object to show its malicious intentions. For example, the system can create files, amend the registry, speed up time, and so on.
The last of these three points is the most unique and delicious feature of the technology. Let me give you an example to show you how.
The system detects that a launched file has ‘fallen asleep’ and no longer shows any activity. That’s because the object can be programmed to quietly do nothing for several minutes – or several dozen minutes, or hours – before beginning its malicious activity. While it dozes, we speed up time on-the-fly inside the virtual machine so that one, three or five minutes pass per real second. The functionality of the file being analyzed doesn’t change, while the wait is shortened by a factor of hundreds (or thousands). And if, after its ‘snooze’, the malware decides to check the system clock (has it been ticking?), it will be fooled into thinking it has, and will continue with its malicious mission – exposing itself in the process.
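To make the time-acceleration trick concrete, here’s a minimal Python sketch (all the names here are hypothetical, and a real sandbox does this at the hypervisor level, not in Python): the ‘clock’ the sample sees is virtual, so its half-hour nap costs the analysis almost nothing.

```python
import time

class VirtualClock:
    """A fake clock for the sandbox: 'sleeping' advances virtual
    time instantly instead of stalling the analysis for real."""
    def __init__(self):
        self.now = 0.0  # virtual seconds since the VM started

    def sleep(self, seconds):
        # Fast-forward: the sample 'waits', but no real time passes.
        self.now += seconds

    def time(self):
        return self.now

def suspicious_sample(clock):
    """Toy stand-in for malware that naps before acting."""
    start = clock.time()
    clock.sleep(30 * 60)                 # quietly do nothing for 30 minutes
    if clock.time() - start >= 30 * 60:  # has the clock really been ticking?
        return "payload executed"        # fooled: it reveals itself
    return "still waiting"

clock = VirtualClock()
real_start = time.monotonic()
result = suspicious_sample(clock)
real_elapsed = time.monotonic() - real_start
# result is "payload executed"; real_elapsed is near zero
```

The sample’s own logic is untouched – it slept its full thirty minutes as far as it can tell – which is exactly the point.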
Another example:
The object exploits a vulnerability in a specific library, or tries to change the contents of this or that file or registry key. First, using the regular fopen() function, it tries to open the library (or file, or registry key), and if it fails (the library’s not there, or there are no access rights to the file) – it simply gives up. In such a scenario we change (on-the-fly) the return value of the fopen() call from ‘file absent’ to ‘file exists’ (or, if necessary, create the file itself and fill it with appropriate content), then simply observe what the object does.
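A toy sketch of the idea, assuming a hypothetical hook layer (the file names and contents are invented; a real sandbox intercepts the actual API calls inside the VM). The hooked open never reports ‘file absent’, so the sample carries on instead of giving up – and logs everything it touches along the way.

```python
observed = []  # log of everything the sample touches

# Invented path and content, served so the sample has something to find.
PLAUSIBLE_CONTENT = {"C:/target/config.ini": "[settings]\nkey=value\n"}

def hooked_fopen(path):
    """Sandbox stand-in for fopen(): instead of failing when a file
    is missing, claim it exists and serve plausible content."""
    observed.append(("fopen", path))
    return PLAUSIBLE_CONTENT.get(path, "")  # never None: always 'exists'

def suspicious_sample():
    handle = hooked_fopen("C:/target/config.ini")
    if handle is None:       # on a bare VM this would be the dead end
        return "gave up"
    observed.append(("tamper", "C:/target/config.ini"))
    return "tampered with config"

result = suspicious_sample()
# result is "tampered with config"; observed holds the full access log
```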
Such an approach also works really well when the object’s behavior follows a logic tree. For example: if file A and file B both exist, then file C is modified and the job’s done. But it’s not known what the file under investigation will do if only one of file A or file B exists. So we launch a parallel iteration, ‘tell’ the file that A exists but B doesn’t, and analyze the resulting branch of the logic tree.
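The parallel-iteration idea can be sketched like this (the sample’s logic and the ‘downloaded B’ branch are invented for illustration): run one iteration per combination of facts about the environment, and record which branch each combination triggers.

```python
from itertools import product

def sample_logic(env):
    """Toy logic tree: modify C only if both A and B exist;
    a hidden branch fires when only A is present."""
    if env.get("A") and env.get("B"):
        return "modified C"
    if env.get("A"):
        return "downloaded B"   # the branch we'd never see otherwise
    return "exited quietly"

# One parallel iteration per combination of 'A exists' / 'B exists'.
branches = {}
for a, b in product([True, False], repeat=2):
    branches[(a, b)] = sample_logic({"A": a, "B": b})
# branches now maps each environment variant to the behavior it triggers
```

Mapping out every branch this way exposes behavior that a single run on a single machine configuration would never reach.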
What’s important to note is that the rules governing reactions to the file’s execution are configured in external, easily updatable databases. To add new logic there’s no need to redevelop the whole engine: all that’s needed is to correctly describe the multitude of possible scenarios of malicious behavior and push a one-click update.
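A minimal sketch of that rules-as-data idea (the JSON schema and trigger names are invented – the real databases are, of course, far richer): the engine stays fixed, while the rule set it consults can be swapped out with an update.

```python
import json

# Hypothetical rule database: in reality this lives in an external,
# updatable file, not a string embedded in the engine.
RULES_JSON = """
[
  {"trigger": "sleep_called", "reaction": "accelerate_time"},
  {"trigger": "file_missing", "reaction": "fabricate_file"}
]
"""

def load_rules(raw):
    """Parse the external database into a trigger -> reaction map."""
    return {r["trigger"]: r["reaction"] for r in json.loads(raw)}

def react(rules, event):
    """The engine itself never changes; only the rule data does.
    Unknown events fall back to plain logging."""
    return rules.get(event, "log_only")

rules = load_rules(RULES_JSON)
# react(rules, "sleep_called") -> "accelerate_time"
```

Adding a new scenario means appending one entry to the JSON, not touching `react()` – which is the whole point of keeping the rules external.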
And that, in a nutshell, is how this technology works. It will soon be added to KATA, and also delivered to the market in the form of separate solutions for enterprise purchasers of Kaspersky Sandbox.
Any questions? Please fire away in the comments.