I haven’t seen the sixth Mission Impossible movie – and I don’t think I will. I sat through the fifth – in a suitably zombified state (returning home on a long-haul flight after a tough week’s business) – but only because one scene in it was shot in our shiny new modern London office. And that was one Mission Impossible installment too many, really. Nope – not for me. Slap, bang, smash, crash, pow, wow. Oof. Nah, I prefer something a little more challenging, thought-provoking and just plain interesting. After all, I have precious little time as it is!
I really am giving Tom Cruise and Co. a major dissing here, aren’t I? But hold on. I have to give them their due for at least one scene done really rather well (i.e., thought-provoking and just plain interesting!). It’s the one where the good guys need to get a bad guy to rat on his bad-guy colleagues, or something like that. So they set up a fake environment in a ‘hospital’ with ‘CNN’ on the ‘TV’, and have ‘CNN’ broadcast a news report about atomic Armageddon. Suitably satisfied that his apocalyptic manifesto has been broadcast to the world, the baddie gives up his pals (or was it a login code?) in the deal arranged with his interrogators. Oops. Here’s the clip.
Why do I like this scene so much? Because, actually, it demonstrates really well one of the methods of detecting… unseen-before cyberattacks! There are in fact many such methods – they vary depending on area of application, effectiveness, resource use, and other parameters (I write about them regularly here) – but there is one that always seems to stand out: emulation (about which I’ve also written plenty here before).
Just like in the film, the emulator launches the object under investigation in an isolated, artificial environment, which encourages the object to reveal its maliciousness.
But there’s one serious downside to such an approach – the very fact that the environment is artificial. The emulator does its best to make that artificial environment as close as possible to a real operating system environment, but ever-smarter malware still manages to tell it apart from the real thing. The emulator then observes how the malware recognized it, regroups, improves its ‘emulation’, and so on, round and round in a never-ending cycle that regularly opens a window of vulnerability on a protected computer. The fundamental problem is that, no matter how hard the emulator tries to look like a real OS, it never quite manages to be the spitting image of one.
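To make that cat-and-mouse game a little less abstract, here’s a minimal sketch (purely illustrative: the checks and thresholds are my own assumptions, not taken from any real malware or from our products) of the kind of environment ‘sanity checks’ an emulator-aware sample might run before deciding whether to show its true colors:

```python
import os
import time


def looks_like_real_machine() -> bool:
    """The kind of environment checks emulator-aware malware is known to use.
    Purely illustrative: the specific checks and thresholds are assumptions
    made up for this sketch."""
    checks = []

    # A real user's machine these days usually has more than one CPU core.
    checks.append((os.cpu_count() or 1) >= 2)

    # A real user's home directory is rarely almost empty.
    try:
        checks.append(len(os.listdir(os.path.expanduser("~"))) > 10)
    except OSError:
        checks.append(False)

    # Emulators often 'fast-forward' sleeps to save time; a real OS actually waits.
    start = time.perf_counter()
    time.sleep(0.5)
    checks.append(time.perf_counter() - start >= 0.45)

    return all(checks)


if __name__ == "__main__":
    # A real sample would only detonate its payload if this returned True;
    # here we just print the verdict.
    print("Looks like a real machine:", looks_like_real_machine())
```

Each individual check is trivial for an emulator to counter, and that’s precisely the point: every time it does, the malware writers add another, and off we go round the cycle again.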
On the other hand, there’s another solution to the task of behavioral analysis of suspicious objects – analysis… on a real operating system – one running on a virtual machine! Well, why not? If the emulator never quite fully cracks it, let a real – albeit virtual – machine have a go. It would be the ideal ‘interrogation’ – conducted in a real environment, not an artificial one, but with no real negative consequences.
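The core of that idea can be sketched in just a few lines (assuming, purely hypothetically, a VirtualBox guest named analysis-win10 with a clean snapshot called clean; delivering the sample and the monitoring agent into the guest, and pulling the behavior log back out, are left out for brevity):

```python
import subprocess
import time

# Hypothetical names -- swap in your own VM and snapshot.
VM = "analysis-win10"
SNAPSHOT = "clean"
RUN_TIME = 120  # seconds to let the sample misbehave


def vbox(*args: str) -> None:
    """Thin wrapper around the VBoxManage command-line tool."""
    subprocess.run(["VBoxManage", *args], check=True)


def analyse_once() -> None:
    # 1. Roll the guest back to a known-clean state.
    vbox("snapshot", VM, "restore", SNAPSHOT)

    # 2. Boot the real (albeit virtual) operating system.
    vbox("startvm", VM, "--type", "headless")

    # 3. Let the in-guest agent detonate the sample and record its behavior
    #    (sample delivery and log collection are omitted in this sketch).
    time.sleep(RUN_TIME)

    # 4. Whatever the sample did, it dies with the machine.
    vbox("controlvm", VM, "poweroff")


if __name__ == "__main__":
    analyse_once()
```

Whatever the object gets up to inside, it does so on a real OS; and once the machine is powered off (and rolled back to its clean snapshot before the next run), the consequences simply evaporate.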
Read on…