Cybernews: If Aramco had our Antidrone…; and honeypots to make IoT malware stop!

Hi folks!

Recently there was a Cyber News from the Dark Side item of oh-my-Gulf proportions. You’ll no doubt have heard about it as it was all over the news for days just recently. It was the drone attack on Saudi Aramco that took out millions of barrels of crude per day and caused hundreds of millions of dollars in damage.

Alas, I’m afraid this is only the beginning. Remember those drones bringing Heathrow – or was it Gatwick? – to a standstill a while back? Well this is just a natural progression. There’ll be more, for sure. In Saudi, the Houthis claimed responsibility, but both Saudi and the US blame Iran; Iran denies responsibility. In short – same old saber-rattling in the Middle East. But that’s not what I want to talk about here – that’s geopolitics, which we don’t do, remember? ) No, what I want to talk about is that, as the finger-pointing continues, in the meantime we’ve come up with a solution to stop drone attacks like this one on Aramco. Soooo, ladies and gents, I hereby introduce to the world… our new Antidrone!

So how does it work?

The device works out the coordinates of a moving object, a neural network determines whether it’s a drone, and, if it is, blocks the connection between the drone and its remote controller. As a result, the drone either returns to where it was launched, or lands on the spot where it was intercepted. The system can be stationary, or mobile – e.g., for installation on a motor vehicle.
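
The detect → classify → jam pipeline described above can be sketched very roughly in code. To be clear: this is a toy illustration of the logic only, not our actual implementation – all the names, thresholds, and the stand-in ‘classifier’ below are invented:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Coordinates of a moving object reported by the scanner."""
    object_id: int
    x: float
    y: float
    z: float  # altitude in meters (hypothetical units)

def classify(track: Track) -> float:
    """Stand-in for the neural network: returns the probability that
    the tracked object is a drone. A real system would feed video
    frames of the object into a trained model."""
    # Toy heuristic purely for illustration: low-flying objects score high.
    return 0.9 if track.z < 500 else 0.1

def handle(track: Track, jam_threshold: float = 0.8) -> str:
    """Decide what to do with a tracked object."""
    if classify(track) >= jam_threshold:
        # Block the link between drone and operator; the drone then
        # either returns to its launch point or lands on the spot.
        return "jam"
    return "ignore"

print(handle(Track(object_id=1, x=10.0, y=20.0, z=120.0)))   # low flyer: jam
print(handle(Track(object_id=2, x=10.0, y=20.0, z=9000.0)))  # e.g., an airliner: ignore
```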

The main focus of our antidrone is the protection of critically important infrastructure, airports, industrial facilities, and other property. The Saudi Aramco incident highlighted how urgently such technology is needed to prevent similar incidents, and it’s only going to become more so: in 2018 the world market for drones was estimated at $14 billion; by 2024 it’s forecast to be $43 billion!

Clearly the market for protection against maliciously-minded drones is going to grow too – fast. However, at the moment, our Antidrone is the only one on the Russian market that can detect objects by video using neural networks, and the first in the world to use laser scanning for tracking down the location of drones.

Read on…

If I had a dollar for every time I’ve been asked this question in 30 years…

Hi folks!

Can you guess what question I’m asked most of all during interviews and press conferences?

It started being asked back in the 1990s, quickly becoming the dreaded question that used to make me want to roll my eyes (I resisted the temptation :). Then after a few years I decided to simply embrace its inevitability, and started to improvise a bit and add extra detail to my answers. And still today, though my answers have been published and broadcast in probably all the mass media in the whole world – often more than once – I’m asked it over and over, again and again. Of late though, it’s like I’ve come full circle: when I’m asked it, I actually like to remember those days of long ago!

So, worked it out yet?

The question is: ‘What was the first virus you found?’ (plus questions relating to it, like when did I find it, how did I cure the computer it had infected, etc.).

Clearly, an important question: had that virus not infected my computer all those years ago, I might never have made a rather drastic career change, never created the best antivirus in the world, never built one of the largest private companies in cybersecurity, and a lot more besides. So yes, a fateful role did that virus play – a virus that was among the early harbingers of what was to follow: billions of its ‘descendants’, then, later, cybercrime, cyberwarfare, cyber-espionage, and all the cyber-bad-guys behind it all – in every corner of the globe.

Anyway – the answer finally, perhaps?…

The virus’s name was Cascade.

But, why, suddenly, all the nostalgia about this virus?

Read on…

Threat Intelligence Portal: We need to go deeper.

I understand perfectly well that for 95% of you this post will be of no use at all. But for the remaining 5%, it has the potential to greatly simplify your working week (and many working weekends). In other words, we’ve some great news for cybersecurity pros – SOC teams, independent researchers, and inquisitive techies: the tools that our woodpeckers and GReAT guys use on a daily basis to keep churning out the best cyberthreat research in the world are now available to all of you, and free at that, with the lite version of our Threat Intelligence Portal. It’s sometimes called TIP for short, and after I’ve said a few words about it here, immediate bookmarking will be mandatory!

The Threat Intelligence Portal solves two main problems for today’s overstretched cybersecurity expert. First: ‘Which of these several hundred suspicious files should I choose first?’; second: ‘Ok, my antivirus says the file’s clean – what’s next?’

Unlike the ‘classics’ – Endpoint Security–class products, which return a concise clean/dangerous verdict – the analytic tools built into the Threat Intelligence Portal give detailed information about how suspicious a file is, and in which specific aspects. And not only files: hashes, IP addresses, and URLs can be thrown in too for good measure. All these items are quickly analyzed by our cloud, and the results for each handed back on a silver platter: what’s bad about them (if anything), how rare the infection is, what known threats they even remotely resemble, what tools were used to create them, and so on. On top of that, executable files are run in our patented cloud sandbox, with the results made available in a couple of minutes.
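
To give a feel for how such a lookup flow fits into an analyst’s routine, here’s a minimal sketch. The portal’s real API isn’t shown here – the ‘feed’, the verdict fields, and the scoring below are all invented for illustration – but the principle stands: hash the file, look the hash up, and sort the queue by suspicion to answer the ‘which of these hundreds of files first?’ question:

```python
import hashlib

# A toy 'feed' standing in for the portal's cloud knowledge base.
# The hash key and the verdict fields are invented purely for illustration.
FAKE_FEED = {
    hashlib.sha256(b"dropper.exe contents").hexdigest():
        {"verdict": "malicious", "rarity": "common", "score": 95},
}

def sha256_of(data):
    """Lookups are done by hash, so for a first check the file itself
    never has to leave your machine."""
    return hashlib.sha256(data).hexdigest()

def triage(samples):
    """Answer 'which of these files should I look at first?' by sorting
    on the (invented) suspicion score. Unknown files get a neutral 50 -
    i.e., 'no verdict yet; consider detonating it in the sandbox'."""
    scored = []
    for name, data in samples.items():
        info = FAKE_FEED.get(sha256_of(data), {"score": 50})
        scored.append((name, info["score"]))
    return sorted(scored, key=lambda s: s[1], reverse=True)

queue = triage({
    "invoice.exe": b"dropper.exe contents",  # known-bad in the toy feed
    "report.docx": b"quarterly figures",     # unknown to the toy feed
})
print(queue)  # most suspicious first
```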

Read on…

i-Closed-architecture and the illusion of unhackability.

The end of August brought us quite a few news headlines around the world on the cybersecurity of mobile operating systems; or rather – the lack of cybersecurity of mobile operating systems.

First up there was the news that iPhones have been getting attacked for a full two years (!) via a full 14 vulnerabilities (!) in iOS-based software. To be attacked, all a user had to do was visit one of several hacked websites – nothing more – and they’d never know anything about it.

But before all you Android heads start with the ‘nah nana nah nahs’ aimed at the Apple brethren, the very same week the iScandal broke, it was reported that Android devices had been targeted by (possibly) some of the same hackers who had been attacking iPhones.

It would seem this news is just the latest in a very long line of confirmations that, no matter what the OS, vulnerabilities will always be found in it that can be exploited by certain folks – be they individuals, groups of individuals, or even countries (via their secret services). But there’s more to this news: it brings about a return to the discussion of the pros and cons of closed-architecture operating systems like iOS.

Let me first quote a tweet that perfectly describes the state of cybersecurity in the iEcosystem:

In this case Apple was real lucky: the attack was discovered by white-hat hackers at Google, who privately gave the iDevelopers all the details, who in turn bunged up the holes in their software, and half a year later (when most of their users had already updated their iOS) told the world about what had happened.

Question #1: How quickly would the company have been able to solve the problem if the information had gone public before the release of the patch?

Question #2: How many months – or years – earlier would these holes have been found by independent cybersecurity experts if they had been allowed access to the diagnostics of the operating system?

To be frank, what we’ve got here is a monopoly on research into iOS. Both the search for vulnerabilities and the analysis of apps are made much more difficult by the excessively closed nature of the system. The result is almost complete silence on the iOS security front. But that silence does not actually mean everything’s fine; it just means that no one actually knows what’s really going on in there – inside those very expensive shiny slabs of aluminum and glass. Not even Apple itself…

This state of affairs allows Apple to continue to claim it has the most secure OS. Of course it can – since no one knows what’s inside the box. Meanwhile, as time passes – and still no independent experts can meaningfully analyze what’s inside that box – hundreds of millions of users are left waiting, helpless, until the next wave of attacks hits iOS. Or, put another way – in pictures…:

Now, Apple, to its credit, does put a lot of time and money into increasing the security and confidentiality of its products and ecosystem on the whole. Thing is, there isn’t a single company – no matter how large and powerful – that can do what the whole world community of cybersecurity experts can combined. Moreover, the most bandied-about argument for iOS being closed to third-party security solutions is that any access for independent developers to the system would represent a potential attack vector. But that is just nonsense!

Discovering vulnerabilities and flagging bad apps is possible with read-only diagnostic technologies, which can expose malicious anomalies by analyzing system events. Yet such apps are being firmly expelled from the App Store! I can’t see any good reason for this besides fear of losing the ‘iOS research monopoly’… oh, and of course the ability to keep pushing the message that iOS is the most secure mobile platform. And this is why, when iUsers ask me how they’re supposed to actually protect their iDevices, I have just one simple stock answer: all they can do is pray and hope – because the whole global cybersecurity community just ain’t around to help ).
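
To illustrate what ‘read-only diagnostics’ means in practice, here’s a minimal sketch. The event log, the process names, and the detection rule below are all invented for illustration; the point is the principle: the analyzer only reads a stream of system events and flags suspicious patterns – it never modifies anything on the device:

```python
# An invented event log, standing in for the OS's own event stream.
EVENTS = [
    {"process": "notes_app",  "action": "read",    "target": "user_notes.db"},
    {"process": "flashlight", "action": "read",    "target": "address_book"},
    {"process": "flashlight", "action": "connect", "target": "203.0.113.7:4444"},
]

def flag_anomalies(events):
    """A toy read-only rule: a process that reads the address book and
    then opens an outbound connection is flagged for human review.
    Note the function only inspects events - it changes nothing."""
    read_contacts = {e["process"] for e in events
                     if e["action"] == "read" and e["target"] == "address_book"}
    flagged = []
    for e in events:
        if e["action"] == "connect" and e["process"] in read_contacts:
            flagged.append((e["process"], e["target"]))
    return flagged

print(flag_anomalies(EVENTS))  # the 'flashlight' app gets flagged
```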

Cyber-news: nuclear crypto mining.

Hi folks!

The i-news section is back with a bang after the summer holidays. Straightaway there’s some hot industrial cybersecurity news.

In case anybody missed my posts about how I spent this summer, here you go. Meanwhile, how some of the personnel at the South Ukraine Nuclear Power Plant spent their summer was reported in recent crime-related news. Ukraine’s Security Service (SBU) recently shut down a cryptocurrency-mining operation at the power plant’s restricted-access facilities. This, erm, extra-curricular activity resulted in the leak of top-secret information about the plant’s physical security. That’s not only pretty depressing, but downright scary.

According to expert forecasts, the ICS market is set to reach $7 billion by 2024. Attacks on critical infrastructure are increasingly hitting the headlines. The recent Venezuela blackout, for example, immediately looked suspicious to me, and just a couple of days later it was announced that it was caused by a cyberattack.

This July, in collaboration with the ARC Advisory Group, we published a lengthy report on the state of things in the industrial cybersecurity sphere. It’s a good read, with lots of interesting stuff in there. Here’s a number for you to ponder: in 2018, 52% of industrial cybersecurity incidents were caused by staff errors – or, in other words, by the notorious human factor. Behind this number lies a whole host of problems, including a shortage of professionals to fill key jobs, a lack of technical awareness among employees, and insufficient cybersecurity budgets. Go ahead and read the report – it’s free :)

Attention all those interested in industrial cybersecurity: you still have a few days (till August 30) to sign up for our annual Kaspersky Industrial Cybersecurity Conference 2019. This year, it’s being held from September 18-20 in Sochi, Russia. There’ll be presentations by over 30 international ICS experts, including yours truly. So, see you soon in sunny Sochi to talk about some serious problems and ways to deal with them!

A honeytrap for malware.

I haven’t seen the sixth Mission Impossible movie – and I don’t think I will. I sat through the fifth – in a suitably zombified state (returning home on a long-haul flight after a tough week’s business) – but only because one scene in it was shot in our shiny new modern London office. And that was one Mission Impossible installment too many, really. Nope – not for me. Slap, bang, smash, crash, pow, wow. Oof. Nah, I prefer something a little more challenging, thought-provoking, and just plain interesting. After all, I have precious little time as it is!

I really am giving Tom Cruise and Co. a major dissing here, aren’t I? But hold on. I have to give them their due for at least one scene done really rather well (i.e., thought provoking and plain interesting!). It’s the one where the good guys need to get a bad guy to rat on his bad-guy colleagues, or something like that. So they set up a fake environment in a ‘hospital’ with ‘CNN’ on the ‘TV’ and have ‘CNN’ broadcast a news report about atomic Armageddon. Suitably satisfied his apocalyptic manifesto had been broadcast to the world, the baddie gives up his pals (or was it a login code?) in the deal arranged with his interrogators. Oops. Here’s the clip.

Why do I like this scene so much? Because, actually, it demonstrates really well one of the methods of detecting… unseen-before cyberattacks! There are in fact many such methods – they vary depending on area of application, effectiveness, resource use, and other parameters (I write about them regularly here) – but there is one that always seems to stand out: emulation (about which I’ve also written plenty here before).

Like in the film, the emulator launches the object being investigated in an isolated, artificial environment, which encourages it to reveal its maliciousness.

But there’s one serious downside to this approach – the very fact that the environment is artificial. The emulator does its best to make that artificial environment as close as possible to a real operating system, but ever-smarter malware still manages to tell the two apart. The emulator observes how the malware recognized it, regroups, and improves its emulation – and so on, in a never-ending cycle that regularly opens a window of vulnerability on the protected computer. The fundamental problem is that the emulator tries its best to look like a real OS, but never quite becomes the spitting image of one.
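
This cat-and-mouse dynamic can be boiled down to a toy example. The probes below are invented, simplified stand-ins for real evasion tricks: the sample checks the environment for tell-tale signs of artificiality, and refuses to misbehave if any probe fires:

```python
def looks_artificial(env):
    """Toy stand-ins for real evasion checks: a just-booted machine,
    a single CPU core, or an empty documents folder all hint at an
    analysis environment rather than a lived-in user PC."""
    return (
        env.get("uptime_seconds", 0) < 300
        or env.get("cpu_cores", 1) < 2
        or env.get("documents_count", 0) == 0
    )

def run_sample(env):
    """What a cautious sample does: play dead inside anything that
    smells like an emulator, detonate on what looks like a real PC."""
    return "benign-looking sleep" if looks_artificial(env) else "payload"

emulator = {"uptime_seconds": 12, "cpu_cores": 1, "documents_count": 0}
real_pc  = {"uptime_seconds": 86400, "cpu_cores": 8, "documents_count": 412}
print(run_sample(emulator))  # the emulator sees nothing malicious
print(run_sample(real_pc))   # the victim gets the payload
```

Each time defenders make the emulated environment more convincing – faking uptime, CPU count, user files – samples add new probes; hence the never-ending cycle. Running the sample on a real OS inside a virtual machine shrinks the list of such tells.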

On the other hand, there’s another solution to the task of behavioral analysis of suspicious objects: analysis on a real operating system – one running on a virtual machine! Well, why not? If the emulator never quite fully cracks it, let a real – albeit virtual – machine have a go. It would be the ideal ‘interrogation’ – conducted in a real environment, not an artificial one, but with no real negative consequences.

Read on…

We SOCked it 2 ’em – and passed the SOC 2 audit!

Last year I told you how, as part of our Global Transparency Initiative, we had plans to undergo an independent audit to receive SOC 2 certification. Well, finally, we can announce that we did undergo this third party audit… and passed! Hurray! And it wasn’t easy: it took a lot of work by a great many of our K-folks. But now that’s all behind us, and I’m very proud that we’ve done it!

So what does this mysterious SOC abbreviation stand for, and (whatever it may be) why is it needed?

Ok. The abbreviation stands for Service Organization Controls, and SOC 2 is a report based on the ‘Trust Services principles and criteria’ of the American Institute of CPAs (AICPA) [CPA: Certified Public Accountants], which evaluates an organization’s information systems as relevant to security, availability, processing integrity, and confidentiality/privacy. Put another way, it’s a globally recognized standard for audits of information-risk control systems. Its main aim is to provide information on how effective a company’s control mechanisms are (so other companies can assess any risks associated with working with it).

We decided to seek SOC 2 to be able to confirm the reliability of our products and prove to our customers and partners that our internal processes correspond to the highest of international standards and that we’ve nothing to hide. The audit for us was conducted by one of the Big Four accounting firms (I can’t tell you which as per the respective contract’s terms and conditions, in case you were wondering). Over the past year different K-departments have been working closely with the auditors sharing with them all the information they’ve needed, and that includes R&D, IT, Information Security, and our internal audit team.

The final report, which we received this week, confirms the soundness of the internal control mechanisms used for our automatic AV database updates, and also that the process of developing and launching our antivirus databases is protected against unauthorized access. Hurray!

And if you’re a customer, partner or state regulator, please get in touch if you’d like to see a copy of the report.

That’s all for today folks, but I’ll be back tomorrow with a quick rewind back to STARMUS and some more detail of the presentations thereat.

Meanwhile, privyet, from…

Cyber-news from the dark side – cyber-hypocrisy, an eye for a Mirai, GCHQ-watching-you, and keeping BlueKeep at bay.

Hi folks!

Let’s kick off with some good news….

‘Most tested, most awarded’ – still ).

Just recently, the respected independent test lab AV-Comparatives released the results of its annual survey. Conducted at the end of 2018, the survey, er, surveyed 3000 respondents worldwide. One of the 19 questions asked of each respondent was: ‘Which desktop anti-malware security solution do you primarily use?‘ And guess which brand came top in the answers for Europe, Asia, and South/Central America? Yes: K! In North America we came second (and I’m sure that’s only temporary). In addition, in Europe we were chosen as the most frequently used security solution for smartphones. We’re also at the top of the list of companies whose products users most often ask to be tested, both in the ‘home’ segment and among antivirus products for business. Great! We like tests, and you can see why! Btw – here’s more detail on the independent tests and reviews that our products undergo.

“Thou hypocrite, first cast out the beam out of thine own eye;
and then shalt thou see clearly to cast the speck out of thy brother’s eye.”
Matthew 7:5

In May, yet another backdoor with features reeeaaal useful for espionage was discovered. In whose tech was the backdoor found? Russia’s? China’s? Actually – Cisco‘s (again)! Was there a hullabaloo about it in the media? Incessant front-page headlines and discussion about threats to national security? Talk of banning Cisco equipment outside the U.S., etc.? Oh, what, you missed it too?! Yet at the same time, Huawei’s international lynching is not only in full swing – it’s in full swing without such backdoors, and without any convincing evidence thereof whatsoever.

Read on…

The pig is back!

Hi folks!

Once upon a time, long, long ago, we had a pet pig. Not a real one – and it didn’t even have a name – but its squeal became a famous one. Now, those of you who’ve been using Kaspersky Lab products for decades will no doubt know what I’m referring to. For the relative newbies among you, let me let you in on the joke…

In the cyber-antiquity of the 1990s, we added a feature to our AV product: when it detected a virus, it gave out a loud piggy-squeal! Some folks hated it; others loved it!

But after a while, for one reason or another, the piggy squeal disappeared; as, incidentally, did the ‘K’ icon in the tray, replaced by a more modern and understandable symbol.

Now, any good company has a circle of devoted fans (we even have an official fan club), and we’re no exception. And many of these fans down the years have written to me imploring us to ‘Bring back the pig!’ or asking ‘Where’s the ‘K’ in the Taskbar gone?!’

Well not long ago, we figured that, if that’s what folks want, why not give it to them? And since these days customizing products is really simple… that’s just what we did. So, herewith, announcing…

the return of the piggy! :)

Right. So how do you actually go about activating its squeals and bringing back the ‘K’? Here’s how:

An update (19.0.0.1088(e) – which, btw, is internally codenamed ‘K icon and pig’!) was recently added to the latest versions of our personal products. And it works for all of them: KFA, KAV, KIS, KTS, KSC and KSOS.

All this talk of piggies and Ks… but might they affect the quality/speed/efficiency/effectiveness/whatever of our products? Simple answer: no – in no way at all. Nice. Right – back to all this talk of piggies and Ks…

Here are the instructions:

  1. Make sure you have update 19.0.0.1088(e) or later, with the default settings applied;
  2. Make sure you have Windows 7 or older (for example XP) (sorry folks, this doesn’t work on Windows 10);
  3. Right-click on the product’s icon in the Taskbar, choose ‘About’, and here we apply some magic…
  4. Now type IDKFA (in caps, like here);
  5. Next – download a test file (a file that pretends to be a virus): eicar test file;
  6. The file won’t download though (the product blocks it in the browser), and instead of the download window opening – you guessed it: piggy squeal;
  7. There’s another way of doing it: pause the protection. Bingo!
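
For the curious: the EICAR test file mentioned in step 5 is nothing mysterious – it’s a standard, harmless 68-character string that antivirus products deliberately treat as a ‘virus’ so that detection can be tested safely. A quick sketch of building it yourself (any text editor works just as well; save it to disk and a resident AV should pounce):

```python
# The standard 68-character EICAR test string: harmless by design,
# but detected as a 'virus' by antivirus products for testing purposes.
# (The backslash is doubled only because of Python string escaping.)
EICAR = (
    "X5O!P%@AP[4\\PZX54(P^)7CC)7}$"
    "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

print(len(EICAR))  # 68, exactly per the EICAR definition
```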

You can change the icon in exactly the same way, only you need to type IDDQD instead of IDKFA. Btw: if you type it a second time, the icon will revert back to the standard one.

And if you’re wondering why on earth you need to type IDDQD or IDKFA, check this out ).

So there you have it. The pig is back. As is the K! Well, we had to make up for the ‘Lab’ being dropped, right? )