Tag Archives: cyber warfare

Here’s to aggressive detection of maliciousness!

In recent years there's been all sorts written about us in the U.S. press, and the article last Thursday in the Wall Street Journal at first seemed to be just more of the same: the latest in a long line of conspiratorial smear-articles. Here's why it seemed so: according to anonymous sources, a few years ago Russian government-backed hackers allegedly stole secret documentation from the home computer of an NSA employee – supposedly with the help of a hack into the product of Your Humble Servant. Btw: our formal response to this story is here.

However, if you strip the article of the content regarding alleged Kremlin-backed hackers, there emerges an outline to a very different – believable – possible scenario, one in which, as the article itself points out, we are ‘aggressive in [our] methods of fighting malware’.

Ok, let’s go over the article again…

In 2015 a certain NSA employee – a developer working on the U.S. cyber-espionage program – decided to work from home for a bit and so copied some secret documentation onto his (her?) home computer, probably via a USB stick. Now, on that home computer he’d – quite rightly and understandably – installed the best antivirus in the world, and – also quite rightly – had our cloud-based KSN activated. Thus the scene was set, and he continued his daily travails on state-backed malware in the comfort of his own home.

Let’s go over that just once more…

So, a spy-software developer was working at home on that same spy-software, having all the instrumentation and documentation he needed for such a task, and protecting himself from the world's computer maliciousness with our cloud-connected product.

Now, what could have happened next? This is what:

Malware could have been detected as suspicious by the AV and sent to the cloud for analysis. For this is the standard procedure for handling any newly found malware – and by 'standard' I mean standard across the industry; all our competitors use similar logic in one form or another. And experience shows it's a very effective method for fighting cyberthreats (that's why everyone uses it).

So what happens with the data that gets sent to the cloud? In ~99.99% of cases, analysis of the suspicious objects is done by our machine learning technologies; if they're malware, they're added to our malware detection database (and also to our archive), and the rest goes in the bin. The remaining ~0.01% of data is sent for manual processing by our virus analysts, who analyze it and give their verdicts as to whether it's malware or not.
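In code terms, that triage logic boils down to something like the following minimal sketch. It's purely illustrative – the names, the classifier interface and the confidence threshold are all hypothetical, not our actual pipeline:

```python
# A minimal, hypothetical sketch of the cloud triage logic described above -
# not the real pipeline; names, interfaces and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str
    features: list            # static/behavioral features extracted from the object

def triage(sample: Sample, classifier, confidence: float = 0.999) -> str:
    """Auto-verdict when the model is confident; escalate the rest to analysts."""
    # hypothetical model interface returning a probability in [0, 1]:
    # 0.0 = certainly clean, 1.0 = certainly malware
    score = classifier.malware_probability(sample.features)
    if score >= confidence:
        return "add_to_detection_db"    # confirmed malware -> detection database + archive
    if score <= 1.0 - confidence:
        return "discard"                # confidently clean -> the bin
    return "manual_analysis"            # the ~0.01% -> virus analyst queue
```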

Ok – I hope that part’s all clear.

Next: What about the possibility of a hack into our products by Russian-government-backed hackers?

Theoretically such a hack is possible (program code is written by humans, and humans will make mistakes), but I put the probability of an actual hack at zero. Here’s one example as to why:

In the same year as the events the WSJ describes, we discovered on our own network an attack by an unknown, seemingly state-sponsored actor – Duqu 2.0. Consequently we conducted a painstakingly detailed audit of our source code, updates and other technologies, and found… no signs whatsoever of any third-party breach of any of it. So as you can see, we take any reports about possible vulnerabilities in our products very seriously. And this new report about possible vulnerabilities is no exception, which is why we'll be conducting another deep audit very soon.

The takeaway:

If the story about our product’s uncovering of government-grade malware on an NSA employee’s home computer is real, then that, ladies and gents, is something to be proud of. Proactively detecting previously unknown highly-sophisticated malware is a real achievement. And it’s the best proof there is of the excellence of our technologies, plus confirmation of our mission: to protect against any cyberthreat no matter where it may come from or its objective.

So, like I say… here’s to aggressive detection of malware. Cheers!

Keeping Cybersecurity Separate from Geopolitics.

Last week, Kaspersky Lab was in the spotlight again in another ‘sensational’ news stream.

I say 'again' as this isn't the first time we've been faced with allegations, ungrounded speculation and all sorts of other made-up things since the geopolitical situation changed a few years ago. With the U.S. and Russia at odds, somehow my company, its innovative and proven products, and our amazing employees are repeatedly being defamed – simply because I started the company in Russia 20 years ago. While this wasn't really a problem before, I get it – it's definitely not popular to be Russian right now in some countries.

For some reason the assumption continues to resonate that since we’re Russian, we must also be tied to the Russian government. But really, as a global company, does anyone seriously think we could survive this long if we were a pawn of ANY government? Our whole business is based on one thing – besides expertise – and that’s trust. Would we really risk our whole business by undermining our trustworthiness?

Especially given that the best non-Kaspersky Lab security researchers (hackers) are constantly scouring our code and products to find and report vulnerabilities. In fact, we even have a public bug bounty program, where we pay researchers to examine our products and search for any issues or possible security concerns. If there were anything suspicious or nefarious to find, they would have publicly shouted it from the rooftops by now.

Read on: Five destructive repercussions of a technology sanctions game…

Cyber-Forecast: 2017.

Such is the way Homo Sapiens are: we’re constantly – even recklessly – looking to the future to try and work out what it might hold for us. Many say we should all live in the present – after all, the future never comes – but, well, that doesn’t work for everyone, and most of us do need to make at least some plans for our futures.

But there are different approaches to looking ahead.

There’s belief in fate, pure guessing, flipping a coin, and so on. There’s also not thinking about the future at all. But there’s a far superior, science-based approach too. This is doing the eastern spirituality thing a bit – not quite being in the present but carefully analyzing the present instead – to be able to predict the future as accurately as possible. And this is exactly what is done to predict the cyber-future; in particular – the security of the cyber-future. And that’s what we do – little by little every day, but also broadly and deeply and especially – and merrily – every year, when we bring together the world’s cybersecurity elite for a week-long pow-wow in a tropical seaside resort, which pow-wow we call the Security Analyst Summit (SAS):

Oops – wrong vid. Here u go…:

D'oh! Nope. This one:

I don’t know quite how it’s done but every single year SAS just gets better. I mean, it’s always been GReAT, but the GReATness just keeps going up and up: more experts, better quality content, better and more original ideas, slicker, cooler, and more and more world scoops and exclusive material.

And it’s exclusive material that I’ll be writing about in this here post. Specifically, my Top-5 favorite presentations from SAS-2017. I’m not saying the others were no good or just so-so, it’s just I wasn’t physically able to see them all as they were running simultaneously in different halls. Also – everyone has their own taste; well here’s a guide to mine!…

Off we go!…

Read on: A Maze for a Penguin Under the Moonlight…

StoneDrill: We've Found a Powerful New 'Shamoon-ish' Wiper Malware – and It's Serious.

If you’re a regular reader of this here blog of mine, you’ll know about our GReAT (Global Research and Analysis Team) – 40+ top-notch cybersecurity experts dotted all around the globe specializing in protecting our customers from the most sophisticated cyberthreats out there. GReATers like to compare their work to paleontology: exploring the deep web for the ‘bones’ of ‘cyber monsters’. Some may consider this an old-fashioned approach: what’s so special about analyzing the ‘bones’ of ‘creatures’ from the distant past when it’s protecting your networks from monsters that are alive now that’s key? Well, here’s a fresh story that proves that sometimes you won’t find today’s living monsters without looking at old ones…

Some of you will be aware of so-called wipers – a type of malware which, once installed on an attacked PC, completely wipes all data from it – leaving the owner of the computer with a completely clean, barely operational piece of hardware. The most famous (and infamous) wiper is Shamoon – malware which in 2012 made a lot of noise in the Middle East by destroying data on 30,000+ endpoints at the world's largest oil company – Saudi Aramco, and also hitting another energy giant – Rasgas. Just imagine: 30,000+ pieces of inoperable hardware in the world's largest oil company…

Shamoon, Shamoon 2.0, StoneDrill, Newsbeef. The wipers are spreading across the globe

Curiously, since its devastating campaign against the Saudi company in 2012, little was heard of Shamoon – until it returned in 2016 as Shamoon 2.0, with several new waves of attacks, again in the Middle East.

Since the new waves of Shamoon attacks began, we’ve been tuning our sensors to search for as many versions of this malware as possible (because, let’s face it, we don’t want ANY of our customers to EVER be struck by malware like Shamoon). And we managed to find several versions – hurray! But together with our haul of Shamooners, our nets unexpectedly caught a completely new type of wiper malware, which we’ve named StoneDrill.

The code base of StoneDrill is different to that of Shamoon, and that’s why we think it’s a completely new malware family; it also utilizes some advanced detection avoidance techniques, which Shamoon doesn’t. So it’s a new player, for sure. And one of the most unusual – and worrying – things we’ve learned about this malware is that, unlike Shamoon, StoneDrill doesn’t limit the scope of its targets to Saudi Arabia or other neighboring countries. We’ve found only two targets of this malware so far, and one of them is based in Europe.

Why is this worrying? Because this finding indicates that certain malicious actors armed with devastating cyber-tools are testing the water in regions that previously held little interest for actors of this type.

Read on: more wipers!…

Uh-Oh Cyber-News: Infect a Friend, Rebooting Boeings, No-Authentication Holes, and More.

Hi folks!

Herewith, the next installment in my ‘Uh-oh Cyber-News’ column – the one in which I keep you up to date with all that’s scarily fragile and frailly scary in the digital world.

Since the last ‘Uh-oh’ a lot has piled up that really needs bringing to your attention. Yep, the flow of ‘Uh-ohs’ has indeed turned from mere mountain-stream trickle to full-on Niagara levels. And that flow just keeps on getting faster and faster…

As a veteran of cyber-defense, I can tell you that in times past, cataclysms of a planetary scale were discussed for maybe half a year. Now, though, the stream of messages is like salmon in spawning season: overload! There are so many they're hardly worth mentioning, as they're already yesterday's news before you can say 'digital over-DDoSe'. "I heard how they hacked Mega-Corporation X the other day and stole everything; even the boss's hamster was whisked away by a drone!"…

Anyway, since the stream of cyber-scandals is rapidly on the up and up, accordingly, the number of such scandals I'll be writing about has also gone up. In the past there were three or four per blogpost. Today: seven!

Popcorn/coffee/beer at the ready? Off we go…

1) Infect a Friend and Get Your Own Files Unlocked for Free.

Read on: Effective Hacker Headhunting…

Uh-oh Cyber-News: Infected Nuclear Reactors, Cyber-Bank Robbers, and Cyber-Dam-Busters.

Just a quick read of the news these days and you can find yourself wanting to reach for… a Geiger counter. I mean, some of the news stories are just so alarming of late. Or am I overreacting? Let’s see…

Uh-oh News Item No. 1: Apocalypse Averted – for Now. 

Photo courtesy of Wikipedia

It was reported that the IT system of Unit B of the Gundremmingen Nuclear Power Plant in Swabia, Bavaria, southwestern Germany – right on the 30-year anniversary to-the-day of the Chernobyl disaster (!) – had been infected by some malware. However, it was also reported that there’s no reason to worry at all as no danger’s being posed whatsoever. All’s ok; we can all sleep soundly; everything’s under control; the danger level couldn’t be lower.

After sighing a ‘pheewwwww’ and mopping one’s brow, you read further…

… And as you do, you get a few more details of the incident. And it does indeed seem all is ok: the background radiation level, after all, didn’t go up – that’s the main thing, surely. Right? But then you read further still…

And you find out that the (Internet-isolated) system that was infected happens to be the one that controls the movement of nuclear fuel rods. It’s here you stop, rub the eyes, and read that again slowly…

WHAAAAT?

Read on: Cyber-Spy-Novel-Worthy …

Get Your KICS en Route to Industrial Protection.

Hurray!

We've launched our KICS (Kaspersky Industrial CyberSecurity) – the special cyber-inoculation against cyber-disease, which protects factories, power plants, hospitals, airports, hotels, warehouses, your favorite deli, and thousands of other types of enterprises that use industrial control systems (ICS). Or, put another way, since it's rare for an enterprise today to manage without such systems, we've just launched a cyber-solution for millions of large, medium and small production and service businesses all around the world!

So what’s this KICS all about exactly? What’s it for? First, rewind…

Before the 2000s a cyberattack on an industrial installation was a mere source of inspiration for science fiction writers. But on August 14, 2003 in northeastern USA and southeastern Canada, the science fiction became a reality:

Oops

Because of certain power grid glitches, 50 million North Americans went without electricity – some for several hours, others for several days. Many explanations were put forward for this man-made catastrophe, including unkempt trees, a bolt of lightning, malicious squirrels, and… a side-effect of a cyberattack using the Blaster (Lovesan) computer worm.

Read on: Hacked in 60 seconds…

The Big Picture.

Last spring (2015), we discovered Duqu 2.0 – a highly professional, very expensive, cyber-espionage operation. Probably state-sponsored. We identified it when we were testing the beta-version of the Kaspersky Anti Targeted Attack (KATA) platform – our solution that defends against sophisticated targeted attacks just like Duqu 2.0.

And now, a year later, I can proudly proclaim: hurray!! The product is now officially released and fully battle ready!

Kaspersky Anti-Targeted Attack Platform

But first, let me now go back in time a bit to tell you about why things have come to this – why we’re now stuck with state-backed cyber-spying and why we had to come up with some very specific protection against it.

(For those who'd prefer to go straight to the beef in this here post – click here.)

‘The good old days’ – words so often uttered as if bad things just never happened in the past. The music was better, society was fairer, the streets were safer, the beer had a better head, and on and on and on. Sometimes, however, things really were better; one example being how relatively easy it was to fight cyber-pests in years past.

Of course, back then I didn’t think so. We were working 25 hours a day, eight days a week, all the time cursing the virus writers and their phenomenal reproduction rate. Each month (and sometimes more often) there were global worm epidemics and we were always thinking that things couldn’t get much worse. How wrong we were…

At the start of this century viruses were written mainly by students and cyber-hooligans. They’d neither the intention nor the ability to create anything really serious, so the epidemics they were responsible for were snuffed out within days – often using proactive methods. They simply didn’t have any motivation for coming up with anything more ominous; they were doing it just for kicks when they’d get bored of Doom and Duke Nukem :).

The mid-2000s saw big money hit the Internet, plus new technologies that connected everything from power plants to mp3 players. Professional cybercriminal groups also entered the stage seeking the big bucks the Internet could provide, while cyber-intelligence-services-cum-armies were attracted to it by the technological possibilities it offered. These groups had the motivation, means and know-how to create reeeaaaally complex malware and conduct reeeaaaally sophisticated attacks while remaining under the radar.

Around about this time… ‘antivirus died’: traditional methods of protection could no longer maintain sufficient levels of security. Then a cyber-arms race began – a modern take on the eternal model of power based on violence – either attacking using it or defending against its use. Cyberattacks became more selective/pinpointed in terms of targets chosen, more stealthy, and a lot more advanced.

In the meantime ‘basic’ AV (which by then was far from just AV) had evolved into complex, multi-component systems of multi-level protection, crammed full of all sorts of different protective technologies, while advanced corporate security systems had built up yet more formidable arsenals for controlling perimeters and detecting intrusions.

However, that approach, no matter how impressive on the face of it, had one small but critical drawback for large corporations: it did little to proactively detect the most professional targeted attacks – those that use unique malware, tailor-made social engineering and zero-days; malware that can stay unnoticed by security technologies.

I’m talking attacks carefully planned months if not years in advance by top experts backed by bottomless budgets and sometimes state financial support. Attacks like these can sometimes stay under the radar for many years; for example, the Equation operation we uncovered in 2014 had roots going back as far as 1996!

Banks, governments, critical infrastructure, manufacturing – tens of thousands of large organizations in various fields and with different forms of ownership (essentially the basis of today's world economy and order) – all of it turns out to be vulnerable to these super-professional threats. And the demand for targets' data, money and intellectual property is high and continually rising.

So what’s to be done? Just accept these modern day super threats as an inevitable part of modern life? Give up the fight against these targeted attacks?

No way.

Anything that can be attacked – no matter how sophisticatedly – can be protected to a great degree if you put serious time and effort and brains into that protection. There’ll never be 100% absolute protection, but there is such a thing as maximal protection, which makes attacks economically unfeasible to carry out: barriers so formidable that the aggressors decide to give up putting vast resources into getting through them, and instead go off and find some lesser protected victims. Of course there’ll be exceptions, especially when politically motivated attacks against certain victims are on the agenda; such attacks will be doggedly seen through to the end – a victorious end for the attacker; but that’s no reason to quit putting up a fight.

All righty. Historical context lesson over, now to that earlier mentioned sirloin…

…Just what the doctor ordered against advanced targeted attacks – our new Kaspersky Anti Targeted Attack platform (KATA).

So what exactly is this KATA, how does it work, and how much does it cost?

First, a bit on the anatomy of a targeted attack…

A targeted attack is always exclusive: tailor-made for a specific organization or individual.

The baddies behind a targeted attack start out by scrupulously gathering information on the targets right down to the most minor of details – for the success of an attack depends on the completeness of such a ‘dossier’ almost as much as the budget of the operation. All the targeted individuals are spied on and analyzed: their lifestyles, families, hobbies, and so on. How the corporate network is constructed is also studied carefully. And on the basis of all the information collected an attack strategy is selected.

Next, (i) the network is penetrated and remote (& undetected) access with maximum privileges is obtained. After that, (ii) the critical infrastructure nodes are compromised. And finally, (iii) ‘bombs away!’: the pilfering or destruction of data, the disruption of business processes, or whatever else might be the objective of the attack, plus the equally important covering one’s tracks so no one knows who’s responsible.

The motivation, the duration of the various prep-and-execution stages, the attack vectors, the penetration technologies, and the malware itself – all of it is very individual. But no matter how exclusive an attack gets, it will always have an Achilles' heel. For an attack will always cause at least a few tiny but noticeable happenings: abnormal network activity, certain behavior of files and other objects, and other anomalies being thrown up. So seeing the bird's-eye-view big picture – in fact the whole picture formed from different sources around the network – makes it possible to detect a break-in.

To collect all the data about such anomalies and create the big picture, KATA uses sensors – special 'e-agents' – which continuously analyze IP/web/email traffic plus events on workstations and servers.

For example, we intercept IP traffic (HTTP(s), FTP, DNS) using TAP/SPAN; the web sensor integrates with the proxy servers via ICAP; and the mail sensor is attached to the email servers via POP3(S). The agents are real lightweight (for Windows – around 15 megabytes), are compatible with other security software, and make hardly any impact at all on either network or endpoint resources.

All collected data (objects and metadata) are then transferred to the Analysis Center for processing using various methods (sandbox, AV scanning and adjustable YARA rules, checking file and URL reputations, vulnerability scanning, etc.) and archiving. It’s also possible to plug the system into our KSN cloud, or to keep things internal – with an internal copy of KpSN for better compliance.
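To give a feel of what 'adjustable YARA rules' means in practice, here's a tiny, purely illustrative sketch using the open-source yara-python bindings – the rule itself is made up and isn't a real KATA rule:

```python
# Illustrative only: applying a (made-up) YARA rule to a collected object
# with the open-source yara-python bindings.

import yara

SUSPICIOUS_DOC_RULE = r"""
rule suspicious_macro_doc
{
    strings:
        $autoopen = "AutoOpen" nocase
        $shell    = "WScript.Shell" nocase
    condition:
        all of them
}
"""

rules = yara.compile(source=SUSPICIOUS_DOC_RULE)

def scan_object(path: str) -> bool:
    """Return True if the collected object matches any loaded rule."""
    return bool(rules.match(path))

# scan_object("/tmp/collected_object.doc")   # hypothetical path to a collected sample
```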

Once the big picture is assembled, it’s time for the next stage! KATA reveals suspicious activity and can inform the admins and SIEM (Splunk, Qradar, ArcSight) about any unpleasantness detected. Even better – the longer the system works and the more data accumulates about the network, the more effective it is, since atypical behavior becomes easier to spot.
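And 'inform the SIEM' at the wire level might look something like this – an illustrative CEF-formatted event pushed over syslog to a collector (CEF was popularized by ArcSight; Splunk and QRadar can ingest it too). The host, port, vendor/product names and field values are all made up for the example:

```python
# Illustrative sketch: one CEF event sent over UDP syslog to a hypothetical SIEM collector.

import socket
from datetime import datetime, timezone

SIEM_HOST, SIEM_PORT = "siem.example.local", 514   # hypothetical collector address

def send_cef_alert(name: str, severity: int, extensions: str) -> None:
    """Send a single CEF-formatted event over UDP syslog."""
    timestamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    cef = f"CEF:0|ExampleVendor|ExampleATA|1.0|100|{name}|{severity}|{extensions}"
    message = f"<134>{timestamp} sensor01 {cef}"    # <134> = facility local0, severity info
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), (SIEM_HOST, SIEM_PORT))

send_cef_alert("Suspicious lateral movement", 8, "src=10.0.0.5 dst=10.0.0.9")
```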

More details on how KATA works… here.

Ah yes; nearly forgot… how much does all this cost?

Well, there’s no simple answer to that one. The price of the service depends on dozens of factors, including the size and topology of the corporate network, how the solution is configured, and how many accompanying services are used. One thing is clear though: the cost pales into insignificance if compared with the potential damage it prevents.

Cyber-news: Vulnerable nuclear power stations, and cyber-saber… control?

Herewith, a quick review of and comment on some ‘news’ – rather, updates – on what I’ve been banging on about for years! Hate to say ‘told you so’, but… TOLD YOU SO!

First.

(Random pic of) the Cattenom Nuclear Power Plant in France where, I hope, all is tip-top in terms of cybersecurity

I've been pushing for better awareness of the cybersecurity problems of industry and infrastructure for, er, let's see, more than 15 years. There has of late been an increase in discussion of this issue around the world by state bodies, research institutes, the media and the general public; however, to my great chagrin, though there's been a lot of talk, there's still not been much in the way of real progress in actually getting anything done physically, legally, diplomatically, and all the other …lys. Here's one stark example demonstrating this:

Earlier this week, Chatham House, the influential British think tank, published a report entitled ‘Cyber Security at Civil Nuclear Facilities: Understanding the Risks’. Yep, the title alone brings on goosebumps; but some of the details inside… YIKES.

I won’t go into those details here; you can read the report yourself – if you’ve plenty of time to spare. I will say here that the main thrust of the report is that the risk of a cyberattack on nuclear power plants is growing all around the world. UH-OH.

The report is based exclusively on interviews with experts. Yes, meaning no primary, referenceable evidence was used. Hmmm. A bit like someone trying to explain the contents of an erotic movie – it doesn't really compare to watching the real thing. Still, I guess this is to be expected: this sector is, after all, secretive the whole world over.

All the same, let me now describe the erotic movie as it was described to me (through reading the report)! At least, let me go through its main conclusions – all of which, if you really think about them, are apocalyptically alarming:

  1. Physical isolation of computer networks of nuclear power stations doesn’t exist: it’s a myth (note, this is based on those stations that were surveyed, whichever they may be; nothing concrete). The Brits note that VPN connections are often used at nuclear power stations – often by contractors; they’re often undocumented (not officially declared), and are sometimes simply forgotten about while actually staying fully alive and ready for use [read: abuse].
  2. A long list of Internet-connected industrial systems can be found via specialized search engines like Shodan (see the short sketch after this list).
  3. Where physical isolation may exist, it can still be easily gotten around with the use of USB sticks (as in Stuxnet).
  4. Throughout the whole world the atomic energy industry is far from keen on sharing information on cyber-incidents, making it tricky to accurately understand the extent of the security situation. Also, the industry doesn't collaborate much with other industries, meaning it doesn't learn from their experience and know-how.
  5. To cut costs, regular commercial (vulnerable) software is increasingly used in the industry.
  6. Many industrial control systems are ‘insecure by design’. Plus patching them without interrupting the processes they control is very difficult.
  7. And much more besides in the full 53-page report.
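
On point 2, here's how little it takes. This is a hedged, illustrative sketch using the public Shodan API via the shodan Python package – you'd need your own API key, and obviously it should only be used responsibly and legally:

```python
# Illustrative: querying Shodan for devices speaking Modbus/TCP (port 502),
# a protocol widely used in industrial control systems. API key is a placeholder.

import shodan

api = shodan.Shodan("YOUR_API_KEY")

results = api.search("port:502")
print(f"Internet-facing Modbus devices indexed: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("org", "unknown org"))
```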

These scary facts and details are hardly news for IT security specialists. Still, let’s hope that high-profile publications such as this one will start to bring about change. The main thing at present is for all the respective software to be patched asap, and for industrial IT security in general to be bolstered to a safe level before a catastrophe occurs – not after.

Among other things, the report recommends promoting ‘secure by design’ industrial control systems. Hear hear! We’re totally in support of that one! Our secure OS is one such initiative. To make industrial control systems, including SCADA, impenetrable, requires an overhaul of the principles of cybersecurity on the whole. Unfortunately, the road towards that is long – and we’re only at the very beginning of it. Still, at least we’re all clear on which direction to head toward. Baby steps…

Second.

For several years I've also been pushing for the creation of a global agreement against cyberwar. Though we see signs of a better understanding of the logic of such an agreement on the part of all the respective parties – academics, diplomats, governments, international organizations, etc. – we're seeing little real progress towards any concrete agreement, just like with the securing of industrial systems. Still, at least the reining in of cyber-spying and cyberwar is on the agenda at last.

Photo: Michael Reynolds/EPA. Source

For example, Barack Obama and Xi Jinping at the end of September agreed that their countries – the two largest economies in the world – won't engage in commercial cyberspying on each other anymore. Moreover, the topic of cybersecurity dominated their joint press conference (together with a load of measures aimed at slowing climate change). Curiously, the thorny issues of political and military cyber-espionage weren't brought up at all!

So. Does this represent a breakthrough? Of course not.

Still, again, at least this small step is in the right direction. There have also been rumors that Beijing and Washington are holding negotiations regarding an agreement on prohibiting attacks in cyberspace. At the September meeting of the leaders the topic wasn’t brought up, but let’s hope it will be soon. It would be an important, albeit symbolic, step.

Of course, ideally, such agreements would be signed in the future by all countries in the world, bringing the prospect of a demilitarized Internet and cyberspace that little bit closer. Yes, that would be the best scenario; however, for the moment, not the most realistic. Let’s just keep pushing for it.

Your car controlled remotely by hackers: it’s arrived.

Every now and again (once every several years or so), a high-profile unpleasantness occurs in the cyberworld – some unexpected new maliciousness that fairly bowls the world over. For most ‘civilians’ it’s just the latest in a constant stream of seemingly inevitable troublesome cyber-surprises. As for my colleagues and me, we normally nod, wink, grimace, and raise the eyebrows à la Roger Moore among ourselves while exclaiming something like: ‘We’ve been expecting you Mr. Bond. What took you so long?’

For we’re forever studying and analyzing the main tendencies of the Dark Web so we can get an idea of who’s behind its murkiness and of the motivations involved; that way we can predict how things are going to develop.

Every time one of these new ‘unexpected’ events occurs, I normally find myself in the tricky position of having to give a speech (rather – speeches) along the lines of ‘Welcome to the new era‘. Trickiest of all is admitting I’m just repeating myself from a speech made years ago. The easy bit: I just have to update that old speech a bit by adding something like: ‘I did warn you about this; and you thought I was just scaremongering to sell product!’

Ok, you get it (no one likes being told ‘told you so’, so I’ll move on:).

So. What unpleasant cyber-unexpectedness is it this time? Actually, one affecting something close to my heart: the world of automobiles!

A few days ago WIRED published an article with an opening sentence that reads: ‘I was driving at 70 mph on the edge of downtown St. Louis when the exploit began to take hold.‘ Eek!

The piece goes on to describe a successful experiment in which security researchers remotely 'kill' a car that's too clever by half: they dissected (over months) the computerized Uconnect system of a Jeep Cherokee, eventually found a vulnerability, and then managed to seize control of the critical functions of the vehicle via the Internet – while the WIRED reporter was driving the vehicle on a highway! I kid you not, folks. And we're not talking a one-off 'lab case' here affecting one car. Nope, the hole the researchers found and exploited affects almost half a million cars. Oops – and eek! again.

Jeep Cherokee smart car remotely hacked by Charlie Miller and Chris Valasek. The image originally appeared in Wired

However, the problem of security of 'smart' cars is nothing new. I first 'joked' about this topic back in 2002. Ok, it was on April 1. But now it's for real! You know what they say… Be careful what you joke about (there's many a true word spoken in jest:).

Not only is the problem not new, it's also quite logical that it's becoming serious: manufacturers compete for customers, and as there's hardly a customer left who doesn't carry a smartphone at all times, it's only natural that the car (the more expensive – the quicker) has steadily been transformed into its appendage (an appendage of the smartphone – not the user, just in case anyone didn't understand me correctly).

More and more control functions of smart cars are now firmly in the domain of the smartphone. And Uconnect isn't unique here; practically every large car manufacturer has its own similar technology, some more advanced than others: there's Volvo On Call, BMW Connected Drive, Audi MMI, Mercedes-Benz COMAND, GM OnStar, Hyundai Blue Link and many others.

More and more convenience for the modern car-driving consumer – all well and good. The problem is though that in this manufacturers’ ‘arms race’ to try and outdo each other, critical IT security matters often go ignored.

Why? 

First, the manufacturers see staying ahead of the Joneses as paramount: the coolest tech functionality via a smartphone sells cars. 'Security aspects? Let's get to that later, eh? We need to roll this out yesterday.'

Second, remote control of cars is a market with good prospects.

Third, throughout the auto industry there’s a tendency – still today! – to view all the computerized tech on cars as something separate, mysterious, faddy (yep!) and not really car-like, so no one high up in the industry has a genuine desire to ‘get their hands dirty’ with it; therefore, the brains applied to it are chronically insufficient to make the tech secure.

It all adds up to a situation where fancy motorcars are becoming increasingly hackable and thus stealable. Great. Just what the world needs right now.

What the…?

Ok. That’s the basic outline. Now for the technical background and detail to maybe get to know what the #*@! is going on here!…

Way back in 1985 Bosch developed CAN. No, not their compatriot avant-garde rockers (who’d been around since 1968), but a ‘controller area network’ – a ‘vehicle bus’ (onboard communications network), which interconnects and regulates the exchange of data among different devices – actually, those devices’ microcontrollers – directly, without a central computer.

For example, when the ‘AC’ button on the dashboard is pressed, the dashboard’s microcontroller sends a signal to the microcontroller of the air conditioner saying ‘turn on, the driver wants cooling down’. Or when the brake pedal is pressed, the microcontroller of the pedal mechanism sends an instruction to the brake pads to press up against the brake discs.

CAN stands for 'controller area network', a 'vehicle bus' which interconnects and regulates the exchange of data among different devices in a smart car

Put another way, the electronics system of a modern automobile is a peer-to-peer computer network – designed some 30 years ago. It gets better: despite the fact that over three decades CAN has been repeatedly updated and improved, it still doesn’t have any security functions! Maybe that’s to be expected – what extra security can be demanded of, say, a serial port? CAN too is a low level protocol and its specifications explicitly state that its security needs to be provided by the devices/applications that use it.
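To see just how trusting CAN is, here's a minimal sketch using the open-source python-can library and a Linux virtual CAN interface (vcan0). The arbitration ID and payload are invented; this illustrates the protocol's lack of sender authentication, nothing more:

```python
# Any node on a CAN bus can emit a frame with any arbitration ID - there's no
# authentication and no notion of sender identity. Illustrative values only.

import can

bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

spoofed_frame = can.Message(
    arbitration_id=0x1A0,            # hypothetical ID some ECU might listen for
    data=[0x00, 0xFF, 0x00, 0x00],   # arbitrary payload
    is_extended_id=False,
)
bus.send(spoofed_frame)              # the bus accepts it, no questions asked
```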

Maybe they don’t read the manuals. Or maybe they’re too busy trying to stay ahead of competitors and come up with the best smart car features.

Whatever the reasons, the fundamental fact causing all the trouble remains: Some auto manufacturers keep squeezing onto CAN more and more controllers without considering basic rules of security. Onto one and the same bus – which has neither access control nor any other security features – they strap the entire computerized management system that controls absolutely everything. And it’s connected to the Internet. Eek!

Hooking up devices to the Internet isn't a good idea. Engineers should think twice before doing this

Just like on any big computer network (e.g., the Internet), cars too need a strict ‘division of trust’ for controllers. Operations on a car where there’s communication with the outside world – be it installation of an app on the media system from an online store, or sending car performance diagnostics to the manufacturer – need to be firmly and securely split from the engine control, the security and other critical systems.
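In code terms, that 'division of trust' might take the shape of a gateway between the connected/infotainment bus and the critical bus, forwarding only an explicit allowlist of message IDs. Again a hedged sketch with the python-can library, hypothetical IDs and virtual interfaces – not a description of any real car's architecture:

```python
# A toy gateway: only allowlisted message IDs cross from the infotainment bus
# to the critical (powertrain) bus; everything else is silently dropped.

import can

ALLOWED_TO_CRITICAL = {0x3D0, 0x3D1}    # e.g., read-only diagnostics requests (hypothetical)

infotainment_bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
critical_bus = can.interface.Bus(channel="vcan1", bustype="socketcan")

for frame in infotainment_bus:          # iterating a Bus yields received Messages
    if frame.arbitration_id in ALLOWED_TO_CRITICAL:
        critical_bus.send(frame)        # pass through the few permitted messages
    # anything aimed at engine, brakes, steering, etc. never reaches the critical bus
```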

If you show an IT security specialist a car, lots of functions of which can be controlled by, say, an Android app, he or she would be able to demonstrate in no time at all a dozen or so different ways to get round the ‘protection’ and seize control of the functions the app can control. Such an experiment would also demonstrate how the car isn’t all that different really from a bank account: bank accounts can be hacked with specially designed technologies, in their case with banking Trojans. But there is a further potential method that could be used to hack a car just like a bank account too: with the use of a vulnerability, like in the case of the Jeep Cherokee.

Any reasons to be cheerful?…

…There are some.

Now, the auto industry (and just about everyone else) seems to be well aware of the degree of seriousness of the problem of cybersecurity of its smart car sector (thanks to security researchers like those in the WIRED article, though some manufacturers are loath to show their gratitude openly).

A sign of this is how recently the US Alliance of Automobile Manufacturers announced the creation of an Information Sharing and Analysis Center, “that will serve as a central hub for intelligence and analysis, providing timely sharing of cyber threat information and potential vulnerabilities in motor vehicle electronics or associated in-vehicle networks.” Good-o. I just don’t see how they plan to get along without security industry folks involved.

And it's not just the motor industry that's now on its toes: hours (!) after the publication of the WIRED article (the timing was a coincidence, it was reported), new federal legislation was introduced in the US establishing standardization of motor industry technologies in the field of cybersecurity. Meantime, we're hardly twiddling our thumbs or sitting on our hands: we're actively working with several auto brands, consulting them on how to get their smart-car cybersecurity tightened up proper.

So, as you can see, there is light at the end of the tunnel. However…

…However, the described cybersecurity issue isn’t limited just to the motor industry.

CAN and other standards like it are used in manufacturing, the energy sector, transportation, utilities, ‘smart houses’, even in the elevator in your office building – in short – EVERYWHERE! And everywhere it’s the same problem: the growth of functionality of all this new tech is hurtling ahead without taking security into account!

What seems more important is always improving the tech faster, making it better than the competition, giving it smartphone connectivity and hooking it up to the Internet. And then they wonder how it’s possible to control an airplane via its entertainment system!

What needs doing?

First things first, we need to move back to pre-Internet technologies, like propeller-driven aircraft with analog-mechanical control systems…

…Not :). No one’s planning on turning the clocks back, and anyway, it just wouldn’t work: the technologies of the past are slow, cumbersome, inefficient, inconvenient and… a lot less secure! Nope, there’s no going backwards. Only forwards!

In our era of polymers, biotechnologies and all-things-digital, movement forward is producing crazy results. Just look around you – and inside your pockets. Everything is moving, flying, being communicated, delivered and received, exchanged… all at vastly faster speeds than in the past. Cars (and other vehicles) are only a part of that.

All that does make life more comfortable and convenient, and digitization is solving many old problems of reliability and security. But alas, at the same time it’s creating new problems. And if we keep galloping forward at breakneck speed, without looking back, improvising as we hurtle along to get the very best functionality, well, in the end there are going to be unpredictable – even fatal – consequences. A bit like how it was with the Zeppelin.

There is an alternative – a much better one: what we need are industry standards, new modern architecture, and a responsible attitude to the development of features – one that treats security as a priority.

In all, the WIRED article has shown us a very interesting investigation. It will be even more interesting seeing how things progress in the industry from here. Btw, at the Black Hat conference in Vegas in August there’ll be a presentation by the authors of the Jeep hack – that’ll be something worth following…

Smart cars can be remotely hacked. Fact. Period. Shall we go back to the Stone Age? @e_kaspersky explains.

PS: Call me retrogressive (in fact I’m just paranoid:), but no matter how smart the computerization of a car, I’d straight away just switch it all off – if there was such a possibility. Of course, there isn’t. There should be: a button, say, next to the hazard lights’ button: ‘No Cyber’!…

…PPS: ‘Dream on, Kasper’, you might say. And perhaps you’d be right: soon, the way things are heading, a car without a connection to the ‘cloud’ won’t start!

PPPS: But the cloud (and all cars connected to it) will soon enough be hacked via some ever-so crucial function, like facial recognition of the driver to set the mirror and seat automatically.

PPPPS: Then cars will be given away for free, but tied to a particular digital network – with pop-ups appearing right on the windscreen. During the ad-break control will be taken over and put into automatic Google mode.

PPPPPS: What else can any of you bright sparks add to this stream-of-consciousness brainstorming-rambling? :)