Darwinism in IT Security, Pt. 3: Time to Deal with These No-Good Parasites.

Hi all!

On a bit of a roll here on the survival-of-the-fittest-in-IT theme. Wasn’t planning a trilogy… it just kinda happened. Sort of…

…Sort of, as, well, the specific problem of parasites in the IT Security world I’ll be writing about today has been at the back of my mind for a long time already. This Darwinism talk seemed the perfect opportunity to finally let rip. You’ll see what I mean…

Today folks: parasites. But not those we’re fighting against (the ‘very’ bad guys); those who claim they’re also fighting the very bad guys (philosophical question: who’s worse?).

Infosec parasites practicing detection adoption are killing the industry and indirectly assisting cybercrime

The IT industry today is developing at a galloping pace. Just 10-15 years ago its main themes were desktop antiviruses, firewalls and backups; today there’s a mass of different new security solutions, approaches and ideas. Sometimes we manage to stay ahead of the curve; sometimes we have some catch-up to do. And there are other times we fall into a stupor from astonishment – not because of new technologies, innovations or fresh ideas, but because of the barefaced brazenness and utter unscrupulousness of our colleagues in the security industry.

But first, let me explain how events have been developing.

There’s a very useful service called the VirusTotal multiscanner. It aggregates around 60 antivirus engines, which it uses to scan files and URLs folks send it for malware checking, and then it returns the verdict.

Example: Joe Bloggs finds a suspicious application or office document on a hard drive/USB stick/the Internet. Joe’s own antivirus software doesn’t flag it as containing malware, but Joe is the paranoid type; he wants to make really sure it’s not infected. So he heads over to the VirusTotal site, which doesn’t have just one antivirus solution like he does, but ~60. It’s free too, so it’s a no-brainer. So Joe uploads the file to VirusTotal and gets instant info on what all the different AVs think about it.
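For the technically curious, here’s roughly what Joe’s check looks like when done through VirusTotal’s public REST API rather than the website. This is a minimal Python sketch, not official sample code; it assumes the current v3 API, and the API key and file name are placeholders:

```python
# Minimal sketch: upload a file to VirusTotal and fetch the scan verdicts.
# Assumes the public v3 REST API and a personal API key (placeholders below).
import time
import requests

API_KEY = "YOUR_VT_API_KEY"            # placeholder
FILE_PATH = "suspicious_document.doc"  # placeholder

headers = {"x-apikey": API_KEY}

# 1. Upload the file for scanning.
with open(FILE_PATH, "rb") as f:
    resp = requests.post(
        "https://www.virustotal.com/api/v3/files",
        headers=headers,
        files={"file": (FILE_PATH, f)},
    )
resp.raise_for_status()
analysis_id = resp.json()["data"]["id"]

# 2. Poll the analysis until the ~60 engines have finished.
while True:
    report = requests.get(
        f"https://www.virustotal.com/api/v3/analyses/{analysis_id}",
        headers=headers,
    ).json()
    if report["data"]["attributes"]["status"] == "completed":
        break
    time.sleep(15)

# 3. Print the aggregate verdict: how many engines flagged the file.
stats = report["data"]["attributes"]["stats"]
print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
```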

First of all, to clarify: both the folks at VirusTotal and those at VirusTotal’s owners Google are firmly on the ‘good guys’ side. They have no connection with parasites whatsoever. VirusTotal is run by a very professional team, which has for years been fulfilling the task at hand extremely effectively. (Still need convincing? How about VirusTotal winning the MVP award last year at the Security Analyst Summit (SAS)?) Today VirusTotal is one of the most important sources of new malware samples and malicious URLs; and also a very cool archeological tool for researching targeted attacks.

The problem lies with a handful of shady users of the multiscanner who, alas, are becoming more and more unblushingly unabashed in how they conduct themselves.

Read on: Things getting interesting… for the wrong reasons

Darwinism in IT Security – Pt. 2: Inoculation from BS.

Hi folks!

As promised, herewith, more on the connection between evolution theory and how protection against cyberthreats develops.

To date, what precisely brings about mutations of living organisms is unknown. Some of the more unconventional experts reckon it’s the work of viruses, which intentionally rearrange genes (yep, that’s who really rules the world!). But whatever the case may be, similar mutation processes also occur in IT Security – sometimes with the help of viruses too.

The market is tired of prophets; these days monetizing ‘panaceas’ requires a lot more investment and marketing efforts

In line with the best traditions of the principle of the struggle for existence, security technologies evolve over time: new categories of products appear, others become extinct, while some products merge with others. Regarding the latter, for example: integrity checkers were a major breakthrough in the mid-90s, but nowadays they’re a minor part of endpoint solutions. New market segments and niches appear (for example, Anti-APT) to complement the existing arsenals of protective technologies – this being a normal process of positive symbiosis. But all the while nasty parasites crawl out of the woodwork to warm themselves in the sun. C’est la vie – as it’s always been, and there’s nothing you can do about it.

In the struggle for market share in IT Security there regularly appear prophets prophesying a sudden end to ‘traditional’ technologies and – by happy chance – the simultaneous (‘just in time!’) invention of a bullshit product – sorry, revolutionary panacea (with generous discounts for the first five customers).


But this isn’t something new: any of you remember anti-spyware? In the early 2000s a huge bubble of products to get rid of spyware grew up from nothing. Much BS was fired the consumer’s way about the inability of ‘traditional antivirus’ to cope with this particular problem, but right from the beginning it was all just made up.

But the market has grown used to and tired of such prophets, and these days monetizing ‘panaceas’ requires a lot more investment and snake oil marketing efforts.

Read on: David and Don Draper Against Goliath…


Darwinism in IT Security: Adapt or Die.

“It is not the strongest of the species that survives but the most adaptable to change.”
– Charles Darwin

It’s been a while since I’ve opined on these here cyber-pages on my favorite topic – the future of IT Security, so here’s making up for that. Get ready for a lot of words – hopefully none too extraneous – on the latest Infosec tech, market and tendencies, with a side dish of assorted facts and reflections. Popcorn at the ready – off we go…

I’ll be writing here about ideal IT Security and how the security industry is evolving towards it (and what’s happening along that evolutionary road), and how all that can be explained with the help of Mr. Darwin’s theory of evolution: how natural selection leads certain species to dominate, while others fall by the wayside – left for the paleontologists in years to come. Oh, and what symbiosis is, and what parasites are.


I’ll start with some definitions…

Almost-Perfection in an Imperfect World.

Perfect protection – 100% security – is impossible. The IT Security industry can and should of course aim for perfection, in the process creating the best-protected systems possible, but each inching nearer 100% costs exponentially more – so much more that the cost of protection winds up being greater than the cost of potential damage from the harshest of scenarios of a successful attack.

Ideal protection is that where the cost of a successful attack is greater than the gain

Accordingly, it’s logical to give the following definition of realistic (attainable) ideal protection (from the viewpoint of potential victims): Ideal protection is that where the cost to hack our system is greater than the cost of the potential damage that could be caused. Or, looking at it from the other side of the barricades: Ideal protection is that where the cost of a successful attack is greater than the gain attackers would receive.
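In code, those two definitions boil down to the same inequality viewed from opposite sides of the barricades – a toy Python sketch with entirely hypothetical numbers, just to make the arithmetic plain:

```python
# Toy illustration of the 'attainable ideal protection' definitions above.
# All figures are hypothetical; the point is the inequality, not the values.

def ideal_for_defender(cost_to_hack: float, potential_damage: float) -> bool:
    # Defender's view: breaking in must cost more than the damage it could cause.
    return cost_to_hack > potential_damage

def uneconomical_for_attacker(attack_cost: float, expected_gain: float) -> bool:
    # Attacker's view: the attack must cost more than it would earn.
    return attack_cost > expected_gain

print(ideal_for_defender(cost_to_hack=2_000_000, potential_damage=500_000))    # True
print(uneconomical_for_attacker(attack_cost=300_000, expected_gain=900_000))   # False
```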

Of course, there’ll be times when how much an attack may cost doesn’t matter to the attackers; for example, to state-backed cyberwar-mongers. But that doesn’t mean we just give up.

So how do we develop a security system that provides realistic (attainable) ideal (maximum) protection?

Read on: The survival of IT’s fittest…


Uh-oh Cyber-News: Infected Nuclear Reactors, Cyber-Bank Robbers, and Cyber-Dam-Busters.

Just a quick read of the news these days and you can find yourself wanting to reach for… a Geiger counter. I mean, some of the news stories are just so alarming of late. Or am I overreacting? Let’s see…

Uh-oh News Item No. 1: Apocalypse Averted – for Now. 

Photo courtesy of Wikipedia

It was reported that the IT system of Unit B of the Gundremmingen Nuclear Power Plant in Swabia, Bavaria, southern Germany – right on the 30-year anniversary to-the-day of the Chernobyl disaster (!) – had been infected by some malware. However, it was also reported that there’s no reason to worry at all as no danger’s being posed whatsoever. All’s ok; we can all sleep soundly; everything’s under control; the danger level couldn’t be lower.

After sighing a ‘pheewwwww’ and mopping one’s brow, you read further…

… And as you do, you get a few more details of the incident. And it does indeed seem all is ok: the background radiation level, after all, didn’t go up – that’s the main thing, surely. Right? But then you read further still…

And you find out that the (Internet-isolated) system that was infected happens to be the one that controls the movement of nuclear fuel rods. It’s here you stop, rub the eyes, and read that again slowly…

WHAAAAT?

Read on: Cyber-Spy-Novel-Worthy …

Get Your KICS en Route to Industrial Protection.

Hurray!

We’ve launched our KICS (Kaspersky Industrial CyberSecurity), the special cyber-inoculation against cyber-disease, which protects factories, power plants, hospitals, airports, hotels, warehouses, your favorite deli, and thousands of other types of enterprises that use industrial control systems (ICS). Or, put another way, since it’s rare for an enterprise today to manage without such systems, we’ve just launched a cyber-solution for millions of large, medium and small production and service businesses all around the world!

So what’s this KICS all about exactly? What’s it for? First, rewind…

Before the 2000s a cyberattack on an industrial installation was a mere source of inspiration for science fiction writers. But on August 14, 2003 in northeastern USA and southeastern Canada, the science fiction became a reality:

Oops

Because of certain power grid glitches, 50 million North Americans went without electricity – some for several hours, others for several days. Many theories were put forward as to the reasons behind this man-made catastrophe, including unkempt trees, a bolt of lightning, malicious squirrels, and… a side-effect from a cyberattack using the Slammer (Blaster) computer worm.

Read on: Hacked in 60 seconds…

The Big Picture.

Last spring (2015), we discovered Duqu 2.0 – a highly professional, very expensive, cyber-espionage operation. Probably state-sponsored. We identified it when we were testing the beta-version of the Kaspersky Anti Targeted Attack (KATA) platform – our solution that defends against sophisticated targeted attacks just like Duqu 2.0.

And now, a year later, I can proudly proclaim: hurray!! The product is now officially released and fully battle ready!

Kaspersky Anti-Targeted Attack Platform

But first, let me now go back in time a bit to tell you about why things have come to this – why we’re now stuck with state-backed cyber-spying and why we had to come up with some very specific protection against it.

(While for those who’d prefer to go straight to the beef in this here post – click here.)

‘The good old days’ – words so often uttered as if bad things just never happened in the past. The music was better, society was fairer, the streets were safer, the beer had a better head, and on and on and on. Sometimes, however, things really were better; one example being how relatively easy it was to fight cyber-pests in years past.

Of course, back then I didn’t think so. We were working 25 hours a day, eight days a week, all the time cursing the virus writers and their phenomenal reproduction rate. Each month (and sometimes more often) there were global worm epidemics and we were always thinking that things couldn’t get much worse. How wrong we were…

At the start of this century viruses were written mainly by students and cyber-hooligans. They’d neither the intention nor the ability to create anything really serious, so the epidemics they were responsible for were snuffed out within days – often using proactive methods. They simply didn’t have any motivation for coming up with anything more ominous; they were doing it just for kicks when they’d get bored of Doom and Duke Nukem :).

The mid-2000s saw big money hit the Internet, plus new technologies that connected everything from power plants to mp3 players. Professional cybercriminal groups also entered the stage seeking the big bucks the Internet could provide, while cyber-intelligence-services-cum-armies were attracted to it by the technological possibilities it offered. These groups had the motivation, means and know-how to create reeeaaaally complex malware and conduct reeeaaaally sophisticated attacks while remaining under the radar.

Around about this time… ‘antivirus died’: traditional methods of protection could no longer maintain sufficient levels of security. Then a cyber-arms race began – a modern take on the eternal model of power based on violence – either attacking using it or defending against its use. Cyberattacks became more selective/pinpointed in terms of targets chosen, more stealthy, and a lot more advanced.

In the meantime ‘basic’ AV (which by then was far from just AV) had evolved into complex, multi-component systems of multi-level protection, crammed full of all sorts of different protective technologies, while advanced corporate security systems had built up yet more formidable arsenals for controlling perimeters and detecting intrusions.

However, that approach, no matter how impressive on the face of it, had one small but critical drawback for large corporations: it did little to proactively detect the most professional targeted attacks – those that use unique malware, tailored social engineering and zero-days. Malware that can stay unnoticed by security technologies.

I’m talking attacks carefully planned months if not years in advance by top experts backed by bottomless budgets and sometimes state financial support. Attacks like these can sometimes stay under the radar for many years; for example, the Equation operation we uncovered in 2014 had roots going back as far as 1996!

Banks, governments, critical infrastructure, manufacturing – tens of thousands of large organizations in various fields and with different forms of ownership (essentially the basis of today’s world economy and order) – all of them turn out to be vulnerable to these super-professional threats. And the demand for targets’ data, money and intellectual property is high and continually rising.

So what’s to be done? Just accept these modern day super threats as an inevitable part of modern life? Give up the fight against these targeted attacks?

No way.

Anything that can be attacked – no matter how sophisticatedly – can be protected to a great degree if you put serious time and effort and brains into that protection. There’ll never be 100% absolute protection, but there is such a thing as maximal protection, which makes attacks economically unfeasible to carry out: barriers so formidable that the aggressors decide to give up putting vast resources into getting through them, and instead go off and find some lesser protected victims. Of course there’ll be exceptions, especially when politically motivated attacks against certain victims are on the agenda; such attacks will be doggedly seen through to the end – a victorious end for the attacker; but that’s no reason to quit putting up a fight.

All righty. Historical context lesson over, now to that earlier mentioned sirloin…

…Just what the doctor ordered against advanced targeted attacks – our new Kaspersky Anti Targeted Attack platform (KATA).

So what exactly is this KATA, how does it work, and how much does it cost?

First, a bit on the anatomy of a targeted attack…

A targeted attack is always exclusive: tailor-made for a specific organization or individual.

The baddies behind a targeted attack start out by scrupulously gathering information on the targets right down to the most minor of details – for the success of an attack depends on the completeness of such a ‘dossier’ almost as much as the budget of the operation. All the targeted individuals are spied on and analyzed: their lifestyles, families, hobbies, and so on. How the corporate network is constructed is also studied carefully. And on the basis of all the information collected an attack strategy is selected.

Next, (i) the network is penetrated and remote (& undetected) access with maximum privileges is obtained. After that, (ii) the critical infrastructure nodes are compromised. And finally, (iii) ‘bombs away!’: the pilfering or destruction of data, the disruption of business processes, or whatever else might be the objective of the attack, plus the equally important covering one’s tracks so no one knows who’s responsible.

The motivation, the duration of the various prep-and-execution stages, the attack vectors, the penetration technologies, and the malware itself – all of it is very individual. But no matter how exclusive an attack gets, it will always have an Achilles’ heel. For an attack will always cause at least a few tiny noticeable happenings (network activity, certain behavior of files and other objects, and other such anomalies). So seeing the bird’s-eye-view big picture – in fact the whole picture formed from different sources around the network – makes it possible to detect a break-in.

To collect all the data about such anomalies and build the big picture, KATA uses sensors – special ‘e-agents’ – which continuously analyze IP/web/email traffic plus events on workstations and servers.

For example, we intercept IP traffic (HTTP(s), FTP, DNS) using TAP/SPAN; the web sensor integrates with the proxy servers via ICAP; and the mail sensor is attached to the email servers via POP3(S). The agents are real lightweight (for Windows – around 15 megabytes), are compatible with other security software, and make hardly any impact at all on either network or endpoint resources.
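To give a feel for what such a mail sensor does in principle – and this is emphatically not KATA’s actual code, just a minimal Python sketch with a placeholder mail host and credentials – the idea is: pull messages over POP3S, peel off the attachments, and hand them to the Analysis Center:

```python
# Minimal sketch of an email 'sensor': pull messages over POP3S and extract
# attachments so they can be handed to an analysis back-end. Illustrative only;
# the host and credentials are placeholders.
import email
import poplib

HOST, USER, PASSWORD = "mail.example.com", "sensor", "secret"  # placeholders

def collect_attachments():
    mailbox = poplib.POP3_SSL(HOST)
    mailbox.user(USER)
    mailbox.pass_(PASSWORD)
    _, listings, _ = mailbox.list()
    for entry in listings:
        msg_num = int(entry.split()[0])
        _, lines, _ = mailbox.retr(msg_num)
        msg = email.message_from_bytes(b"\r\n".join(lines))
        for part in msg.walk():
            filename = part.get_filename()
            if filename:  # an attachment worth analyzing
                yield filename, part.get_payload(decode=True)
    mailbox.quit()

for name, blob in collect_attachments():
    print(f"queueing {name} ({len(blob)} bytes) for the Analysis Center")
```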

All collected data (objects and metadata) are then transferred to the Analysis Center for processing using various methods (sandbox, AV scanning and adjustable YARA rules, checking file and URL reputations, vulnerability scanning, etc.) and archiving. It’s also possible to plug the system into our KSN cloud, or to keep things internal – with an internal copy of KpSN for better compliance.
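On those ‘adjustable YARA rules’: YARA rules are essentially pattern-matching signatures an analyst can tune to their own environment. Here’s a minimal sketch using the yara-python bindings – the rule itself is an invented example for illustration, not one of ours:

```python
# Minimal sketch of adjustable YARA rules: compile a rule and run it over an
# object delivered by the sensors. The rule below is a made-up illustration.
import yara

RULE_SOURCE = r"""
rule suspicious_dropper_example
{
    strings:
        $a = "cmd.exe /c powershell -enc" nocase
        $b = { 4D 5A 90 00 }          // PE header fragment
    condition:
        $a and $b
}
"""

rules = yara.compile(source=RULE_SOURCE)

def scan_object(data: bytes) -> bool:
    matches = rules.match(data=data)
    for m in matches:
        print(f"rule hit: {m.rule}")
    return bool(matches)

# e.g. scan_object(open("sample.bin", "rb").read())
```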

Once the big picture is assembled, it’s time for the next stage! KATA reveals suspicious activity and can inform the admins and the SIEM (Splunk, QRadar, ArcSight) about any unpleasantness detected. Even better – the longer the system works and the more data accumulates about the network, the more effective it is, since atypical behavior becomes easier to spot.
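And on informing the SIEM: one lowest-common-denominator transport that Splunk, QRadar and ArcSight all accept is syslog carrying a CEF-formatted event. A hedged Python sketch – the collector address, product names and event fields are all placeholders, not what KATA actually emits:

```python
# Sketch: forward a detection event to a SIEM as a CEF message over syslog.
# The collector address, product fields and event details are all placeholders.
import logging
import logging.handlers

SIEM_COLLECTOR = ("siem.example.local", 514)  # placeholder syslog collector

logger = logging.getLogger("kata_sketch")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=SIEM_COLLECTOR))

def notify_siem(rule_name: str, severity: int, src_ip: str, dst_ip: str):
    # CEF: Version|Vendor|Product|Version|SignatureID|Name|Severity|Extension
    cef = (
        f"CEF:0|ExampleVendor|ExampleSensor|1.0|{rule_name}|"
        f"Suspicious activity|{severity}|src={src_ip} dst={dst_ip}"
    )
    logger.info(cef)

notify_siem("anomalous_c2_beacon", severity=8,
            src_ip="10.0.0.23", dst_ip="203.0.113.50")
```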

More details on how KATA works… here.

Ah yes; nearly forgot… how much does all this cost?

Well, there’s no simple answer to that one. The price of the service depends on dozens of factors, including the size and topology of the corporate network, how the solution is configured, and how many accompanying services are used. One thing is clear though: the cost pales into insignificance if compared with the potential damage it prevents.

Best test scores – the fifth year running!

Quicker, more reliable, more techy, and of course the most modest…

… Yep, you guessed it, that’ll be us folks – YET AGAIN!

We’ve just been awarded Product of the Year once more by independent Austrian test lab AV-Comparatives. Scoring top @ AV-C is becoming a yearly January tradition: 2011, 2012, 2013, 2014, and now 2015! Hurray!


Now for a bit about how they determine the winner…

Read on: Five main criteria…

Cyber-news: Vulnerable nuclear power stations, and cyber-saber… control?

Herewith, a quick review of and comment on some ‘news’ – rather, updates – on what I’ve been banging on about for years! Hate to say ‘told you so’, but… TOLD YOU SO!

First.

(Random pic of) the Cattenom Nuclear Power Plant in France where, I hope, all is tip-top in terms of cybersecurity

I’ve been pushing for better awareness of problems of cybersecurity of industry and infrastructure for, er, let’s see, more than 15 years. There has of late been an increase in discussion of this issue around the world by state bodies, research institutes, the media and the general public; however, to my great chagrin, though there’s been a lot of talk, there’s still not been much in the way of real progress in actually getting anything done physically, legally, diplomatically, and all the other …lys. Here’s one stark example demonstrating this:

Earlier this week, Chatham House, the influential British think tank, published a report entitled ‘Cyber Security at Civil Nuclear Facilities: Understanding the Risks’. Yep, the title alone brings on goosebumps; but some of the details inside… YIKES.

I won’t go into those details here; you can read the report yourself – if you’ve plenty of time to spare. I will say here that the main thrust of the report is that the risk of a cyberattack on nuclear power plants is growing all around the world. UH-OH.

The report is based exclusively on interviews with experts. Yes, meaning no primary referenceable evidence was used. Hmmm. A bit like someone trying to explain the contents of an erotic movie – it doesn’t really compare to watching the real thing. Still, I guess this is to be expected: this sector is, after all, secret the whole world over.

All the same, now let me describe the erotic movie from how it was described to me (through reading the report)! At least, let me go through its main conclusions – all of which, if you really think about them, are apocalyptically alarming:

  1. Physical isolation of computer networks of nuclear power stations doesn’t exist: it’s a myth (note, this is based on those stations that were surveyed, whichever they may be; nothing concrete). The Brits note that VPN connections are often used at nuclear power stations – often by contractors; they’re often undocumented (not officially declared), and are sometimes simply forgotten about while actually staying fully alive and ready for use [read: abuse].
  2. A long list of industrial systems connected to the Internet can be found via search engines like Shodan (see the sketch after this list).
  3. Where physical isolation may exist, it can still be easily gotten around with the use of USB sticks (as in Stuxnet).
  4. Throughout the whole world the atomic energy industry is far from keen on sharing information on cyber-incidents, making it tricky to accurately understand the extent of the security situation. Also, the industry doesn’t collaborate much with other industries, meaning it doesn’t learn from their experience and know-how.
  5. To cut costs, regular commercial (vulnerable) software is increasingly used in the industry.
  6. Many industrial control systems are ‘insecure by design’. Plus patching them without interrupting the processes they control is very difficult.
  7. And much more besides in the full 53-page report.
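Regarding point 2, here’s roughly what such a search looks like – a hedged Python sketch using the shodan library with a placeholder API key, querying (for example) for anything answering on the standard Modbus/TCP port, 502:

```python
# Sketch: find Internet-exposed industrial kit via Shodan's search API.
# The API key is a placeholder; port 502 is the standard Modbus/TCP port.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")  # placeholder key

results = api.search("port:502")  # Modbus devices reachable from the Internet
print(f"total exposed hosts reported: {results['total']}")
for match in results["matches"][:10]:
    print(match["ip_str"],
          match.get("org", "unknown org"),
          match.get("location", {}).get("country_name", "unknown country"))
```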

These scary facts and details are hardly news for IT security specialists. Still, let’s hope that high-profile publications such as this one will start to bring about change. The main thing at present is for all the respective software to be patched asap, and for industrial IT security in general to be bolstered to a safe level before a catastrophe occurs – not after.

Among other things, the report recommends promoting ‘secure by design’ industrial control systems. Hear hear! We’re totally in support of that one! Our secure OS is one such initiative. To make industrial control systems, including SCADA, impenetrable, requires an overhaul of the principles of cybersecurity on the whole. Unfortunately, the road towards that is long – and we’re only at the very beginning of it. Still, at least we’re all clear on which direction to head toward. Baby steps…

Second.

For several years I’ve also been pushing for the creation of a global agreement against cyberwar. Though we see signs of a better understanding of the logic of such an agreement on the part of all the respective parties – academics, diplomats, governments, international organizations, etc. – we’re seeing little real progress towards any such concrete agreement, just like with the securing of industrial systems. Still, at least the reining in of cyber-spying and cyberwar is on the agenda at last.

Photo: Michael Reynolds/EPA. Source

For example, Barack Obama and Xi Jinping at the end of September agreed that their countries – the two largest economies in the world – won’t engage in commercial cyberspying on each other anymore. Moreover, the topic of cybersecurity dominated their joint press conference (together with a load of measures aimed at slowing climate change). Curiously, the thorny issues of political and military cyber-espionage weren’t brought up at all!

So. Does this represent a breakthrough? Of course not.

Still, again, at least this small step is in the right direction. There have also been rumors that Beijing and Washington are holding negotiations regarding an agreement on prohibiting attacks in cyberspace. At the September meeting of the leaders the topic wasn’t brought up, but let’s hope it will be soon. It would be an important, albeit symbolic, step.

Of course, ideally, such agreements would be signed in the future by all countries in the world, bringing the prospect of a demilitarized Internet and cyberspace that little bit closer. Yes, that would be the best scenario; however, for the moment, not the most realistic. Let’s just keep pushing for it.

The abracadabra of anonymous sources.

Who killed JFK?

Who’s controlling the Bermuda Triangle?

What’s the Freemasons’ objective?

Easy! For it turns out that answers to these questions couldn’t be more straightforward. All you have to do is add: ‘according to information from anonymous sources‘, and voila! — there’s your answer — to any question, about anything, or anyone. And the answers are all the more credible – not because of their… credibility – but because of the level of prestige commonly ascribed to the particular media outlet that broke the story.

Just recently, Reuters got a ‘world exclusive’ of jaw-dropping proportions in the antivirus world. The article, filled with sensational – false – allegations, claims Kaspersky Lab (KL) creates very specific, targeted malware, and distributes it anonymously to other anti-malware competitors, with the sole purpose of causing serious trouble for them and harming their market share. Oh yes. But they forgot to add that we conjure all this up during steamy banya sessions, after parking the bears we ride outside.

The Reuters story is based on information provided by anonymous former KL employees. And the accusations are complete nonsense, pure and simple.

Disgruntled ex-employees often say nasty things about their former employers, but in this case, the lies are just ludicrous. Maybe these sources managed to impress the journalist, but in my view publishing such an ‘exclusive’ – WITHOUT A SHRED OF EVIDENCE – is not what I understand to be good journalism. I’m just curious to see what these ‘ex-employees’ tell the media next time about us, and who might believe their BS.

The reality is that the Reuters story is a conflation of a number of facts with a generous amount of pure fiction.

In 2012-2013, the anti-malware industry suffered badly because of serious problems with false positives. And unfortunately, we were among the companies badly affected. It turned out to be a coordinated attack on the industry: someone was spreading legitimate software laced with malicious code targeting specifically the antivirus engines of many companies, including KL. It remains a mystery who staged the attack, but now I’m being told it was me! I sure didn’t see that one coming, and am totally surprised by this baseless accusation!

Here’s how it happened: in November 2012 our products produced false positives on several files that were in fact legitimate. These were the Steam client, Mail.ru game center, and QQ client. An internal investigation showed that these incidents occurred as the result of a coordinated attack by an unknown third party.

For several months prior to the incidents, through intra-industry information-exchange channels such as the VirusTotal website, our anti-malware research lab repeatedly received numerous slightly modified legitimate files of Steam, Mail.ru and QQ. The creator(s) of these files added pieces of malicious code to them.

Later we came to the conclusion that the attackers might have had prior knowledge of how different companies’ detection algorithms work and injected the malicious code precisely in a place where auto systems would search for it.

These newly received modified files were evaluated as malicious and stored in our databases. In total, we received several dozen legitimate files containing malicious code.

False positives started to appear once the legitimate owners of the files released updated versions of their software. The system compared the files to the malware database – which contained very similar files – and deemed the legitimate files malicious. After that, we upgraded our detection algorithms to avoid such detections.
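To illustrate the mechanics with a deliberately toy example – nothing like a real AV engine, just a Python sketch of similarity-based matching: once a slightly modified copy of a legitimate file sits in the database, a later clean update of that same file can look ‘similar enough’ to the malicious entry and get flagged.

```python
# Toy sketch of how similarity-based matching can produce a false positive when
# the 'malicious' database entry is really a legitimate file plus a small patch.
# Nothing here resembles a real AV engine; it only illustrates the mechanics.

def chunk_hashes(data: bytes, size: int = 16) -> set:
    """Hash fixed-size chunks of a file; the set is a crude similarity fingerprint."""
    return {hash(data[i:i + size]) for i in range(0, len(data), size)}

# A 'legitimate' file made of distinct blocks, and a tampered copy of it that an
# attacker submitted with a malicious snippet appended.
legit_v1 = b"".join(b"block%04d " % i for i in range(300))
tainted  = legit_v1 + b"<malicious payload marker>"

# The tampered copy lands in the malware database.
malware_db = {"tainted_sample_0001": chunk_hashes(tainted)}

def verdict(sample: bytes, threshold: float = 0.8) -> str:
    fingerprint = chunk_hashes(sample)
    for name, entry in malware_db.items():
        score = len(entry & fingerprint) / len(entry)
        if score >= threshold:
            return f"detected (similar to {name}, score {score:.2f})"
    return "clean"

# Later the vendor ships an update: almost identical to v1, one block changed.
legit_v2 = legit_v1.replace(b"block0007", b"patch0007")

print(verdict(tainted))   # detected - as intended
print(verdict(legit_v2))  # also detected - a false positive on a clean update
```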

Meanwhile the attacks continued through 2013 and we continued to receive modified legitimate files. We also became aware that our company was not the only one targeted by this attack: other industry players received these files as well and mistakenly detected them.

In 2013 there was a closed-door meeting among leading cybersecurity and other software industry players that also suffered from the attack – as well as vendors that were not affected by the problem but were aware of it. During that meeting the participants exchanged information about the incidents, tried to figure out the reasons behind them, and worked on an action plan. Unfortunately no breakthrough occurred, though some interesting theories regarding attribution were expressed. In particular, the participants of the meeting considered that some other AV vendor could be behind the attack, or that the attack was an attempt by an unknown but powerful malicious actor to adjust its malware in order to avoid detection by key AV products.

Accusations such as these are nothing new. As far back as the late nineties I’d take with me to press conferences a placard with the word ‘No!’ on it. It saved me so much time. I’d just point to it when every third question was: “Do you write viruses yourselves, for your product to then ‘cure’ the infections?” Oh yeah. Sure. And still today I get asked the same all the time. Do they really think an 18+ year-old business built 100% on trust would be doing such things?

It seems some folks just prefer to presume guilt until innocence is proven. I guess there’ll always be folks like that. C’est la vie. But I really do hope that people will see through these anonymous, silly and groundless accusations… What I can say for sure is that we’ll continue working very closely with the industry to make the digital world safer, and that our commitment and resolve to expose cyberthreats regardless of their source or origin won’t waver.


Your car controlled remotely by hackers: it’s arrived.

Every now and again (once every several years or so), a high-profile unpleasantness occurs in the cyberworld – some unexpected new maliciousness that fairly bowls the world over. For most ‘civilians’ it’s just the latest in a constant stream of seemingly inevitable troublesome cyber-surprises. As for my colleagues and me, we normally nod, wink, grimace, and raise the eyebrows à la Roger Moore among ourselves while exclaiming something like: ‘We’ve been expecting you Mr. Bond. What took you so long?’

For we’re forever studying and analyzing the main tendencies of the Dark Web so we can get an idea of who’s behind its murkiness and of the motivations involved; that way we can predict how things are going to develop.

Every time one of these new ‘unexpected’ events occurs, I normally find myself in the tricky position of having to give a speech (rather – speeches) along the lines of ‘Welcome to the new era‘. Trickiest of all is admitting I’m just repeating myself from a speech made years ago. The easy bit: I just have to update that old speech a bit by adding something like: ‘I did warn you about this; and you thought I was just scaremongering to sell product!’

Ok, you get it (no one likes being told ‘told you so’, so I’ll move on:).

So. What unpleasant cyber-unexpectedness is it this time? Actually, one affecting something close to my heart: the world of automobiles!

A few days ago WIRED published an article with an opening sentence that reads: ‘I was driving at 70 mph on the edge of downtown St. Louis when the exploit began to take hold.‘ Eek!

The piece goes on to describe a successful experiment in which hackers – sorry, security researchers – remotely ‘kill’ a car that’s too clever by half: they dissected (over months) the computerized Uconnect system of a Jeep Cherokee, eventually found a vulnerability, and then managed to seize control of the critical functions of the vehicle via the Internet – while the WIRED reporter was driving the vehicle on a highway! I kid you not folks. And we’re not talking a one-off ‘lab case’ here affecting one car. Nope, the hole the researchers found and exploited affects almost half a million cars. Oops – and eek! again.

Jeep Cherokee smart car remotely hacked by Charlie Miller and Chris Valasek. The image originally appeared in Wired

However, the problem of security of ‘smart’ cars is nothing new. I first ‘joked’ about this topic back in 2002. Ok, it was on April 1. But now it’s for real! You know what they say… Be careful what you wish for – er, joke about (there’s many a true word spoken in jest:).

Not only is the problem not new, it’s also quite logical that it’s becoming serious: manufacturers compete for customers, and as there’s hardly a customer left who doesn’t carry at all times a smartphone, it’s only natural that the car (the more expensive – the quicker) has steadily been transformed into its appendage (an appendage of the smartphone – not the user, just in case anyone didn’t understand me correctly).

More and more control functions of smart cars are now firmly in the domain of the smartphone. And Uconnect isn’t unique here; practically every large car manufacturer has its own similar technology, some more advanced than others: there’s Volvo On Call, BMW Connected Drive, Audi MMI, Mercedes-Benz COMAND, GM OnStar, Hyundai Blue Link and many others.

More and more convenience for the modern car-driving consumer – all well and good. The problem is though that in this manufacturers’ ‘arms race’ to try and outdo each other, critical IT security matters often go ignored.

Why? 

First, the manufacturers see being ahead of the Joneses as paramount: the coolest tech functionality via a smartphone sells cars. ‘Security aspects? Let’s get to that later, eh? We need to roll this out yesterday.’

Second, remote control cars – it’s a market with good prospects.

Third, throughout the auto industry there’s a tendency – still today! – to view all the computerized tech on cars as something separate, mysterious, faddy (yep!) and not really car-like, so no one high up in the industry has a genuine desire to ‘get their hands dirty’ with it; therefore, the brains applied to it are chronically insufficient to make the tech secure.

It all adds up to a situation where fancy motorcars are becoming increasingly hackable and thus stealable. Great. Just what the world needs right now.

What the…?

Ok. That’s the basic outline. Now for the technical background and detail to maybe get to know what the #*@! is going on here!…

Way back in 1985 Bosch developed CAN. No, not their compatriot avant-garde rockers (who’d been around since 1968), but a ‘controller area network’ – a ‘vehicle bus’ (onboard communications network), which interconnects and regulates the exchange of data among different devices – actually, those devices’ microcontrollers – directly, without a central computer.

For example, when the ‘AC’ button on the dashboard is pressed, the dashboard’s microcontroller sends a signal to the microcontroller of the air conditioner saying ‘turn on, the driver wants cooling down’. Or when the brake pedal is pressed, the microcontroller of the pedal mechanism sends an instruction to the brake pads to press up against the brake discs.

CAN stands for 'controller area network', a 'vehicle bus' which interconnects and regulates the exchange of data among different devices in a smart car

Put another way, the electronics system of a modern automobile is a peer-to-peer computer network – designed some 30 years ago. It gets better: despite the fact that over three decades CAN has been repeatedly updated and improved, it still doesn’t have any security functions! Maybe that’s to be expected – what extra security can be demanded of, say, a serial port? CAN too is a low level protocol and its specifications explicitly state that its security needs to be provided by the devices/applications that use it.
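To make that ‘no security functions’ point concrete, here’s a hedged Python sketch using the python-can library on a virtual SocketCAN bus (Linux ‘vcan0’). The arbitration ID and payload are invented for illustration; the point is that nothing in the protocol asks the sender to prove who it is:

```python
# Sketch: putting a frame on a CAN bus with python-can. Note what's missing:
# no authentication, no sender identity, no encryption - any node that can
# write to the bus can send any frame. The ID and payload here are made up.
import can

# A virtual SocketCAN interface (e.g. 'vcan0' on Linux) stands in for a real bus.
bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

# Pretend this is the 'driver pressed the AC button' message from the text:
# in reality each carmaker defines its own IDs and payload layouts.
msg = can.Message(arbitration_id=0x1A5,   # invented ID
                  data=[0x01],            # invented 'AC on' payload
                  is_extended_id=False)

bus.send(msg)
print("frame sent - no credentials were asked for, because CAN has none")
```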

Maybe they don’t read the manuals. Or maybe they’re too busy trying to stay ahead of competitors and come up with the best smart car features.

Whatever the reasons, the fundamental fact causing all the trouble remains: Some auto manufacturers keep squeezing onto CAN more and more controllers without considering basic rules of security. Onto one and the same bus – which has neither access control nor any other security features – they strap the entire computerized management system that controls absolutely everything. And it’s connected to the Internet. Eek!

Hooking up devices to the Internet isn't a good idea. Engineers should think twice before doing this

Just like on any big computer network (e.g., the Internet), cars too need a strict ‘division of trust’ for controllers. Operations on a car where there’s communication with the outside world – be it installation of an app on the media system from an online store, or sending car performance diagnostics to the manufacturer – need to be firmly and securely split from the engine control, the security and other critical systems.
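At its very simplest, that ‘division of trust’ could look like a gateway between the Internet-facing segment and the critical bus that forwards only a whitelist of message IDs and drops everything else. A hedged Python sketch continuing the python-can illustration above (interface names and IDs are again invented):

```python
# Sketch of a 'division of trust' gateway between an Internet-facing CAN
# segment and the critical powertrain bus: only whitelisted message IDs are
# forwarded; anything else from the infotainment side is dropped and logged.
# Interface names and IDs are invented for illustration.
import can

ALLOWED_TOWARDS_POWERTRAIN = {0x1A5, 0x2B0}  # e.g. climate and diagnostics requests

infotainment_bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
powertrain_bus   = can.interface.Bus(channel="vcan1", bustype="socketcan")

def gateway_loop():
    for msg in infotainment_bus:                     # iterate incoming frames
        if msg.arbitration_id in ALLOWED_TOWARDS_POWERTRAIN:
            powertrain_bus.send(msg)                 # trusted, pass it through
        else:
            print(f"dropped frame 0x{msg.arbitration_id:X} from untrusted segment")

if __name__ == "__main__":
    gateway_loop()
```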

If you show an IT security specialist a car, lots of functions of which can be controlled by, say, an Android app, he or she would be able to demonstrate in no time at all a dozen or so different ways to get round the ‘protection’ and seize control of the functions the app can control. Such an experiment would also demonstrate how the car isn’t all that different really from a bank account: bank accounts can be hacked with specially designed technologies, in their case with banking Trojans. But there is a further potential method that could be used to hack a car just like a bank account too: with the use of a vulnerability, like in the case of the Jeep Cherokee.

Any reasons to be cheerful?…

…There are some.

Now, the auto industry (and just about everyone else) seems to be well aware of the degree of seriousness of the problem of cybersecurity of its smart car sector (thanks to security researchers like those in the WIRED article, though some manufacturers are loath to show their gratitude openly).

A sign of this is how recently the US Alliance of Automobile Manufacturers announced the creation of an Information Sharing and Analysis Center, “that will serve as a central hub for intelligence and analysis, providing timely sharing of cyber threat information and potential vulnerabilities in motor vehicle electronics or associated in-vehicle networks.” Good-o. I just don’t see how they plan to get along without security industry folks involved.

And it’s not just the motor industry that’s now on its toes: hours (!) after the publication of the WIRED article (the timing was a coincidence, it was reported) new federal legislation in the US was introduced establishing standardization of motor industry technologies in the field of cybersecurity. Meantime, we’re hardly twiddling thumbs or sat on hands: we’re actively working with several auto brands, consulting them on how to get their smart-car cybersecurity tightened up proper.

So, as you can see, there is light at the end of the tunnel. However…

…However, the described cybersecurity issue isn’t limited just to the motor industry.

CAN and other standards like it are used in manufacturing, the energy sector, transportation, utilities, ‘smart houses’, even in the elevator in your office building – in short – EVERYWHERE! And everywhere it’s the same problem: the growth of functionality of all this new tech is hurtling ahead without taking security into account!

What seems more important is always improving the tech faster, making it better than the competition, giving it smartphone connectivity and hooking it up to the Internet. And then they wonder how it’s possible to control an airplane via its entertainment system!

What needs doing?

First things first, we need to move back to pre-Internet technologies, like propeller-driven aircraft with analog-mechanical control systems…

…Not :). No one’s planning on turning the clocks back, and anyway, it just wouldn’t work: the technologies of the past are slow, cumbersome, inefficient, inconvenient and… a lot less secure! Nope, there’s no going backwards. Only forwards!

In our era of polymers, biotechnologies and all-things-digital, movement forward is producing crazy results. Just look around you – and inside your pockets. Everything is moving, flying, being communicated, delivered and received, exchanged… all at vastly faster speeds than those of the past. Cars (and other vehicles) are only a part of that.

All that does make life more comfortable and convenient, and digitization is solving many old problems of reliability and security. But alas, at the same time it’s creating new problems. And if we keep galloping forward at breakneck speed, without looking back, improvising as we hurtle along to get the very best functionality, well, in the end there are going to be unpredictable – even fatal – consequences. A bit like how it was with the Zeppelin.

There is an alternative – a much better one: what we need are industry standards, a new, modern architecture, and a responsible attitude to the development of features – with security taken into account as a priority.

In all, the WIRED article has shown us a very interesting investigation. It will be even more interesting seeing how things progress in the industry from here. Btw, at the Black Hat conference in Vegas in August there’ll be a presentation by the authors of the Jeep hack – that’ll be something worth following…


PS: Call me retrogressive (in fact I’m just paranoid:), but no matter how smart the computerization of a car, I’d straight away just switch it all off – if there was such a possibility. Of course, there isn’t. There should be: a button, say, next to the hazard lights’ button: ‘No Cyber’!…

…PPS: ‘Dream on, Kasper’, you might say. And perhaps you’d be right: soon, the way things are heading, a car without a connection to the ‘cloud’ won’t start!

PPPS: But the cloud (and all cars connected to it) will soon enough be hacked via some ever-so crucial function, like facial recognition of the driver to set the mirror and seat automatically.

PPPPS: Then cars will be given away for free, but tied to a particular filling station network – er, digital network – with pop-ups appearing right on the windscreen. During the ad-break control will be taken over and put into automatic Google mode.

PPPPPS: What else can any of you bright sparks add to this stream-of-consciousness brainstorming-rambling? :)