Cybersecurity – the new dimension of automotive quality.

Quite a lot of folks seem to think that the automobile of the 21st century is a mechanical device. Sure, it has added electronics for this and that, some more than others, but still, at the end of the day – it’s a work of mechanical engineering: chassis, engine, wheels, steering wheel, pedals… The electronics – ‘computers’ even – merely help all the mechanical stuff out. They must do – after all, dashboards these days are a sea of digital displays, with hardly any analog dials to be seen at all.

Well, let me tell you straight: it ain’t so!

A car today is basically a specialized computer – a ‘cyber-brain’, controlling the mechanics-and-electrics we traditionally associate with the word ‘car’ – the engine, the brakes, the turn indicators, the windscreen wipers, the air conditioner, and in fact everything else.

In the past, for example, the handbrake was 100% mechanical. You’d wrench it up – with your ‘hand’ (imagine?!), and it would make a kind of grating noise as you did. Today you press a button. 0% mechanics. 100% computer controlled. And it’s like that with almost everything.

Now, most folks think that a driverless car is simply a car driven by a computer. But if there’s a human behind the wheel of a new car today, then it’s the human doing the driving (not a computer), ‘of course, silly!’

Here I go again…: that ain’t so either!

With most modern cars today, the only difference between those that drive themselves and those that are driven by a human is that in the latter case the human controls the onboard computers, while in the former the computers all over the car are controlled by another, main, central, very smart computer, developed by companies like Google, Yandex, Baidu and Cognitive Technologies. This computer is given the destination, observes all that’s going on around it, and then decides how to navigate its way to the destination – at what speed, by which route, and so on – based on mega-smart algorithms, updated by the nanosecond.

A short history of the digitalization of motor vehicles

So when did this move from mechanics to digital start?

Some experts in the field reckon the computerization of the auto industry began in 1955 – when Chrysler started offering a transistor radio as an optional extra on one of its models. Others, perhaps thinking that a radio isn’t really an automotive feature, reckon it was the introduction of electronic ignition, ABS, or electronic engine-control systems that ushered in automobile-computerization (by Pontiac, Chrysler and GM in 1963, 1971 and 1979, respectively).

No matter when it started, what followed was for sure more of the same: more electronics, then more digital – and the line between the two is blurry. But I consider the start of the digital revolution in automotive technologies to be February 1986, when, at the Society of Automotive Engineers convention, the company Robert Bosch GmbH presented to the world its digital network protocol for communication among the electronic components of a car – CAN (controller area network). And you have to give those Bosch guys their due: this protocol is still fully relevant today – used in practically every vehicle the world over!

// Quick nerdy post-CAN-introduction digi-automoto backgrounder: 

The Bosch boys gave us various types of CAN buses (low-speed, high-speed, CAN FD), while today there’s also FlexRay (transmission), LIN (low-speed bus), optical MOST (multimedia), and finally, on-board Ethernet (today – 100 Mbps; in the future – up to 1 Gbps). When cars are designed these days, various communications protocols are applied. There’s drive-by-wire (electrical systems instead of mechanical linkages), which has brought us: electronic gas pedals, electronic brake pedals (used by Toyota, Ford and GM in their hybrids and electric vehicles since 1998), electronic handbrakes, electronic gearboxes, and electronic steering (first used by Infiniti in its Q50 in 2014).
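// And just to make the CAN idea a bit more concrete for the curious: below is a tiny sketch of talking to a CAN bus from a computer using the open-source python-can library. It assumes a Linux SocketCAN interface named can0 is already set up, and the arbitration ID and data bytes are made up purely for illustration – not any real car’s messages!

```python
# Minimal sketch: send one CAN frame and listen for a reply using the
# open-source python-can library. Assumes a Linux SocketCAN interface
# 'can0' exists; the ID and payload below are invented for illustration.
import can

def send_and_listen():
    with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
        msg = can.Message(arbitration_id=0x1A2,            # hypothetical ID
                          data=[0x11, 0x22, 0x33, 0x44],   # hypothetical payload
                          is_extended_id=False)
        bus.send(msg)
        reply = bus.recv(timeout=1.0)   # wait up to 1s for any frame on the bus
        if reply is not None:
            print(f"ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")

if __name__ == "__main__":
    send_and_listen()
```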

BMW buses and interfaces

Read on…

The Catcher in the YARA – predicting black swans.

It’s been a long, long time since humanity has had a year like this one. I don’t think I’ve ever known a year with such a high concentration of black swans of various types and forms. And I don’t mean the kind with feathers. I’m talking about unexpected events with far-reaching consequences, as per the theory of Nassim Nicholas Taleb, published in his book The Black Swan: The Impact of the Highly Improbable in 2007. One of the main tenets of the theory is that, with hindsight, surprising events that have occurred seem ‘obvious’ and predictable; however, before they occur, hardly anyone actually predicts them.

Cybersecurity experts have ways of dealing with ambiguity and predicting black swans with YARA

Example: this ghastly virus that’s had the world in lockdown since March. It turns out there’s a whole extended family of such viruses – several dozen coronaviruses, and new ones are found regularly. Cats, dogs, birds and bats all get them. Humans get them too; some cause common colds; others… So surely vaccines need to be developed against them, as they have been for other deadly viruses like smallpox, polio, whatever. Sure, but that doesn’t always help a great deal. Look at flu – still no vaccine that reliably inoculates folks, after how many centuries? And anyway, to even start to develop a vaccine you need to know what you’re looking for, and that is more art than science, apparently.

So, why am I telling you this? What’s the connection to… it’s inevitably gonna be either cybersecurity or exotic travel, right?! Today – the former ).

Now, among the most dangerous cyberthreats in existence are zero-days – rare, unknown (to cybersecurity folks et al.) vulnerabilities in software, which can be exploited to cause oh-my-grotesque large-scale awfulness and damage – yet they often remain undiscovered up until (sometimes even after) the moment they’re exploited to inflict that awfulness.

However, cybersecurity experts have ways of dealing with unknown-cyber-quantities and predicting black swans. And in this post I want to talk about one such way: YARA.

GReAT’s Costin Raiu examined Hacking Team’s emails and put together out of practically nothing a YARA rule, which detected a zero-day exploit

Briefly, YARA helps malware research and detection by identifying files that meet certain conditions and providing a rule-based approach to creating descriptions of malware families based on textual or binary patterns. (Ooh, that sounds complicated. See the rest of this post for clarification.:) Thus, it’s used to search for similar malware by identifying patterns. The aim: to be able to say: ‘it looks like these malicious programs have been made by the same folks, with similar objectives’.
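// For the curious, here’s roughly what that looks like in practice – a toy sketch using the yara-python module, with a couple of entirely made-up marker strings and a byte pattern standing in for the ‘textual or binary patterns’ of some hypothetical malware family:

```python
# Toy illustration of a YARA rule, compiled and run via yara-python.
# The marker strings and byte pattern are invented for illustration.
import yara

RULE = r"""
rule Suspected_FamilyX
{
    strings:
        $s1 = "connect_to_c2" ascii        // hypothetical marker string
        $s2 = "xor_decrypt_cfg" ascii      // another made-up marker
        $op = { 6A 40 68 00 30 00 00 }     // example byte pattern
    condition:
        2 of them
}
"""

rules = yara.compile(source=RULE)

# Scan a file (the path is just a placeholder) and report any rule hits.
for match in rules.match("./sample.bin"):
    print("matched rule:", match.rule)
```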

Ok, let’s take another metaphor: like a black swan, another water-based one; this time – the sea…

Let’s say a network you (as a cyber-sleuth) are studying (= examining for the presence of suspicious files/directories) is the ocean, which is full of thousands of different kinds of fish, and you’re an industrial fisherman out on the ocean in your ship casting off huge drift nets to catch the fish – but only certain breeds of fish (= malware created by particular hacker groups) are of interest to you. Now, the drift net is special: it has ‘compartments’ into which the fish swim according to their particular breed (= malware characteristics). Then, at the end of the shift, what you have is a lot of caught fish, all compartmentalized, and some of those fish will be relatively new, never-seen-before fish (new malware samples) about which you know practically nothing, but they’re in compartments labeled, say, ‘Looks like Breed X’ (hacker group X) and ‘Looks like Breed Y’ (hacker group Y).

We have a case that fits the fish/fishing metaphor perfectly. In 2015, our YARA guru and head of GReAT, Costin Raiu, went into full-on cyber-Sherlock mode to find an exploit for Microsoft’s Silverlight software. You really should read the article at the end of that ‘case’ link but, very briefly: what Costin did was carefully examine certain hacker-leaked email correspondence (of ‘Hacking Team’: hackers hacking hackers; go figure!) published in a detailed news article, and put together – out of practically nothing – a YARA rule, which went on to help find the exploit and thus protect the world from all sorts of mega-trouble.

So, about these YARA rules…

Graduates receive a certificate confirming their new status as a YARA ninja. Previous graduates say it really does help in their professional careers

We’ve been teaching the art of creating YARA rules for years. And since the cyberthreats YARA helps uncover are rather complex, we always ran the courses in person – offline – and only for a narrow group of top cyber-researchers. Of course, since March, offline training has been tricky due to lockdown; however, the need for education has hardly gone away, and indeed we’ve seen no dip in interest in our courses. This is only natural: the cyber-baddies continue to think up ever more sophisticated attacks – even more so under lockdown. Accordingly, keeping our special YARA know-how to ourselves during lockdown looked just plain wrong. Therefore, we’ve (i) transferred the training format from offline to online, and (ii) made it accessible to anyone who wants to do it. It’s a paid course, for sure, but the price for a course at such a level (the very highest:) is very competitive and in line with the market.

Introducing! ->

Read on…

Into resource-heavy gaming? Check out our gaming mode.

Nearly 30 years ago, in 1993, the first incarnation of the cult computer game Doom appeared. And it was thanks to it that the – then few (imagine!) – home computer owners found out that the best way of protecting yourself from monsters is to use a shotgun and a chainsaw ).

Now, I was never big into gaming (there simply wasn’t enough time – far too busy:); however, occasionally, after a long day’s slog, colleagues and I would spend an hour or so playing first-person shooters, hooked up together on our local network. I even recall corporate Duke Nukem championships – whose results tables would be discussed at lunch in the canteen, with bets even being made/taken as to who would win! Thus, gaming – it was never far away.

Meanwhile, our antivirus appeared – complete with pig squeal (turn on English subs – bottom-right of video) to give fright to even the most fearsome of cyber-monsters. The first three releases went just fine. Then came the fourth. It came with a great many new technologies against complex cyberthreats, but we hadn’t thought through the architecture well enough – and we didn’t test it sufficiently either. The main issue was the way it hogged resources, slowing down computers. And software generally back then – and gaming in particular – was becoming more and more resource-intensive by the day; the last thing anyone needed was antivirus bogarting processor and RAM too.

So we had to act fast. Which we did. And then just two years later we launched our legendary sixth version, which surpassed everyone on speed (also reliability and flexibility). And for the last 15 years our solutions have been among the very best on performance.

Alas, folks tend to believe a leopard never changes its spots. A short-term issue affecting computer performance turned into a myth – and it’s still believed by some today. Competitors were of course happy to see this myth grow… to mythical proportions; we weren’t.

But what has any of this K trip down memory lane got to do with Doom? Well…

Read on…


An early-warning system for cyber-rangers (aka – Adaptive Anomaly Control).

Most probably, if you’re normally office-based, your office right now is still rather – or completely – empty, just like ours. At our HQ the only folks you’ll see are the occasional security guards, and the only noise you’ll hear is the hum of the cooling systems of our heavily-loaded servers given that everyone’s hooked up and working from home.

You’d never imagine that, unseen, our technologies, experts and products are working 24/7 protecting the cyberworld. But they are. Meanwhile, the bad guys are up to new nasty tricks at the same time. Just as well, then, that we have an early-warning system in our cyber-protection collection of tools. But I’ll get to that in a bit…

The role of an IT security guy or girl in some ways resembles that of a forest ranger: to catch the poachers (malware) and neutralize the threat they pose for the forest’s dwellers, first of all you need to find them. Of course, you could simply wait until a poacher’s rifle goes off and run toward where the sound came from, but that doesn’t exclude the possibility that you’ll be too late and that the only thing you’d be able to do is clear up the mess.

You could go full-paranoiac: placing sensors and video cameras all over the forest, but then you might find yourself reacting to any and every rustle that’s picked up (and soon losing sleep, then your mind). But when you realize that poachers have learned to hide really well – in fact, to not leave any trace at all of their presence – it then becomes clear that the most important aspect of security is the ability to separate suspicious events from regular, harmless ones.

Increasingly, today’s cyber-poachers are camouflaging themselves with the help of perfectly legitimate tools and operations.

A few examples: opening a document in Microsoft Office, a system administrator being granted remote access, the launch of a script in PowerShell, and the activation of a data-encryption mechanism. Then there’s the new wave of so-called fileless malware, which leaves literally zero traces on a hard drive – and that seriously limits the effectiveness of traditional approaches to protection.

Examples: (i) the Platinum threat actor used fileless technologies to penetrate the computers of diplomatic organizations; (ii) office documents with a malicious payload were used for phishing infections in the operations of the DarkUniverse APT; and there are plenty more. One more example: the fileless ransomware-encryptor ‘Mailto’ (aka Netwalker), which uses a PowerShell script to load malicious code directly into the memory of trusted system processes.

Now, if traditional protection isn’t up to the task, it’s possible to try to forbid users a whole range of operations, and to introduce tough policies on access to and usage of software. However, both the users and the bad guys will probably eventually find ways around the prohibitions (just like the prohibition of alcohol was always gotten around:).

Much better would be to find a solution that can detect anomalies in standard processes and inform the system administrator about them. But what’s crucial is for such a solution to be able to learn how to automatically – and accurately – determine the degree of ‘suspiciousness’ of processes in all their great variety, so as not to torment the system administrator with constant cries of ‘wolf!’

Well – you’ve guessed it! – we have such a solution: Adaptive Anomaly Control, a service built upon three main components – rules, statistics and exceptions.
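// To give a feel for the idea (and no, this isn’t our actual engine – just a toy sketch with invented names and thresholds): build up per-process statistics, apply a rule that flags activity far outside the learned norm, and keep an exceptions list for processes an administrator has explicitly vouched for.

```python
# Conceptual toy, not the real product: per-process baseline statistics,
# a rule flagging big deviations, and an exceptions list. All names and
# thresholds are invented for illustration.
from collections import defaultdict
from statistics import mean, pstdev

class ToyAnomalyControl:
    def __init__(self, exceptions=None):
        self.history = defaultdict(list)          # statistics per process
        self.exceptions = set(exceptions or [])   # admin-approved processes

    def observe(self, process, events_per_hour):
        self.history[process].append(events_per_hour)

    def is_suspicious(self, process, events_per_hour, sigmas=3.0):
        if process in self.exceptions:            # exception: never alert
            return False
        past = self.history[process]
        if len(past) < 10:                        # rule: need a baseline first
            return False
        mu, sd = mean(past), pstdev(past)
        # Rule: alert only when activity is far outside the learned norm.
        return sd > 0 and abs(events_per_hour - mu) > sigmas * sd

ctl = ToyAnomalyControl(exceptions={"backup_agent.exe"})
for count in [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]:      # a quiet baseline
    ctl.observe("winword.exe", count)
print(ctl.is_suspicious("winword.exe", 250))       # sudden spree -> True
```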

Read on…

Playing hide and seek (er, catch) – with fileless malware.

Malicious code… – it gets everywhere…

It’s a bit like a gas, which will always fill the space it finds itself in – only different: it will always get through ‘holes’ (vulnerabilities) in a computer system. So our job (rather – one of them) is to find such holes and bung them up. Our goal is to do this proactively; that is, before malware has discovered them. And if malware does find holes – we’re waiting, ready to zap it.

In fact it’s proactive protection and the ability to foresee the actions of attackers and create a barrier in advance that distinguishes genuinely excellent, hi-tech cybersecurity from marketing BS.

Today I want to tell you about how our proactive protection secures against yet another particularly crafty kind of malware. Yes, I want to tell you about something called fileless (aka bodiless) malicious code – a dangerous breed of ghost-malware that’s learned to use architectural drawbacks in Windows to infect computers. And also about our patented technology that fights this particular cyber-disease. And I’ll do so just as you like it: complex things explained simply, in the light, gripping manner of a cyber-thriller with elements of suspense ).

First off, what does fileless mean?

Well, fileless code, once it’s gotten inside a computer system, doesn’t create copies of itself in the form of files on disk – thereby avoiding detection by traditional methods, for example with an antivirus monitor.

So, how does such ‘ghost malware’ exist inside a system? Actually, it resides in the memory of trusted processes! Oh yes. Oh eek.

In Windows (actually, not only Windows), there has always existed the ability to execute dynamic code, which, in particular, is used for just-in-time compilation; that is, turning program code into machine code not straight away, but as and when it’s needed. This approach increases the execution speed of some applications. And to support this functionality, Windows allows applications to place code into process memory (or even into the memory of another, trusted process) and execute it.

Hardly a great idea from the security standpoint, but what can you do? It’s how millions of applications written in Java, .NET, PHP, Python and other languages and for other platforms have been working for decades.
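// To make ‘place code into memory and execute it’ a little less abstract, here’s a tiny, harmless, Windows-only sketch of the legitimate mechanism itself, using documented APIs via Python’s ctypes. The ‘machine code’ simply returns 42 – nothing more sinister:

```python
# Harmless, Windows-only sketch of dynamic code execution: ask the OS
# for executable memory, copy in a few bytes of machine code, call it.
# The code is just 'mov eax, 42 ; ret'.
import ctypes

MEM_COMMIT_RESERVE = 0x3000
PAGE_EXECUTE_READWRITE = 0x40
CODE = b"\xb8\x2a\x00\x00\x00\xc3"   # mov eax, 42 ; ret

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = (ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_uint32, ctypes.c_uint32)

buf = kernel32.VirtualAlloc(None, len(CODE),
                            MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
ctypes.memmove(buf, CODE, len(CODE))

func = ctypes.CFUNCTYPE(ctypes.c_int)(buf)   # treat the buffer as a function
print(func())                                # -> 42
```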

Predictably, the cyberbaddies took advantage of the ability to use dynamic code, inventing various methods to abuse it. And one of the most convenient and therefore widespread methods they use is something called reflective PE injection. A what?! Let me explain (it is, actually, rather interesting, so do please bear with me:)…

Launching an application by clicking on its icon – fairly simple and straightforward, right? It does look simple, but actually, under the hood, all sorts goes on: a system loader is called up, which takes the respective file from disk, loads it into memory and executes it. And this standard process is controlled by antivirus monitors, which check the application’s security on the fly.

Now, when there’s a ‘reflection’, code is loaded bypassing the system loader (and thus also bypassing the antivirus monitor). The code is placed directly into the memory of a trusted process, creating a ‘reflection’ of the original executable module. Such a reflection can be executed like a real module loaded by the standard method, but it isn’t registered in the list of modules and, as mentioned above, it doesn’t have a file on disk.

What’s more, unlike other techniques for injecting code (for example, via shellcode), a reflective injection allows the creation of functionally advanced code in high-level programming languages and standard development frameworks with hardly any limitations. So what you get is: (i) no files, (ii) concealment behind a trusted process, (iii) invisibility to traditional protective technologies, and (iv) a free hand to cause some havoc.

So naturally, reflective injections became a mega-hit with developers of malicious code: at first they appeared in exploit packs, then cyber-spies got in on the game (for example, Lazarus and Turla), then advanced cybercriminals (as it’s such a handy way of executing complex code!), then petty cybercriminals.

Now, on the other side of the barricades, finding such a fileless infection is no walk in the cyber-park. So it’s no wonder really that most cybersecurity brands aren’t too hot at it. Some can hardly do it at all.

Read on…

Cyber-tales from the dark side: unexpected vulnerabilities, hacking-as-a-service, and space-OS.

Our first month of summer in lockdown – done. And though the world seems to be opening up steadily, we at K decided to take no chances – remaining practically fully in work-from-home mode. But that doesn’t mean we’re working any less effectively: just as well, since the cybercriminals sure haven’t been furloughed. Still, there’ve been no major changes to the global picture of threats of late. All the same, those cyberbaddies, as always, have been pulling cybertricks out of their hats that fairly astonish. So here are a few of them from last month.

A zero-day in ‘super-secure’ Linux Tails 

Facebook sure knows how to spend it. Turns out it spent a six-figure sum when it sponsored the creation of a zero-day exploit of a vulnerability in the Tails OS (= Linux, specially tuned for heightened privacy) for an FBI investigation, which led to the catching of a pedophile. It was known for some time beforehand that this deranged paranoiac used this particular – particularly secure – operating system. FB’s first step was to use its strength in mapping accounts to connect all the ones the criminal used. However, getting from that cyber-victory to a physical postal address didn’t work out. Apparently, they then ordered the development of an exploit for a video-player application. This choice of software made sense, as the sex-pest nutcase would ask his victims for videos and would probably watch them on the same computer.

It’s been reported that the developers at Tails weren’t informed about the vulnerability being exploited, but it later turned out that it had already been patched. Employees of the company are keeping shtum about all this, but what’s clear is that a vulnerability-to-order isn’t the best publicity. There does remain some hope that the exploit was a one-off for a single, particularly nasty low-life, and that it won’t be repeated against regular users.

The takeaway: no matter how super-mega-secure a Linux-based project claims to be, there’s no guarantee there are no vulnerabilities in it. To be able to guarantee such a thing, the basic working principles and the architecture of the whole OS need overhauling. Erm, yes, actually, this is a cheekily good opportunity to say hi to this ).

Hacking-as-a-service 

Here’s another tale from the tailor-made-cyber-nastiness side. The (thought-to-be Indian) Dark Basin cybercriminal group has been caught with its hand in the cyber-till. This group is responsible for more than a thousand hacks-to-order. Targets have included bureaucrats, journalists, political candidates, activists, investors, and businessmen from various countries. Curiously, the hackers from Delhi used really simple, primitive tools: first they created phishing emails made to look like they were from a colleague or friend, cobbled together fake Google News updates on topics of interest to the target, and sent similar direct messages on Twitter. Then they sent emails and messages containing shortened links to credential-phishing websites that look like the genuine sites – and that was that: credentials stolen, then other things stolen. No complex malware or exploits needed! And btw: it looks like the initial information about what a victim was interested in always came from the party ordering the cyber-hit.

Now, cybercrime-to-order is popular and has been around for ages. In this case, though, the hackers took it to a whole other – conveyor-belt – level, churning out thousands of hits-to-order.


Read on…

Cyber hygiene: essential for fighting supply chain attacks.

Hi folks!

Quite often, technical matters that are as clear as day to techie-professionals are somewhat tricky to explain to non-techie-folks. Still, I’m going to have a go at doing just that here today. Why? Because it’s a darn exciting and amazingly interesting world! And who knows – maybe this read could inspire you to become a cybersecurity professional?!…

Let’s say you need to build a house. And not just a standard-format house, but something unique – custom-built to satisfy all your whims and wishes. First you need an architect who’ll draw up the design based on what you tell them; the design is eventually decided upon and agreed; project documentation appears, as does the contractor who’ll be carrying out the construction work; building inspectors keep an eye on quality; while at the same time interior designers draw up how things will look inside, again as per your say-so; in short – all the processes you generally need when constructing a built-to-order home. Many of the works are unique, as per your specific instructions, but practically everything uses standard materials and items: bricks, mortar, concrete, fixtures and fittings, and so on.

Well the same goes for the development of software.

Many of the works involved in development are also unique, requiring architects, designers, technical documentation, engineer-programmers… and often specific knowledge and skills. But in the process of development of any software, a great many standard building bricks – libraries – are used, which carry out all sorts of everyday functions. Just as you build a house’s walls with standard bricks, so it goes for software products: modules with all sorts of different functionalities use a great many standardized libraries [~= bricks].

Ok, that should now be clear to everyone. But where does cybersecurity come into all of this?

Well, digital maliciousness… it’s kinda the same as construction defects in house-building – which may be either trivial or critical.

Let’s say there’s some minor damage done to a completed house that’s ready to move into; that isn’t all that bad. You just remedy the issue: plaster over, re-paint, re-tile. But what if the issue is deep within the construction elements? Like toxic materials that were used in construction, as sometimes happened in the past? Yes, it can become expensive – er, painful.

Well the same goes for software. If a contagion attaches itself to the outside, it’s possible to get rid of it: lance it off, clean up the wound, get the software back on its feet. But if the digital contamination gets deep inside – into the libraries and modules [= bricks] out of which the final product [house] is built… then you’ve got some serious trouble on your hands. And it just so happens that finding such deep digital pestilence can be reeeaaally tricky; actually extracting the poison out of the working business process – more so.

That’s all a bit abstract; so how about some examples? Actually, there are plenty of those. Here are a few…

Even in the long-distant past, during the Windows 98 era, there was one such incident when the Chernobyl virus (also called CIH, or Spacefiller) found its way into the distributions of computer games of various developers – and from there it spread right round the world. A similar thing happened years later in the 2000s: a cyber-infection called Induc penetrated Delphi libraries.

Thus, what we have are cyberthreats attacking businesses from outside, but also the more serious threats from a different type of cyber-disease that manages to get inside the internal infrastructure of a software company and poison a product under development.

Let’s use another figurative example to explain all this – a trip to your local supermarket to get the week’s groceries in… during mask-and-glove-wearing, antiseptic-drenching lockdown! Yes, I’m using this timely example as I’m sure you’ll all know it rather well (unless you’re the Queen or some other VIP, or perhaps live off the land and don’t use supermarkets… but I digress).

So yes: you’ve grabbed the reusable shopping bags, washed your hands for 20 seconds with soap, donned the face mask, put the gloves on, and off you go. And that’s about it for your corona-protective measures. But once you’re at the supermarket you’re at the mercy of the good sense, social responsibility and sanitary measures of the supermarket itself, plus those of every single producer of all the stuff you can buy in it. Then there are all the delivery workers, packing workers, warehouse workers, drivers. And at any link in this long chain, someone could accidentally (or on purpose) sneeze right onto your potatoes!

Well it’s the same in the digital world – only magnified.

For the supply chain of modern-day ‘hybrid’ ecosystems of IT development is much, much longer, while at the same time we catch more than 300,000 brand-new cyber-maliciousnesses EVERY DAY! What’s more, the complexity of all that brand-new maliciousness is rising constantly. Trying to control how much hand-washing and mask-and-glove wearing is going on at every developer of every separate software component, plus how effective the cyber-protection systems of the numerous suppliers of cloud services are… – it’s all an incredibly difficult task. It’s even more difficult when a product used is open source, and its assembly is fashionably automated, done on the fly, and run with default trust settings.

All rather worrying. But when you also learn that, of late, attacks on supply chains happen to be among the most advanced cyber-evil around – it all gets rather yikes. Example: the ShadowPad group attacked financial organizations via a particular brand of server-infrastructure management software. Other sophisticated cybercriminals attack open-source libraries, while our industry colleagues have reminded us that developers are mostly unable to sufficiently verify that the components they install – which pull in various libraries – don’t contain malicious code.

Here’s another example: attacks on container libraries, like those of Docker Hub. On the one hand, using containers makes the development of apps and services more convenient and more agile. On the other, more often than not developers don’t build their own containers and instead download ready-made ones – and inside… much like a magician’s hat – there could be anything lurking. Like a dove, or your car keys that were in your pocket. Or a rabbit. Or Alien! :) ->
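// It’s no complete answer to supply-chain risk, of course, but as a minimal illustration of the ‘trust, but verify’ hygiene we’re talking about, here’s a sketch that refuses to use a downloaded component unless its checksum matches a value pinned in advance. The file name and the expected digest are placeholders:

```python
# Minimal 'trust, but verify' sketch: refuse to build with a downloaded
# component unless its SHA-256 digest matches one pinned in advance.
# The file name and expected digest are placeholders.
import hashlib
import sys

PINNED = {
    "third_party_component.tar.gz":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == PINNED.get(path)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "third_party_component.tar.gz"
    if not verify(target):
        sys.exit(f"{target}: checksum mismatch - refusing to use it")
    print(f"{target}: checksum OK")
```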

Read on…

Which hacker group is attacking my corporate network? Don’t guess – check!

Around four years ago cybersecurity became a pawn in geopolitical games of chess. Politicians of all stripes and nationalities wag fingers at and blame each other for hostile cyber-espionage operations, while at the same time – with the irony seemingly lost on them – bigging-up their own countries’ cyber-weapons – er, tools – that are also used in offensive operations. And caught in the crossfire of these geopolitical shenanigans are independent cybersecurity companies, which have the ability and the gall – er, guts – to uncover all this very dangerous tomfoolery.

But, why? It’s all very simple…

First, ‘cyber’ is still really quite the cool/romantic/sci-fi/Hollywood/glamorous term it appears to have always been since its inception. It also sells – including newspapers – er, online newspaper subscriptions. It’s popular – including with politicians: it’s a handy distraction – given its coolness and popularity – when distraction is something that’s needed, which is often.

Second, ‘cyber’ is really techy – most folks don’t understand it. As a result, the media, when covering anything to do with it, and always seeking more clicks on their stories, are able to print all manner of things that aren’t quite true (or completely false), but few readers notice. So what you get are a lot of stories in the press stating that this or that country’s hacker group is responsible for this or that embarrassing/costly/damaging/outrageous cyberattack. But can any of it be believed?

We stick to technical attribution – it’s our duty and what we do as a business

Generally, it’s hard to know if it can be believed or not. Given this, is it actually possible to accurately attribute a cyberattack to this or that nation state or even organization?

There are two aspects to the answer…

From the technical standpoint, cyberattacks possess an array of particular characteristics, but impartial, systematic analysis thereof can only go so far in determining how much an attack looks like the work of this or that hacker group. However, whether this or that hacker group might belong to… Military Intelligence Sub-Unit 233, the National Advanced Defense Research Projects Group, or the Joint Strategic Capabilities and Threat Reduction Taskforce (none of which exist, to save you Googling them:)… that is a political question, and there the likelihood of manipulation of the facts is near 100%. It turns from technical, evidence-based, accurate conclusions into… reading palms or coffee grounds to tell fortunes. So we leave that to the press. We stay well away. Meanwhile, curiously, the percentage of political flies dousing themselves in the fact-based ointment of pure cybersecurity grows severalfold with the approach of key political events. Oh, just like the one that’s scheduled to take place in five months’ time!

For knowing the identity of one’s attacker makes fighting it much easier: an incident response can be rolled out smoothly and with minimal risk to the business

So yes, political attribution is something we avoid. We stick to the technical side; in fact – it’s our duty and what we do as a business. And we do it better than anyone, I might modestly add ). We keep a close watch on all large hacker groups and their operations (600+ of them), and pay zero attention to what their affiliation might be. A thief is a thief, and should be in jail. And now, finally, 30+ years since I started out in this game, after collecting non-stop so much data about digital wrongdoing, we feel we’re ready to start sharing what we’ve got – in the good sense ).

Just the other day we launched an awesome new service aimed squarely at cybersecurity experts. It’s called the Kaspersky Threat Attribution Engine (KTAE). What it does is analyze suspicious files and determine which hacker group a given cyberattack comes from. For knowing the identity of one’s attacker makes fighting it much easier: informed countermeasure decisions can be made, a plan of action can be drawn up, priorities can be set, and on the whole an incident response can be rolled out smoothly and with minimal risk to the business.
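// And before you ask: no, the snippet below is emphatically not how KTAE works – its insides stay with us. It’s just a toy sketch of the general idea behind technical attribution: extract simple features from a sample and measure their overlap with features of samples already attributed to known groups. All the ‘group’ data here is invented.

```python
# Toy sketch of similarity-based attribution (NOT the KTAE internals):
# extract printable strings as crude features and compare them with
# feature sets of samples already attributed to (invented) groups.
import re

def string_features(data: bytes, min_len: int = 6) -> set:
    return {m.group().decode() for m in
            re.finditer(b"[ -~]{%d,}" % min_len, data)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

KNOWN = {   # hypothetical 'already attributed' feature sets
    "GroupX": {"cmd_exfil_v2", "xor_cfg_loader", "beacon/agent.php"},
    "GroupY": {"ransom_note.txt", "vssadmin delete shadows"},
}

def attribute(sample: bytes):
    feats = string_features(sample)
    scores = {g: jaccard(feats, known) for g, known in KNOWN.items()}
    return max(scores.items(), key=lambda kv: kv[1])

sample = b"\x00\x01xor_cfg_loader\x00junk\x00beacon/agent.php\x00"
print(attribute(sample))   # -> ('GroupX', 0.666...)
```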

So how do we do it?

Read on…

Cyber-tales update from the quarantined side: March 92, 2020.

Most folks around the world have been in lockdown now for around three months! And you’ll have heard mention of a certain movie over those last three months, I’m sure, plenty; but here’s a new take on it: Groundhog Day is no longer a fun film! Then there’s the ‘damned if you’re good, damned if you’re bad’ thing with the weather: it stays bad and wet and wintry: that’s an extra downer for everyone (in addition to lockdown); it gets good and dry and summery: that’s a downer for everyone also, as no one can go out for long to enjoy it!

Still, I guess that maybe it’s some consolation that most all of us are going through the same thing sat at home. Maybe. But that’s us – good/normal folks. What about cyber-evil? How have they been ‘coping’, cooped up at home? Well, the other week I gave you some stats and trends about that. Today I want to follow that up with an update – for, yes, the cyber-baddies move fast. // Oh, and btw – if you’re interested in more cyber-tales from the dark side, aka I-news, check out this archives tag.

First off, a few more statistics – updated ones; reassuring ones at that…

March, and then even more so – April – saw large jumps in overall cybercriminal activity; however, May has since seen a sharp drop back down – to around the pre-corona levels of January-February:

At the same time we’ve been seeing a steady decline in all coronavirus-connected malware numbers:

// By ‘coronavirus-connected malware’ we mean cyberattacks that have used the coronavirus topic in some way to advance their criminal aims.

So, it would appear the news is promising. The cyber-miscreants are up to their mischief less than before. However, what the stats don’t show is – why; or – what are they doing instead? Surely they didn’t take the whole month of May off given its rather high number of days-off in many parts of the world, including those for celebrating the end of WWII? No, can’t be that. What then?…

Read on…

The world’s cyber-pulse during the pandemic.

Among the most common questions I get asked during these tough times is how the cyber-epidemiological situation has changed. How has cybersecurity been affected in general by the mass move over to remote working (or not working, for the unlucky ones, but also sat at home all the time)? And, more specifically, what new cunning tricks have the cyber-swine been coming up with, and what should folks do to stay protected from them?

Accordingly, let me summarize it all in this here blogpost…

As always, criminals – including cybercriminals – closely monitor and then adapt to changing conditions so as to maximize their criminal income. So when most of the world suddenly switches to practically a full-on stay-at-home regime (home working, home entertainment, home shopping, home social interaction, home everything, etc.!), the cybercriminal switches his/her tactics in response.

Now, for cybercriminals, the main thing they’ve been taking notice of is that most everyone while in lockdown has greatly increased the time they spend on the internet. This means a larger general ‘attack surface’ for their criminal deeds.

In particular, many of the folks now working from home, alas, aren’t provided with quality, reliable cyber-protection by their employers. This means there are now more opportunities for cybercriminals to hack into the corporate networks the employees are hooked up to – potentially leading to very rich criminal pickings for the bad guys.

So, of course, the bad guys are going after those rich pickings. We see this evidenced by the sharp increase in brute-force attacks on database servers and RDP (Remote Desktop Protocol – technology that allows, say, an employee to get full access to their work computer – its files, desktop, everything – remotely, e.g., from home) ->

Read on…