Tag Archives: technology

OpenTIP, season 2: drop by more often!

A year ago I addressed cybersecurity specialists to let them know about a new tool we’d developed – our Open Threat Intelligence Portal (OpenTIP). Tools for analysis of complex threats (or merely suspicious objects) – the very same ones used by our famous cyber-ninjas in GReAT – became accessible to anyone who wanted to use them. And use them lots of folks wanted – testing zillions of files every month.

But in just a year a lot has changed. Things have become much more difficult for cybersecurity experts, since practically the whole world now has to work remotely because of coronavirus. Maintaining the security of corporate networks has become a hundred times more troublesome, and time – precious enough before corona – has become an even scarcer resource. Accordingly, the most common request we get from our more sophisticated users today is simple and direct: ‘Please give us API access and increase the rate limits!’

You asked. We delivered…

The new version of OpenTIP now offers user registration, and I highly recommend regular visitors do register: once you do, a large chunk of the paid Threat Intelligence Portal’s functionality appears as if out of the ether.
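
To give a flavor of what API access looks like in practice, here’s a minimal sketch of a file-hash lookup in Python. Note that the endpoint path, query parameter and header name below are my illustrative assumptions – check the portal’s API documentation for the exact details and use your own token.

```python
# Minimal sketch of an OpenTIP hash lookup. The endpoint, query parameter and
# header name are assumptions for illustration -- consult the portal's API docs.
import requests

API_KEY = "YOUR_TOKEN_HERE"                      # issued after registering on the portal
FILE_HASH = "d41d8cd98f00b204e9800998ecf8427e"   # hypothetical MD5 of a suspicious file

resp = requests.get(
    "https://opentip.kaspersky.com/api/v1/search/hash",   # assumed endpoint
    params={"request": FILE_HASH},
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())   # verdict/zone info for the hash, per the portal's response schema
```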

Read on…

Cybersecurity – the new dimension of automotive quality.

Quite a lot of folks seem to think that the automobile of the 21st century is a mechanical device. Sure, it has added electronics for this and that, some more than others, but still, at the end of the day – it’s a work of mechanical engineering: chassis, engine, wheels, steering wheel, pedals… The electronics – ‘computers’ even – merely help all the mechanical stuff out. They must do – after all, dashboards these days are a sea of digital displays, with hardly any analog dials to be seen at all.

Well, let me tell you straight: it ain’t so!

A car today is basically a specialized computer – a ‘cyber-brain’, controlling the mechanics-and-electrics we traditionally associate with the word ‘car’ – the engine, the brakes, the turn indicators, the windscreen wipers, the air conditioner, and in fact everything else.

In the past, for example, the handbrake was 100% mechanical. You’d wrench it up – with your ‘hand’ (imagine?!), and it would make a kind of grating noise as you did. Today you press a button. 0% mechanics. 100% computer controlled. And it’s like that with almost everything.

Now, most folks think that a driver-less car is a computer that drives the car. But if there’s a human behind the wheel of a new car today, then it’s the human doing the driving (not a computer), ‘of course, silly!’

Here I go again…: that ain’t so either!

With most modern cars today, the only difference between those that drive themselves and those that are driven by a human is that in the latter case the human controls the onboard computers. While in the former – the computers all over the car are controlled by another, main, central, very smart computer, developed by companies like Google, Yandex, Baidu and Cognitive Technologies. This computer is given the destination, it observes all that’s going on around it, and then decides how to navigate its way to the destination, at what speed, by which route, and so on based on mega-smart algorithms, updated by the nano-second.

A short history of the digitalization of motor vehicles

So when did this move from mechanics to digital start?

Some experts in the field reckon the computerization of the auto industry began in 1955 – when Chrysler started offering a transistor radio as an optional extra on one of its models. Others, perhaps thinking that a radio isn’t really an automotive feature, reckon it was the introduction of electronic ignition, ABS, or electronic engine-control systems that ushered in automobile-computerization (by Pontiac, Chrysler and GM in 1963, 1971 and 1979, respectively).

No matter when it started, what followed was more of the same: more electronics, then ever more digital – and the line between the two is blurry. But I’d put the start of the digital revolution in automotive technologies at February 1986, when, at the Society of Automotive Engineers convention, Robert Bosch GmbH presented to the world its digital network protocol for communication among the electronic components of a car – CAN (controller area network). And you have to give those Bosch guys their due: the protocol is still fully relevant today – used in practically every vehicle the world over!

// Quick nerdy post-CAN-introduction digi-automoto backgrounder: 

The Bosch boys gave us various types of CAN buses (low-speed, high-speed, CAN FD), while today there’s also FlexRay (transmission), LIN (low-speed bus), optical MOST (multimedia), and finally, on-board Ethernet (today – 100 Mbps; in the future – up to 1 Gbps). When cars are designed these days, various communications protocols are applied. There’s drive-by-wire (electrical systems instead of mechanical linkages), which has brought us electronic gas pedals, electronic brake pedals (used by Toyota, Ford and GM in their hybrids and electric vehicles since 1998), electronic handbrakes, electronic gearboxes, and electronic steering (first used by Infiniti in its Q50 in 2014).
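
To make the CAN part a little more tangible, here’s a tiny sketch of reading raw CAN frames with the python-can library – purely illustrative, and it assumes a Linux machine with a SocketCAN interface named can0 already hooked up to a vehicle bus or a simulator.

```python
# Illustrative only: dump raw CAN frames from a SocketCAN interface ('can0' is
# an assumption -- use whatever interface your adapter/simulator exposes).
import can

with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    for _ in range(10):                      # read ten frames and stop
        msg = bus.recv(timeout=1.0)          # None if nothing arrives within 1 s
        if msg is None:
            continue
        # Each classic CAN frame: an 11/29-bit arbitration ID plus up to 8 data bytes
        print(f"ID=0x{msg.arbitration_id:X} DLC={msg.dlc} data={msg.data.hex()}")
```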

BMW buses and interfaces

Read on…

The Catcher in the YARA – predicting black swans.

It’s been a long, long time since humanity has had a year like this one. I don’t think I’ve known a year with such a high concentration of black swans of various types and forms. And I don’t mean the kind with feathers: I’m talking about unexpected events with far-reaching consequences, as per the theory of Nassim Nicholas Taleb, set out in his 2007 book The Black Swan: The Impact of the Highly Improbable. One of the main tenets of the theory is that, with hindsight, surprising events seem ‘obvious’ and predictable; before they occur, however, hardly anyone actually predicts them.

Cybersecurity experts have ways of dealing with ambiguity and predicting black swans with YARA

Example: this ghastly virus that’s had the world in lockdown since March. It turns out there’s a whole extended family of such viruses – several dozen coronaviruses – and new ones are found regularly. Cats, dogs, birds and bats all get them. Humans get them too; some cause common colds; others… So surely vaccines need to be developed against them, as they have been for other deadly viruses like smallpox, polio, whatever. Sure, but that doesn’t always help a great deal. Look at flu: after how many centuries, there’s still no universal vaccine that protects against every strain. And anyway, to even start developing a vaccine you need to know what you’re looking for, and that, apparently, is more art than science.

So, why am I telling you this? What’s the connection to… it’s inevitably gonna be either cybersecurity or exotic travel, right?! Today – the former ).

Now, among the most dangerous cyberthreats in existence are zero-days – rare, unknown (to cybersecurity folks et al.) vulnerabilities in software, which can do oh-my-grotesque large-scale awfulness and damage – but which often remain undiscovered right up until the moment they’re exploited to inflict that awfulness (and sometimes until after it).

However, cybersecurity experts have ways of dealing with unknown-cyber-quantities and predicting black swans. And in this post I want to talk about one such way: YARA.

GReAT’s Costin Raiu examined Hacking Team’s emails and put together out of practically nothing a YARA rule, which detected a zero-day exploit

Briefly, YARA helps malware research and detection by providing a rule-based approach to describing malware families based on textual or binary patterns, and then identifying files that meet those conditions. (Ooh, that sounds complicated. See the rest of this post for clarification:). Thus, it’s used to search for similar malware by identifying patterns – the aim being to be able to say: ‘it looks like these malicious programs were made by the same folks, with similar objectives’.
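
Here’s a minimal illustration of the idea – a toy rule, not a real detection rule: a YARA rule combines textual and binary patterns with a condition, and you can run it over files with the yara-python module. The strings, rule name and file name are made up for the example.

```python
# Toy example: compile a YARA rule and match it against a (hypothetical) file.
import yara

RULE = r'''
rule Suspected_FamilyX_Downloader
{
    meta:
        description = "Toy rule: files sharing two strings typical of 'family X'"
    strings:
        $s1 = "cmd.exe /c powershell -enc" ascii
        $s2 = { 6A 40 68 00 30 00 00 }   // push 0x40; push 0x3000 (typical VirtualAlloc args)
    condition:
        uint16(0) == 0x5A4D and all of them   // MZ header plus both patterns present
}
'''

rules = yara.compile(source=RULE)
matches = rules.match("sample.bin")   # hypothetical file under investigation
print(matches)                        # e.g. [Suspected_FamilyX_Downloader]
```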

Ok, let’s take another metaphor: like a black swan, another water-based one; this time – the sea…

Let’s say the network you (as a cyber-sleuth) are studying (= examining for the presence of suspicious files/directories) is the ocean, full of thousands of different kinds of fish, and you’re an industrial fisherman out on that ocean casting huge drift nets to catch fish – but only certain breeds of fish (= malware created by particular hacker groups) interest you. Now, the drift net is special: it has ‘compartments’ into which fish are sorted according to their particular breed (= malware characteristics). Then, at the end of the shift, what you have is a lot of caught fish, all compartmentalized, and some of them will be relatively new, never-seen-before fish (new malware samples) about which you know practically nothing – but they’re sitting in compartments labeled, say, ‘Looks like Breed X’ (hacker group X) and ‘Looks like Breed Y’ (hacker group Y).

We have a case that fits the fish/fishing metaphor perfectly. In 2015, our YARA guru and head of GReAT, Costin Raiu, went into full-on cyber-Sherlock mode to find an exploit for Microsoft’s Silverlight software. You really should read the article at the end of that ‘case’ link but, very briefly, what Costin did was carefully examine certain hacker-leaked email correspondence (of ‘Hacking Team’: hackers hacking hackers; go figure!) published in a detailed news article, and put together – out of practically nothing – a YARA rule, which went on to help find the exploit and thus protect the world from all sorts of mega-trouble.

So, about these YARA rules…

Graduates receive a certificate confirming their new status as YARA ninjas. Previous graduates say it really does help in their professional careers

We’ve been teaching the art of creating YARA rules for years. And since the cyberthreats YARA helps uncover are rather complex, we’ve always run the courses in person – offline – and only for a narrow group of top cyber-researchers. Of course, since March, offline training has been tricky due to lockdown; however, the need for education has hardly gone away, and indeed we’ve seen no dip in interest in our courses. This is only natural: the cyber-baddies continue to think up ever more sophisticated attacks – even more so under lockdown. So keeping our special YARA know-how to ourselves during lockdown looked just plain wrong. Therefore, we’ve (i) moved the training format from offline to online, and (ii) made it accessible to anyone who wants to take it. For sure it’s paid, but the price for a course at such a level (the very highest:) is competitive and in line with the market.

Introducing! ->

Read on…

Into resource-heavy gaming? Check out our gaming mode.

Nearly 30 years ago, in 1993, the first incarnation of the cult computer game Doom appeared. And it was thanks to it that the few (imagine!) home computer owners back then found out that the best way of protecting yourself from monsters is to use a shotgun and a chainsaw ).

Now, I was never big into gaming (there simply wasn’t enough time – far too busy:); however, occasionally, after a long day’s slog, colleagues and I would spend an hour or so playing first-person shooters, hooked up together on our local network. I even recall corporate Duke Nukem championships – the results tables of which would be discussed over lunch in the canteen, with bets even being made/taken on who would win! Thus, gaming was never far away.

Meanwhile, our antivirus appeared – complete with pig squeal (turn on English subs – bottom-right of the video) to give a fright to even the most fearsome of cyber-monsters. The first three releases went just fine. Then came the fourth. It came with a great many new technologies against complex cyberthreats, but we hadn’t thought the architecture through well enough – and we didn’t test it sufficiently either. The main issue was the way it hogged resources, slowing down computers. And software in general back then – and gaming in particular – was becoming more and more resource-intensive by the day; the last thing anyone needed was an antivirus bogarting processor power and RAM too.

So we had to act fast. Which we did. And then just two years later we launched our legendary sixth version, which surpassed everyone on speed (also reliability and flexibility). And for the last 15 years our solutions have been among the very best on performance.

Alas, leopards are thought to never lose their spots. A short-term issue affecting computer performance turned into a myth – and it’s still believed by some today. Competitors were of course happy to see this myth grow… to mythical proportions; we weren’t.

But, what has any of this K memory-laning got to do with Doom? Well…

Read on…

Top-5 K-technologies that got us into the Global Top-100 Innovators.

We’ve done it again! For the second time we’re in the Derwent Top 100 Global Innovators – a prestigious list of global companies drawn up based on their patent portfolios. I say prestigious, as on the list we’re rubbing shoulders with companies such as Amazon, Facebook, Google, Microsoft, Oracle, Symantec and Tencent. Also, the list isn’t just a selection of seemingly strong companies patents-wise: it’s built upon the titanic analytical work of Clarivate Analytics, which evaluates more than 14,000 (!) candidate companies on all sorts of criteria, of which the main one is citation rate, aka ‘influence’. And as if that wasn’t tough enough, over five years the threshold requirement for inclusion in the Top-100 on this criterion has risen some 55%.

In a bit more detail, the citation rate is the level of influence of inventions on the innovations of other companies. For us, it’s how often we’re mentioned by other inventors in their patents. And to be formally mentioned in another company’s patent means you’ve come up with something new and genuinely innovative and helpful, which aids their ‘something new and genuinely innovative and helpful’. Of course, such an established system of acknowledging other innovators – it’s no place for those who come up with mere BS patents. And that’s why none of those come anywhere near this Top-100. Meanwhile, we’re straight in there – in among the top 100 global innovator companies that genuinely move technological progress forward.

Wow, that feels good. It’s like a pat on the back for all our hard work: true recognition of the contributions we’ve been making. Hurray!

Still reeling – glowing! – from all this, ever the curious one, I wondered which, say, five, of our patented technologies are the most cited – the most influential. So I had a look. And here’s what I found…

5th place – 160 citations: US8042184B1 – ‘Rapid analysis of data stream for malware presence’.

Read on…

An early-warning system for cyber-rangers (aka – Adaptive Anomaly Control).

Most probably, if you’re normally office-based, your office right now is still rather – or completely – empty, just like ours. At our HQ the only folks you’ll see are the occasional security guards, and the only noise you’ll hear is the hum of the cooling systems of our heavily-loaded servers given that everyone’s hooked up and working from home.

You’d never imagine that, unseen, our technologies, experts and products are working 24/7 protecting the cyberworld. But they are. Meanwhile, the bad guys are up to new nasty tricks. Just as well, then, that we have an early-warning system in our collection of cyber-protection tools. But I’ll get to that in a bit…

The role of an IT security guy or girl in some ways resembles that of a forest ranger: to catch the poachers (malware) and neutralize the threat they pose for the forest’s dwellers, first of all you need to find them. Of course, you could simply wait until a poacher’s rifle goes off and run toward where the sound came from, but that doesn’t exclude the possibility that you’ll be too late and that the only thing you’d be able to do is clear up the mess.

You could go full-paranoiac: placing sensors and video cameras all over the forest, but then you might find yourself reacting to any and every rustle that’s picked up (and soon losing sleep, then your mind). But when you realize that poachers have learned to hide really well – in fact, to not leave any trace at all of their presence – it then becomes clear that the most important aspect of security is the ability to separate suspicious events from regular, harmless ones.

Increasingly, today’s cyber-poachers are camouflaging themselves with the help of perfectly legitimate tools and operations.

A few examples: opening a document in Microsoft Office, a system administrator being granted remote access, the launch of a script in PowerShell, and the activation of a data encryption mechanism. Then there’s the new wave of so-called fileless malware, leaving literally zero traces on a hard drive, which seriously limits the effectiveness of traditional approaches to protection.

Examples: (i) the Platinum threat actor used fileless technologies to penetrate the computers of diplomatic organizations; (ii) office documents with a malicious payload were used for infection via phishing in the operations of the DarkUniverse APT; and there are plenty more. One more example: the fileless ransomware-encryptor ‘Mailto’ (aka Netwalker), which uses a PowerShell script to load malicious code directly into the memory of trusted system processes.

Now, if traditional protection isn’t up to the task, it’s possible to try to forbid users a whole range of operations, and to introduce tough policies on access to and usage of software. However, both the users and the bad guys will eventually, probably, find ways around the prohibitions (just as the prohibition of alcohol was always gotten around:).

Much better would be a solution that can detect anomalies in standard processes and inform the system administrator about them. But what’s crucial is that such a solution be able to learn to automatically and accurately determine the degree of ‘suspiciousness’ of processes in all their great variety, so as not to torment the system administrator with constant cries of ‘wolf!’

Well – you’ve guessed it! – we have such a solution: Adaptive Anomaly Control, a service built upon three main components – rules, statistics and exceptions.
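
To illustrate how those three components might fit together – and this is a conceptual toy sketch only, not the actual Adaptive Anomaly Control logic – imagine a rule describing a potentially risky action, statistics gathered during a learning period that decide whether the action is normal for a given source, and exceptions that silence known-good cases. All host names below are made up.

```python
# Conceptual sketch only -- not the actual Adaptive Anomaly Control implementation.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AdaptiveRule:
    name: str                                   # e.g. "Office app spawns PowerShell"
    exceptions: set = field(default_factory=set)
    baseline: Counter = field(default_factory=Counter)
    learning: bool = True

    def observe(self, source: str):
        """During the learning period, remember who normally triggers the rule."""
        if self.learning:
            self.baseline[source] += 1

    def verdict(self, source: str) -> str:
        if source in self.exceptions:
            return "allow (exception)"
        if self.learning or self.baseline[source] > 0:
            return "allow (normal for this source)"
        return "alert/block (anomalous)"

rule = AdaptiveRule("Office app spawns PowerShell", exceptions={"build-server-01"})
for host in ["dev-laptop-7"] * 20:              # learning period: a dev laptop does this daily
    rule.observe(host)
rule.learning = False

print(rule.verdict("dev-laptop-7"))             # allow (normal for this source)
print(rule.verdict("accountant-pc-3"))          # alert/block (anomalous)
print(rule.verdict("build-server-01"))          # allow (exception)
```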

Read on…

Playing hide and seek – with fileless malware.

Malicious code… – it gets everywhere…

It’s a bit like a gas, which will always fill the space it finds itself in – only different: it will always get through ‘holes’ (vulnerabilities) in a computer system. So our job (rather – one of them) is to find such holes and bung them up. Our goal is to do this proactively – that is, before malware has discovered them. And if malware does find a hole – we’re waiting, ready to zap it.

In fact it’s proactive protection and the ability to foresee the actions of attackers and create a barrier in advance that distinguishes genuinely excellent, hi-tech cybersecurity from marketing BS.

Here today I want to tell you about another way our proactive protection secures against yet another, particularly crafty kind of malware. Yes, I want to tell you about something called fileless (aka – bodiless) malicious code – a dangerous breed of ghost-malware that’s learned to use architectural drawbacks in Windows to infect computers. And also about our patented technology that fights this particular cyber-disease. And I’ll do so just as you like it: complex things explained simply, in the light, gripping manner of a cyber-thriller with elements of suspense ).

First off, what does fileless mean?

Well, fileless code, once it’s gotten inside a computer system, doesn’t create copies of itself in the form of files on disk – thereby avoiding detection by traditional methods, for example with an antivirus monitor.

So, how does such ‘ghost malware’ exist inside a system? Actually, it resides in the memory of trusted processes! Oh yes. Oh eek.

In Windows (actually, not only Windows), there has always existed the ability to execute dynamic code, which, in particular, is used for just-in-time compilation; that is, turning program code into machine code not straight away, but as and when it may be needed. This approach increases the execution speed for some applications. And to support this functionality Windows allows applications to place code into the process memory (or even into other trusted process memory) and execute it.

Hardly a great idea from the security standpoint, but what can you do? It’s how millions of applications written in Java, .NET, PHP, Python and other languages and for other platforms have been working for decades.
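
To make that concrete, here’s a tiny, hedged illustration (Windows-only, and nothing like a real JIT engine) of the legitimate mechanism being described: an application asks Windows for a piece of executable memory, writes machine code into it, and runs it.

```python
# Windows-only illustration of legitimate dynamic code execution: allocate
# executable memory, copy machine code into it, and call it. Real JIT engines
# do essentially this -- far more carefully.
import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_uint32, ctypes.c_uint32]

MEM_COMMIT_RESERVE = 0x3000
PAGE_EXECUTE_READWRITE = 0x40

code = bytes.fromhex("b82a000000c3")   # x86/x64 machine code: mov eax, 42; ret

buf = kernel32.VirtualAlloc(None, len(code), MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
ctypes.memmove(buf, code, len(code))

func = ctypes.CFUNCTYPE(ctypes.c_int)(buf)   # treat that memory as a C function
print(func())                                # prints 42 -- code that never existed on disk
```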

Predictably, the cyberbaddies took advantage of the ability to use dynamic code, inventing various methods to abuse it. And one of the most convenient and therefore widespread methods they use is something called reflective PE injection. A what?! Let me explain (it is, actually, rather interesting, so do please bear with me:)…

Launching an application by clicking on its icon – fairly simple and straightforward, right? It does look simple, but actually, under the hood, there’s all sorts going on: a system loader is called up, which takes the respective file from disk, loads it into memory and executes it. And this standard process is controlled by antivirus monitors, which check the application’s security on the fly.

Now, when there’s a ‘reflection’, code is loaded bypassing the system loader (and thus also bypassing the antivirus monitor). The code is placed directly into the memory of a trusted process, creating a ‘reflection’ of the original executable module. Such reflection can be executed as a real module loaded by a standard method, but it isn’t registered in the list of modules and, as mentioned above, it doesn’t have a file on disk.

What’s more, unlike other techniques for injecting code (for example, via shellcode), a reflective injection allows the attacker to create functionally advanced code in high-level programming languages and standard development frameworks with hardly any limitations. So what you get is: (i) no files, (ii) concealment behind a trusted process, (iii) invisibility to traditional protective technologies, and (iv) a free hand to cause some havoc.
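
On the defense side, one classic (and much simplified!) heuristic – not our patented technology, just an illustration of the principle – is to walk a process’s memory and flag regions that are committed, executable and private, i.e. not backed by any module loaded from disk. A hedged, Windows-only sketch:

```python
# Sketch of a detection heuristic: list executable, private (non-image) memory
# regions in a Windows process -- a classic tell-tale of reflectively injected code.
import ctypes
import ctypes.wintypes as wt

kernel32 = ctypes.windll.kernel32

PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010
MEM_COMMIT = 0x1000
MEM_PRIVATE = 0x20000        # private memory, i.e. not an image mapped from a file
PAGE_EXECUTE_MASK = 0xF0     # covers PAGE_EXECUTE* protection constants (0x10..0x80)

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress",       ctypes.c_void_p),
        ("AllocationBase",    ctypes.c_void_p),
        ("AllocationProtect", wt.DWORD),
        ("RegionSize",        ctypes.c_size_t),
        ("State",             wt.DWORD),
        ("Protect",           wt.DWORD),
        ("Type",              wt.DWORD),
    ]

def suspicious_regions(pid):
    """Yield (address, size) of committed, executable, non-image memory regions."""
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    if not handle:
        raise OSError(f"cannot open process {pid}")
    try:
        mbi = MEMORY_BASIC_INFORMATION()
        address = 0
        while kernel32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                      ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if (mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE
                    and mbi.Protect & PAGE_EXECUTE_MASK):
                yield mbi.BaseAddress, mbi.RegionSize
            address = (mbi.BaseAddress or 0) + mbi.RegionSize
    finally:
        kernel32.CloseHandle(handle)

# Usage (hypothetical PID): for addr, size in suspicious_regions(1234): print(hex(addr), size)
```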

So naturally, reflective injections became a mega-hit with developers of malicious code: at first they appeared in exploit packs, then cyber-spies got in on the game (for example, Lazarus and Turla), then advanced cybercriminals (as it’s a useful and legitimate way of executing complex code!), then petty cybercriminals.

Now, on the other side of the barricades, finding such a fileless infection is no walk in the cyber-park. So it’s no wonder really that most cybersecurity brands aren’t too hot at it. Some can hardly do it at all.

Read on…

Cyber hygiene: essential for fighting supply chain attacks.

Hi folks!

Quite often, technical matters that are as clear as day to techie-professionals are somewhat tricky to explain to non-techie-folks. Still, I’m going to have a go at doing just that here today. Why? Because it’s a darn exciting and amazingly interesting world! And who knows – maybe this read could inspire you to become a cybersecurity professional?!…

Let’s say you need to build a house. And not just a standard-format house, but something unique – custom-built to satisfy all your whims and wishes. First you need an architect who’ll draw up the design based on what you tell them; the design is eventually decided upon and agreed; project documentation appears, as does the contractor who’ll be carrying out the construction work; building inspectors keep an eye on quality; while at the same time interior designers draw up how things will look inside, again as per your say-so; in short – all the processes you generally need when constructing a built-to-order home. Many of the works are unique, as per your specific instructions, but practically everything uses standard materials and items: bricks, mortar, concrete, fixtures and fittings, and so on.

Well the same goes for the development of software.

Many of the works involved in development are also unique, requiring architects, designers, technical documentation, engineer-programmers… and often very specific knowledge and skills. But in the development of any software a great many standard building bricks – libraries – are used, which carry out all sorts of ‘everyday’ functions. Just as you build a house’s walls with standard bricks, so it goes for software products: modules with all sorts of different functionalities use a great many standardized libraries [~= bricks].

Ok, that should now be clear to everyone. But where does cybersecurity come into all of this?

Well, digital maliciousness… it’s kinda the same as construction defects in house-building – which may be either trivial or critical.

Let’s say there’s some minor damage to a completed house that’s ready to move into; that isn’t all that bad – you just remedy the issue: plaster over, re-paint, re-tile. But what if the issue is deep within the construction elements? Like toxic materials that were used in construction back in the day? Then it can get expensive – painful, even.

Well the same goes for software. If a contagion attaches itself to the outside, it’s possible to get rid of it: lance it off, clean up the wound, get the software back on its feet. But if the digital contamination gets deep inside – into the libraries and modules [= bricks] out of which the final product [house] is built… then you’ve got some serious trouble on your hands. And it just so happens that finding such deep digital pestilence can be reeeaaally tricky; actually extracting the poison out of the working business process – more so.

That’s all a bit abstract; so how about some examples? Actually, there are plenty of those. Here are a few…

Even in the long-distant past, during the Windows 98 era, there was one such incident when the Chernobyl virus (also called CIH, or Spacefiller) found its way into the distributions of computer games of various developers – and from there it spread right round the world. A similar thing happened years later in the 2000s: a cyber-infection called Induc penetrated Delphi libraries.

Thus, what we have are cyberthreats attacking businesses from outside, but also the more serious threats from a different type of cyber-disease that manages to get inside the internal infrastructure of a software company and poison a product under development.

Let’s use another figurative example to explain all this – a trip to your local supermarket to get the week’s groceries in… during mask-and-glove-wearing, antiseptic-drenched lockdown!… Yes, I’m using this timely example as I’m sure you all know it rather well (unless you’re the Queen or some other VIP, or perhaps you live off the land and don’t use supermarkets… but I digress).

So yes: you’ve grabbed the reusable shopping bags, washed your hands for 20 seconds with soap, donned the face mask, put the gloves on, and off you go. And that’s about it for your corona-protective measures. But once you’re at the supermarket you’re at the mercy of the good sense, social responsibility and sanitary measures of the supermarket itself – plus every single producer of all the stuff you can buy in it. Then there are all the delivery workers, packers, warehouse workers and drivers. And at any link in this long chain, someone could accidentally (or on purpose) sneeze right onto your potatoes!

Well it’s the same in the digital world – only magnified.

For the supply chain of modern-day ‘hybrid’ ecosystems of IT development is much, much longer – while at the same time we catch more than 300,000 brand-new cyber-maliciousnesses EVERY DAY! What’s more, the complexity of all that brand-new maliciousness is rising constantly. Trying to control how much hand-washing and mask-and-glove-wearing goes on at every developer of every separate software component, plus how effective the cyber-protection systems of the numerous suppliers of cloud services are… it’s all an incredibly difficult task. It’s even more difficult if the product used is open source, and its build is fashionably automated, running on the fly with default trust settings.

All rather worrying. But when you also learn that, of late, attacks on supply chains happen to be among the most advanced cyber-evil around – it gets all rather yikes. Example: the ShadowPad group attacked financial organizations via a particular brand of server-infrastructure management software. Other sophisticated cybercriminals attack open source libraries, while our industry colleagues have reminded us that developers are mostly unable to sufficiently verify that the components and libraries they pull in don’t contain malicious code.

Here’s another example: attacks on libraries of containers, like those of Docker Hub. On the one hand, using containers makes the development of apps and services more convenient, more agile. On the other, more often than not developers don’t build their own containers and instead download ready-made ones – and inside… – much like a magician’s hat – there could be anything lurking. Like a dove, or your car keys that were in your pocket. Or a rabbit. Or Alien! :) ->
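
One small, boring-but-effective hygiene measure here – a sketch of the principle, not a complete supply-chain defense: pin every third-party artifact you pull in (a library tarball, a base image, whatever) to a digest you’ve already vetted, and refuse to build if the bytes you actually downloaded don’t match. The digest and file name below are hypothetical.

```python
# Tiny illustration of one hygiene measure: verify a downloaded artifact against
# a pinned SHA-256 digest before letting it anywhere near your build.
import hashlib
import sys

# Hypothetical value for illustration -- in practice the pinned digest comes from
# the vendor's signed release notes or your own previously vetted copy.
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def verify_artifact(path: str, expected_sha256: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # stream in 1 MB chunks
            h.update(chunk)
    return h.hexdigest() == expected_sha256

if __name__ == "__main__":
    artifact = sys.argv[1]                                  # e.g. some-library-1.2.3.tar.gz
    if not verify_artifact(artifact, PINNED_SHA256):
        sys.exit(f"{artifact}: digest mismatch -- do not build with this artifact!")
    print(f"{artifact}: digest OK")
```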

Read on…

Which hacker group is attacking my corporate network? Don’t guess – check!

Around four years ago cybersecurity became a pawn in geopolitical games of chess. Politicians of all stripes and nationalities wag fingers at and blame each other for hostile cyber-espionage operations, while at the same time – with the irony seemingly lost on them – bigging-up their own countries’ cyber ‘tools’ (read: weapons), which are also used in offensive operations. And caught in the crossfire of these geopolitical shenanigans are independent cybersecurity companies, which have the ability – and the guts – to uncover all this very dangerous tomfoolery.

But, why? It’s all very simple…

First, ‘cyber’ is still really quite the cool/romantic/sci-fi/Hollywood/glamorous term it appears to have always been since its inception. It also sells – newspapers, or, these days, online newspaper subscriptions. And it’s popular – including among politicians: it’s a handy distraction – given its coolness and popularity – when a distraction is needed, which is often.

Second, ‘cyber’ is really techy – most folks don’t understand it. As a result, the media, when covering anything to do with it, and always seeking more clicks on their stories, are able to print all manner of things that aren’t quite true (or completely false), but few readers notice. So what you get are a lot of stories in the press stating that this or that country’s hacker group is responsible for this or that embarrassing/costly/damaging/outrageous cyberattack. But can any of it be believed?

We stick to technical attribution – it’s our duty, and what we do as a business

Generally, it’s hard to know if it can be believed or not. Given this, is it actually possible to accurately attribute a cyberattack to this or that nation state or even organization?

There are two aspects to the answer…

From the technical standpoint, cyberattacks possess an array of particular characteristics, but impartial, systematic analysis of them can only go so far: it can determine how much an attack looks like the work of this or that hacker group. However, whether that hacker group might belong to… Military Intelligence Sub-Unit 233, the National Advanced Defense Research Projects Group, or the Joint Strategic Capabilities and Threat Reduction Taskforce (none of which exist, to save you Googling them:)… that is a political question, and there the likelihood of manipulation of the facts is near 100%. It turns technical, evidence-based, accurate conclusions into… palm-reading or divining coffee grounds. So we leave that to the press, and stay well away. Meanwhile, curiously, the percentage of political flies dousing themselves in the fact-based ointment of pure cybersecurity grows several-fold with the approach of key political events. Oh – just like the one scheduled to take place in five months’ time!

For knowing the identity of one’s attacker makes fighting it much easier: an incident response can be rolled out smoothly and with minimal risk to the business

So yes, political attribution is something we avoid. We stick to the technical side; in fact – it’s our duty and what we do as a business. And we do it better than anyone, I might modestly add ). We keep a close watch on all large hacker groups and their operations (600+ of them), and pay zero attention to what their affiliation might be. A thief is a thief, and should be in jail. And now, finally, 30+ years since I started out in this game, after collecting non-stop so much data about digital wrongdoing, we feel we’re ready to start sharing what we’ve got – in the good sense ).

Just the other day we launched an awesome new service aimed squarely at cybersecurity experts. It’s called the Kaspersky Threat Attribution Engine (KTAE). What it does is analyze suspicious files and determine which hacker group a given cyberattack comes from. For knowing the identity of one’s attacker makes fighting it much easier: informed countermeasure decisions can be made, a plan of action drawn up, priorities set out, and on the whole an incident response rolled out smoothly and with minimal risk to the business.
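
To give a feel for the principle – and only the principle; KTAE’s real ‘genotype’ analysis and its corpus are far richer than this toy – here’s a deliberately naive sketch: slice a suspicious binary into overlapping byte n-grams and see which known group’s previously attributed samples it shares the most fragments with. All group names and byte strings below are made up.

```python
# Naive sketch of similarity-based attribution (NOT KTAE's actual algorithm):
# compare a sample's overlapping byte n-grams against corpora of known groups.
def genes(data: bytes, n: int = 8) -> set:
    """Overlapping n-byte slices used as crude code 'genes'."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def attribute(sample: bytes, corpora: dict) -> list:
    """Rank known groups by the share of the sample's genes seen in their corpus."""
    sample_genes = genes(sample)
    scores = {}
    for group, samples in corpora.items():
        group_genes = set().union(*(genes(s) for s in samples))
        scores[group] = len(sample_genes & group_genes) / max(len(sample_genes), 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical toy corpora of previously attributed samples.
known = {
    "GroupX": [b"\x4d\x5a...decrypt_config...beacon_http...", b"...beacon_http..."],
    "GroupY": [b"\x4d\x5a...wiper_routine...mbr_overwrite..."],
}
suspicious = b"\x4d\x5a...decrypt_config...new_payload..."
print(attribute(suspicious, known))   # GroupX should score highest
```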

So how do we do it?

Read on…

Unsecure ATMs should be quarantined too!

Each year, accompanied by travel companions, I tend to take more than a hundred flights all around the world. And practically everywhere these days we pay by card or phone – mostly contactless, like Apple Pay or Google Pay. In China you can even pay via WeChat when you’re at the market buying fruit and veg from grannies. And the sadly famous biovirus has made the use of virtual money even more popular still.

At the other end of the spectrum, you get the odd surprise: in Hong Kong, of all places, you need to pay cash for a taxi – always! In Frankfurt, of all places, last year in two separate restaurants they only took cash too. EH?!! We had to go on a long search for an ATM and withdraw euros instead of enjoying our post-dinner brandy. The inhumanity! :) Anyway, all this goes to prove that, despite there being progressive payment systems in place all around the globe, there still appears to be a need for the good old ATM everywhere too, and it looks like that need won’t be going away any time soon.

So what am I driving at here? Of course, cybersecurity!…

ATMs = money ⇒ they’ve been hacked, they’re getting hacked, and they’ll continue to be hacked – all the more. Indeed, the hacking is only getting worse: research shows that from 2017 to 2019 the number of ATMs attacked by malware more than doubled (growing by a factor of ~2.5).

Question: can the inside and outside of an ATM be constantly monitored? Surely yes, may well have been your answer. Actually, not so…

There are still plenty of ATMs in streets, stores, underpasses and subway/metro stations with a very slow connection. They barely have enough bandwidth for managing transactions, let alone keeping watch over what’s going on around them.

So, given this lack of monitoring because of the network connection, we stepped in to fill the gap and raise the security level of ATMs. We applied the best practices of optimization (which we’re masters of – with 25 years of experience), and also radically brought down the amount of traffic needed by our dedicated ‘inoculation jab’ against ATM threats – Kaspersky Embedded Systems Security, or KESS.

Get this: the minimum internet connection speed requirement for our KESS is… 56 kilobits (!!!) per second. Goodness! That’s the speed of my dial-up modem in 1998!

Just to compare, the average speed of 4G internet today in developed nations is between 30,000 and 120,000 kilobits per second, and 5G promises multi-gigabit speeds – up to tens of gigabits per second at peak (that is, if they don’t destroy all the masts before then). But don’t let prehistoric internet speeds fool you: the protection provided couldn’t be better. Indeed, many an ‘effective manager’ could learn a thing or two from us about optimization without loss of quality.

Read on…