Drones – no more airport interruption scandals: we’re here to ground you.

For a few weeks already, this here mysterious, shiny, clearly hi-tech, futuristo device has been complementing the minimalistic office furniture of my corner office at our HQ. It’s so shiny and fancy and slick and post-modern that whenever I get a visitor – which is not often of late due to our general WFH-policy – it’s the first thing they notice, and the first question is always, simply, obviously – ‘what is that?!’ ->

Is it a bird, is it a plane, is it a camera (on a tripod), is it a gun, is it some kind of scanner? Warmer, warmer!…

But before I tell you – quick digression!…

Read on…

OpenTIP, season 2: drop by more often!

A year ago I addressed cybersecurity specialists to let them know about a new tool we’d developed – our Open Threat Intelligence Portal (OpenTIP). Tools for analysis of complex threats (or merely suspicious objects) – the very same ones used by our famous cyber-ninjas in GReAT – became accessible to anyone who wanted to use them. And use them lots of folks wanted – testing zillions of files every month.

But in just a year a lot has changed. Things have become much more difficult for cybersecurity experts, with practically the whole world having to work remotely because of coronavirus. Maintaining the security of corporate networks has become a hundred times more troublesome. Time, precious enough as it was before corona, has become an even scarcer resource. And today the most common request we get from our more sophisticated users is simple and direct: ‘Please give us API access and increase the rate limits!’

You asked. We delivered…

In the new version of OpenTIP there’s now user registration available. And I highly recommend regular visitors do register, since when you do, a large chunk of the paid Threat Intelligence Portal turns up out of the ether.
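And for those of you asking about API access: here’s a minimal sketch of what a scripted lookup could look like in Python. The endpoint, query parameter and header name below are illustrative assumptions only – check the portal’s own documentation for the actual API details.

# Hypothetical sketch of querying OpenTIP from a script.
# The endpoint, query parameter and header name are assumptions for
# illustration – consult the official OpenTIP documentation for the real API.
import requests

API_KEY = "YOUR-OPENTIP-API-KEY"   # issued after registering on the portal (assumption)
FILE_HASH = "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f"  # e.g. SHA-256 of the EICAR test file

resp = requests.get(
    "https://opentip.kaspersky.com/api/v1/search/hash",   # illustrative endpoint
    params={"request": FILE_HASH},
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())                 # verdict and metadata for the queried object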

Read on…

Ransomware: no more jokes.

First: brief backgrounder…

On September 10, the DoppelPaymer ransomware encrypted 30 servers of a hospital in the German city of Dusseldorf, drastically reducing the number of patients it could treat. A week ago, because of this, the hospital wasn’t able to admit a patient in need of an urgent operation and had to send her to a hospital in a neighboring city. She died on the way. It was the first known case of loss of human life as a result of a ransomware attack.

A very sad case indeed – especially when you look closer: there was the fatal ‘accident’ itself (presuming the attackers didn’t foresee that their ghastly actions could cause a death); there was also clear neglect of basic cybersecurity hygiene; and there was an inability on the part of the law enforcement authorities to successfully counter the organized criminals involved.

The hackers attacked the hospital’s network via a vulnerability (aka Shitrix) on the Citrix Netscaler servers, which was patched as far back as January. It appears that the system administrators waited way too long before finally getting round to installing the patch, and in the meantime the bad guys were able to penetrate the network and install a backdoor.

Up to here – that’s all fact. From here on in: conjecture that can’t be confirmed – but which does look somewhat likely…

It can’t be ruled out that, after some time, access to the backdoor was sold to other hackers on underground forums as ‘access to a backdoor at a university’. Indeed, the attack was initially aimed at the nearby Heinrich Heine University: it was this university that was named in the extortionists’ email demanding a ransom for the return of the encrypted data. When the hackers found out that it was a hospital – not a university – they were quick to hand over all the encryption keys (and then they disappeared). It looks like Trojan’ed hospitals aren’t all that attractive to cybercriminals – they’re deemed assets that are too ‘toxic’ (as has been demonstrated in the worst – mortal – way).

It’s likely that the Russian-speaking Evil Corp hacker group is behind DoppelPaymer – a group with dozens of other high-profile hacks and shakedowns (including the one on Garmin’s network) to its name. In 2019 the US government issued an indictment against individuals involved in Evil Corp, and offered a reward of five million dollars for help in catching them. What’s curious is that the identities of the criminals are known, and up until recently they’d been swaggering about and showing off their blingy gangster-style lifestyles – including on social media.


What’s the world come to? There’s so much wrong here. First, there’s the fact that hospitals are suffering at the hands of ransomware hackers in the first place – even though, at least in this deadly case in Dusseldorf, it looks like it was a case of mistaken identity (hospital – not a university). Second, there’s the fact that universities are being targeted (often to steal research data – including COVID-19-related research). But here’s my ‘third’ – from the cybersecurity angle…

How can a hospital be so careless? Not patching a vulnerability on time – leaving the door wide open for cyber-scum to walk right through it and backdoor everything? How many times have we repeated that FreeBSD (which is what Netscaler runs on) is in no way a guarantee of security – in fact, it’s just the opposite: a cybersecurity expert’s faux ami? This operating system is far from immune and has weaknesses that can be exploited in sophisticated cyberattacks. And then of course there’s the fact that such critical institutions as hospitals (and infrastructure organizations in general) need to have multi-level protection, where each level backs up the others: if the hospital had had reliable protection installed on its network, the hackers would probably never have managed to pull off what they did.

The German police are now investigating the chain of events that led up to the death of the patient. And I hope that the German authorities will turn to those of Russia with a formal request for cooperation in detaining the criminals involved.

See, for police to open a criminal case, at the very least a formal statement/request – or evidence that a crime has been committed – needs to be presented. An article in the press or other kinds of informal comments or announcements aren’t recognized by the legal system. No formal request – no case; otherwise attorneys would cause the case to collapse in the blink of an eye. However, if there is what looks like credible evidence of a crime, there’s an inter-governmental interaction procedure in place that needs to be followed. OTT-formal: yes; but that’s ‘just the way it is’. Governments need to get past their political prejudices and act together. Folks are dying already – and while international cooperation stays largely frozen by geopolitics, cybercriminals will keep on reaching new heights – or rather, new lows – of depraved actions against humanity.

UPD: The first step toward reinstating cooperation in cybersecurity has been taken. Fingers crossed…

Btw: have you noticed how there’s hardly ever any news of successful ransomware attacks against Russian organizations? Have you ever wondered why? I personally won’t entertain for a moment the silly conspiracy theories about these hackers working for Russian secret services – there are ransomware groups all over the world. Here’s why, IMHO: because most Russian companies are protected by good-quality cyber-protection, and soon they’ll also be protected by a cyber-immune operating system – yep, the very protection that’s been banned for use in U.S. government institutions. Go figure.

UPD2: Just yesterday a ransomware attack was reported on one of America’s largest hospital chains, UHS: its computers – which serve ~250 facilities across the whole country – were shut down, leading to cancelled surgeries, diverted ambulances, and patient registrations having to be completed on paper. There are no further details as yet…


Cybersecurity – the new dimension of automotive quality.

Quite a lot of folks seem to think that the automobile of the 21st century is a mechanical device. Sure, it has added electronics for this and that, some more than others, but still, at the end of the day – it’s a work of mechanical engineering: chassis, engine, wheels, steering wheel, pedals… The electronics – ‘computers’ even – merely help all the mechanical stuff out. They must do – after all, dashboards these days are a sea of digital displays, with hardly any analog dials to be seen at all.

Well, let me tell you straight: it ain’t so!

A car today is basically a specialized computer – a ‘cyber-brain’, controlling the mechanics-and-electrics we traditionally associate with the word ‘car’ – the engine, the brakes, the turn indicators, the windscreen wipers, the air conditioner, and in fact everything else.

In the past, for example, the handbrake was 100% mechanical. You’d wrench it up – with your ‘hand’ (imagine?!), and it would make a kind of grating noise as you did. Today you press a button. 0% mechanics. 100% computer controlled. And it’s like that with almost everything.

Now, most folks think that a driver-less car is a computer that drives the car. But if there’s a human behind the wheel of a new car today, then it’s the human doing the driving (not a computer), ‘of course, silly!’

Here I go again…: that ain’t so either!

With most modern cars today, the only difference between those that drive themselves and those that are driven by a human is that in the latter case the human controls the onboard computers. While in the former – the computers all over the car are controlled by another, main, central, very smart computer, developed by companies like Google, Yandex, Baidu and Cognitive Technologies. This computer is given the destination, it observes all that’s going on around it, and then decides how to navigate its way to the destination, at what speed, by which route, and so on based on mega-smart algorithms, updated by the nano-second.

A short history of the digitalization of motor vehicles

So when did this move from mechanics to digital start?

Some experts in the field reckon the computerization of the auto industry began in 1955 – when Chrysler started offering a transistor radio as an optional extra on one of its models. Others, perhaps thinking that a radio isn’t really an automotive feature, reckon it was the introduction of electronic ignition, ABS, or electronic engine-control systems that ushered in automobile-computerization (by Pontiac, Chrysler and GM in 1963, 1971 and 1979, respectively).

No matter when it started, what followed was surely more of the same: more electronics, then more digital – and the line between the two is blurry. But I consider the start of the digital revolution in automotive technologies to be February 1986, when, at the Society of Automotive Engineers convention, the company Robert Bosch GmbH presented to the world its digital network protocol for communication among the electronic components of a car – CAN (controller area network). And you have to give those Bosch guys their due: this protocol is still fully relevant today – used in practically every vehicle the world over!

// Quick nerdy post-CAN-introduction digi-automoto backgrounder: 

The Bosch boys gave us various types of CAN buses (low-speed, high-speed, CAN FD), while today there’s also FlexRay (transmission), LIN (a low-speed bus), optical MOST (multimedia), and, finally, on-board Ethernet (today – 100 Mbps; in the future – up to 1 Gbps). When cars are designed these days, various communications protocols are applied. There’s drive-by-wire (electrical systems instead of mechanical linkages), which has brought us electronic gas pedals, electronic brake pedals (used by Toyota, Ford and GM in their hybrids and electric vehicles since 1998), electronic handbrakes, electronic gearboxes, and electronic steering (first used by Infiniti in its Q50 in 2014).
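To make the CAN part a little less abstract, here’s roughly what talking over a CAN bus looks like from code – a minimal sketch using the open-source python-can library on top of Linux SocketCAN (the channel name ‘can0’ and the arbitration ID are placeholders for illustration only):

# Minimal CAN-bus sketch using the python-can library over Linux SocketCAN.
# The channel name and arbitration ID below are placeholders for illustration.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# A classic CAN frame: an 11-bit identifier plus up to eight data bytes.
msg = can.Message(arbitration_id=0x1A4,
                  data=[0x11, 0x22, 0x33, 0x44],
                  is_extended_id=False)
bus.send(msg)

# Listen for whatever the other electronic control units are broadcasting.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")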

BMW buses and interfaces

Read on…

The Catcher in the YARA – predicting black swans.

It’s been a long, long time since humanity has had a year like this one: I don’t think I’ve known a year with such a high concentration of black swans of various types and forms. And I don’t mean the kind with feathers. I’m talking about unexpected events with far-reaching consequences, as per the theory of Nassim Nicholas Taleb, set out in his 2007 book The Black Swan: The Impact of the Highly Improbable. One of the main tenets of the theory is that, with hindsight, surprising events that have occurred seem ‘obvious’ and predictable; yet before they occur, no one actually predicts them.

Cybersecurity experts have ways of dealing with ambiguity and predicting black swans with YARA

Example: this ghastly virus that’s had the world in lockdown since March. It turns out there’s a whole extended family of such viruses – several dozen coronaviridae – and new ones are found regularly. Cats, dogs, birds and bats all get them. Humans get them too; some cause common colds; others… So surely vaccines need to be developed against them, as they have been for other deadly viruses like smallpox and polio. Sure, but that doesn’t always help a great deal. Look at flu – after how many centuries is there still no universal vaccine? And anyway, to even start to develop a vaccine you need to know what you’re looking for, and that, apparently, is more art than science.

So, why am I telling you this? What’s the connection to… it’s inevitably gonna be either cybersecurity or exotic travel, right?! Today – the former ).

Now, one of the most dangerous cyberthreats in existence is the zero-day – a rare, unknown (to cybersecurity folks et al.) vulnerability in software, which can be exploited to do oh-my-grotesque large-scale damage, but which often remains undiscovered up until the moment it’s exploited to inflict that damage (and sometimes only after).

However, cybersecurity experts have ways of dealing with unknown-cyber-quantities and predicting black swans. And in this post I want to talk about one such way: YARA.

GReAT’s Costin Raiu examined Hacking Team’s emails and put together out of practically nothing a YARA rule, which detected a zero-day exploit

Briefly, YARA helps malware research and detection by identifying files that meet certain conditions and providing a rule-based approach to creating descriptions of malware families based on textual or binary patterns. (Ooh, that sounds complicated. See the rest of this post for clarification.:) Thus, it’s used to search for similar malware by identifying patterns. The aim: to be able to say: ‘it looks like these malicious programs have been made by the same folks, with similar objectives’.
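To give a feel for what that means in practice, here’s a deliberately toy example – the rule name, strings and the file name ‘sample.bin’ are invented purely for illustration – compiled and run with the open-source yara-python module:

# Toy YARA illustration – rule name, strings and 'sample.bin' are invented
# purely for demonstration; real rules are far richer than this.
import yara

RULE = r"""
rule Suspected_GroupX_Downloader
{
    strings:
        $magic = { 4D 5A }                      // files starting with a PE header
        $url   = "update-checker.example" ascii wide
        $func  = "DownloadAndExecute" nocase
    condition:
        $magic at 0 and ($url or $func)
}
"""

rules = yara.compile(source=RULE)        # turn the textual rule into a scanner
for match in rules.match("sample.bin"):  # scan a file on disk
    print("matched rule:", match.rule)

A rule, then, is just a named set of textual or binary patterns plus a condition stating which combination of them must be present – and any file that satisfies the condition lands in that rule’s ‘compartment’.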

Ok, let’s take another metaphor: like a black swan, another water-based one; this time – the sea…

Let’s say the network you (as a cyber-sleuth) are studying (= examining for the presence of suspicious files/directories) is the ocean, full of thousands of different kinds of fish, and you’re an industrial fisherman out on that ocean in your ship, casting huge drift nets to catch the fish – but only certain breeds of fish (= malware created by particular hacker groups) are of interest to you. Now, the drift net is special: it has ‘compartments’ into which fish end up according to their particular breed (= malware characteristics). Then, at the end of the shift, what you have is a lot of caught fish, all compartmentalized, and some of those fish will be relatively new, never-seen-before fish (new malware samples) about which you know practically nothing, but they’re in compartments labeled, say, ‘Looks like Breed X’ (hacker group X) and ‘Looks like Breed Y’ (hacker group Y).

We have a case that fits the fish/fishing metaphor perfectly. In 2015, our YARA guru and head of GReAT, Costin Raiu, went into full-on cyber-Sherlock mode to find an exploit for Microsoft’s Silverlight software. You really should read the article at the end of that ‘case’ link but, very briefly: what Costin did was carefully examine certain hacker-leaked email correspondence (of ‘Hacking Team’: hackers hacking hackers; go figure!) published in a detailed news article, and put together, out of practically nothing, a YARA rule, which went on to help find the exploit and thus protect the world from all sorts of mega-trouble.

So, about these YARA rules…

Graduates receive a certificate confirming their new status as a YARA ninja. Previous graduates say it really does help in their professional careers

We’ve been teaching the art of creating YARA rules for years. And since the cyberthreats YARA helps uncover are rather complex, we always ran the courses in person – offline – and only for a narrow group of top cyber-researchers. Of course, since March, offline training has been tricky due to lockdown; however, the need for education has hardly gone away, and indeed we’ve seen no dip in interest in our courses. That’s only natural: the cyber-baddies continue to think up ever more sophisticated attacks – even more so under lockdown. Accordingly, keeping our special YARA know-how to ourselves during lockdown seemed just plain wrong. Therefore, we’ve (i) moved the training format from offline to online, and (ii) made it accessible to anyone who wants to take it. It is a paid course, but the price for a course of this level (the very highest:) is very competitive and in line with the market.

Introducing! ->

Read on…

Into resource-heavy gaming? Check out our gaming mode.

Nearly 30 years ago, in 1993, the first incarnation of the cult computer game Doom appeared. And it was thanks to it that the few (imagine!) home computer owners back then found out that the best way of protecting yourself from monsters is to use a shotgun and a chainsaw ).

Now, I was never big into gaming (there simply wasn’t enough time – far too busy:); however, occasionally, after a long day’s slog, colleagues and I would spend an hour or so playing first-person shooters, hooked up together on our local network. I even recall corporate Duke Nukem championships, whose results tables would be discussed over lunch in the canteen – with bets even being made on who would win! Thus, gaming – it was never far away.

Meanwhile, our antivirus appeared – complete with pig squeal (turn on English subs – bottom-right of video) to give fright to even the most fearsome of cyber-monsters. The first three releases went just fine. Then came the fourth. It came with a great many new technologies against complex cyberthreats, but we hadn’t thought through the architecture well enough – and we didn’t test it sufficiently either. The main issue was the way it hogged resources, slowing down computers. And software generally back then – and gaming in particular – was becoming more and more resource-intensive by the day; the last thing anyone needed was antivirus bogarting processor and RAM too.

So we had to act fast. Which we did. And then just two years later we launched our legendary sixth version, which surpassed everyone on speed (also reliability and flexibility). And for the last 15 years our solutions have been among the very best on performance.

Alas, leopards are thought to never lose their spots. A short-term issue affecting computer performance turned into a myth – and it’s still believed by some today. Competitors were of course happy to see this myth grow… to mythical proportions; we weren’t.

But what has any of this trip down K memory lane got to do with Doom? Well…

Read on…

An early-warning system for cyber-rangers (aka – Adaptive Anomaly Control).

Most probably, if you’re normally office-based, your office right now is still rather – or completely – empty, just like ours. At our HQ the only folks you’ll see are the occasional security guards, and the only noise you’ll hear is the hum of the cooling systems of our heavily-loaded servers given that everyone’s hooked up and working from home.

You’d never guess that, unseen, our technologies, experts and products are working 24/7 protecting the cyberworld. But they are. Meanwhile, though, the bad guys are up to new nasty tricks. Just as well, then, that we have an early-warning system in our collection of cyber-protection tools. But I’ll get to that in a bit…

The role of an IT security guy or girl in some ways resembles that of a forest ranger: to catch the poachers (malware) and neutralize the threat they pose for the forest’s dwellers, first of all you need to find them. Of course, you could simply wait until a poacher’s rifle goes off and run toward where the sound came from, but that doesn’t exclude the possibility that you’ll be too late and that the only thing you’d be able to do is clear up the mess.

You could go full-paranoiac: placing sensors and video cameras all over the forest, but then you might find yourself reacting to any and every rustle that’s picked up (and soon losing sleep, then your mind). But when you realize that poachers have learned to hide really well – in fact, to not leave any trace at all of their presence – it then becomes clear that the most important aspect of security is the ability to separate suspicious events from regular, harmless ones.

Increasingly, today’s cyber-poachers are camouflaging themselves with the help of perfectly legitimate tools and operations.

A few examples: opening a document in Microsoft Office, a system administrator being granted remote access, the launch of a script in PowerShell, and the activation of a data encryption mechanism. Then there’s the new wave of so-called fileless malware, leaving literally zero traces on a hard drive, which seriously limits the effectiveness of traditional approaches to protection.

Examples: (i) the Platinum threat actor used fileless technologies to penetrate the computers of diplomatic organizations; (ii) office documents with a malicious payload were used for infection via phishing in the operations of the DarkUniverse APT; and there are plenty more. One more example: the fileless ransomware-encryptor ‘Mailto’ (aka Netwalker), which uses a PowerShell script to load malicious code directly into the memory of trusted system processes.

Now, if traditional protection isn’t up to the task, one could try forbidding users a whole range of operations, and introduce tough policies on access and usage of software. However, if you do that, both the users and the bad guys will probably find ways around the prohibitions eventually (just like the prohibition of alcohol was always gotten around:).

Much better would be to find a solution that can detect anomalies in standard processes and inform the system administrator about them. But what’s crucial is for such a solution to be able to learn to determine, automatically and accurately, the degree of ‘suspiciousness’ of processes in all their great variety, so as not to torment the system administrator with constant cries of ‘wolf!’

Well – you’ve guessed it! – we have such a solution: Adaptive Anomaly Control, a service built upon three main components – rules, statistics and exceptions.
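Purely to illustrate how those three components fit together (this is a back-of-an-envelope conceptual sketch, not how the actual product is built), imagine something like this:

# Conceptual sketch only – NOT the actual Adaptive Anomaly Control implementation.
# Rules describe what counts as anomalous; statistics from a learning period
# decide what's routine in this particular environment; exceptions silence the
# known-good cases so the admin isn't flooded with false alarms.
from collections import Counter

class AnomalyRule:
    def __init__(self, name, predicate):
        self.name = name                  # e.g. "Office application spawns PowerShell"
        self.predicate = predicate        # function(event) -> bool
        self.stats = Counter()            # how often the pattern fired during learning
        self.exceptions = set()           # (user, process) pairs deemed legitimate

    def learn(self, event):
        if self.predicate(event):
            self.stats[(event["user"], event["process"])] += 1

    def finish_learning(self, threshold=5):
        # Whatever happened frequently during learning is treated as normal here.
        self.exceptions = {key for key, n in self.stats.items() if n >= threshold}

    def check(self, event):
        if not self.predicate(event):
            return None
        key = (event["user"], event["process"])
        if key in self.exceptions:
            return None                   # known-good: stay silent
        return f"ALERT [{self.name}]: {key}"

# Example rule: Word spawning PowerShell is suspicious unless it's routine here.
rule = AnomalyRule("Office application spawns PowerShell",
                   lambda e: e["parent"] == "WINWORD.EXE" and e["process"] == "powershell.exe")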

Read on…

Playing hide and seek – or rather, catch – with fileless malware.

Malicious code… – it gets everywhere…

It’s a bit like a gas, which will always fill the space it finds itself in – only different: it will always get through ‘holes’ (vulnerabilities) in a computer system. So our job (rather – one of them) is to find such holes and bung them up. Our goal is to do this proactively – that is, before malware has discovered them. And if malware does find holes – we’re waiting, ready to zap it.

In fact it’s proactive protection and the ability to foresee the actions of attackers and create a barrier in advance that distinguishes genuinely excellent, hi-tech cybersecurity from marketing BS.

Here today I want to tell you about another way our proactive protection secures against yet another, particularly crafty kind of malware. Yes, I want to tell you about something called fileless (aka – bodiless) malicious code – a dangerous breed of ghost-malware that’s learned to use architectural drawbacks in Windows to infect computers. And also about our patented technology that fights this particular cyber-disease. And I’ll do so just as you like it: complex things explained simply, in the light, gripping manner of a cyber-thriller with elements of suspense ).

First off, what does fileless mean?

Well, fileless code, once it’s gotten inside a computer system, doesn’t create copies of itself in the form of files on disk – thereby avoiding detection by traditional methods, for example with an antivirus monitor.

So, how does such ‘ghost malware’ exist inside a system? Actually, it resides in the memory of trusted processes! Oh yes. Oh eek.

In Windows (actually, not only Windows), there has always existed the ability to execute dynamic code, which in particular is used for just-in-time compilation – that is, turning program code into machine code not all at once, but as and when it’s needed. This approach increases the execution speed of some applications. And to support this functionality, Windows allows applications to place code into process memory (or even into the memory of another, trusted process) and execute it.

Hardly a great idea from the security standpoint, but what can you do? It’s how millions of applications written in Java, .NET, PHP, Python and other languages and for other platforms have been working for decades.
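To make the mechanism concrete, here’s a tiny, benign illustration of that legitimate facility – allocating executable memory in your own process, placing machine code there at runtime and calling it, which is exactly what JIT compilers rely on. (A Windows-only sketch; the six bytes of machine code simply return the number 42.)

# Benign Windows-only sketch of dynamic code execution – the legitimate facility
# JIT compilers rely on: allocate executable memory, copy machine code in, run it.
import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = (ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_uint32, ctypes.c_uint32)

MEM_COMMIT, MEM_RESERVE, PAGE_EXECUTE_READWRITE = 0x1000, 0x2000, 0x40

code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00,     # mov eax, 42
              0xC3])                            # ret

addr = kernel32.VirtualAlloc(None, len(code),
                             MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE)
ctypes.memmove(addr, code, len(code))

make_42 = ctypes.CFUNCTYPE(ctypes.c_int)(addr)  # treat the buffer as a function
print(make_42())                                # prints 42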

Predictably, the cyberbaddies took advantage of the ability to use dynamic code, inventing various methods to abuse it. And one of the most convenient and therefore widespread methods they use is something called reflective PE injection. A what?! Let me explain (it is, actually, rather interesting, so do please bear with me:)…

Launching an application by clicking on its icon – fairly simple and straightforward, right? It does look simple, but actually, under the hood, all sorts goes on: a system loader is called, which takes the respective file from disk, loads it into memory and executes it. And this standard process is controlled by antivirus monitors, which check the application’s security on the fly.

Now, when there’s a ‘reflection’, code is loaded bypassing the system loader (and thus also bypassing the antivirus monitor). The code is placed directly into the memory of a trusted process, creating a ‘reflection’ of the original executable module. Such reflection can be executed as a real module loaded by a standard method, but it isn’t registered in the list of modules and, as mentioned above, it doesn’t have a file on disk.

What’s more, unlike other techniques for injecting code (for example, via shellcode), a reflective injection allows the creation of functionally advanced code in high-level programming languages and standard development frameworks with hardly any limitations. So what you get is: (i) no files, (ii) concealment behind a trusted process, (iii) invisibility to traditional protective technologies, and (iv) a free hand to cause some havoc.

So naturally, reflective injections became a mega-hit with developers of malicious code: at first they appeared in exploit packs, then cyber-spies got in on the game (for example, Lazarus and Turla), then advanced cybercriminals (as it’s a useful and legitimate way of executing complex code!), then petty cybercriminals.

Now, on the other side of the barricades, finding such a fileless infection is no walk in the cyber-park. So it’s no wonder really that most cybersecurity brands aren’t too hot at it. Some can hardly do it at all.

Read on…

Cyber-tales from the dark side: unexpected vulnerabilities, hacking-as-a-service, and space-OS.

Our first month of summer in lockdown – done. And though the world seems to be opening up steadily, we at K decided to take no chances – remaining practically fully working-from-home. But that doesn’t mean we’re working any less effectively: just as well, since the cybercriminals sure haven’t been furloughed. Still, there’ve been no major changes to the global picture of threats of late. All the same, those cyberbaddies, as always, have been pulling cybertricks out of their hats that fairly astonish. So here are a few of them from last month.

A zero-day in ‘super-secure’ Linux Tails 

Facebook sure knows how to spend it. Turns out it spent a six-figure sum sponsoring the creation of a zero-day exploit of a vulnerability in the Tails OS (= Linux, specially tuned for heightened privacy) for an FBI investigation, which led to the catching of a pedophile. It was known for some time beforehand that this deranged paranoiac used this particular – particularly secure – operating system. FB’s first step was to use its strength in mapping accounts to connect all the ones the criminal used. However, getting from that cyber-victory to a physical postal address didn’t work out. Apparently, they then commissioned the development of an exploit for a video-player application. This choice of software made sense, as the sex-pest nutcase would ask his victims for videos and would probably watch them on the same computer.

It’s been reported that the developers of Tails weren’t informed about the vulnerability being exploited; it later turned out it had already been patched. Employees of the company are keeping shtum about all this, but what’s clear is that a vulnerability-to-order isn’t the best publicity. There remains some hope that the exploit was a one-off, used against a single, particularly nasty low-life, and that it won’t be repeated against regular users.

The takeaway: no matter how super-mega-secure a Linux-based project claims to be, there’s no guarantee there are no vulnerabilities in it. To be able to guarantee such a thing, the basic working principles and architecture of the whole OS need overhauling. Erm, yes, actually, this is a cheeky good opportunity to say hi to this ).

Hacking-as-a-service 

Here’s another tale – from the tailor-made-cyber-nastiness side. The (thought-to-be Indian) Dark Basin cybercriminal group has been caught with its hand in the cyber-till. This group is responsible for more than a thousand hacks-to-order. Targets have included bureaucrats, journalists, political candidates, activists, investors, and businessmen from various countries. Curiously, the hackers from Delhi used really simple, primitive tools: first they created phishing emails made to look like they were from a colleague or friend, cobbled together fake Google News updates on topics of interest to the target, and sent similar direct messages on Twitter. Then they sent emails and messages containing shortened links to credential-phishing websites that looked like genuine sites – and that was that: credentials stolen, then other things stolen. No complex malware or exploits needed! And btw: it looks like the initial information about what a victim is interested in always came from the party ordering the cyber-hit.

Now, cybercrime-to-order has been around – and popular – for ages. In this case, though, the hackers took it to a whole other – conveyor-belt – level, taking on thousands of hits as outsourced orders.


Read on…