i-Antitrust: time to give you your choice back, folks!

Fighting injustice. It’s just what we do – and keep doing. And that includes fighting major, large-scale injustice…

For example, in 2017, we managed to reach an agreement with Microsoft that encouraged it to stop giving unfair advantages to its own antivirus product. Sure, Microsoft is a modern-day Goliath. But we’re a modern-day David! And we need to be. For someone has to stand up to the giants now and again when they start throwing their weight around unfairly. Not doing so would mean users wind up with less choice.

Then last year saw us having to don the boxing gloves again for another dispute – again on an antitrust issue, but this time with another Goliath: Apple. Fast forward nearly a year – and I have two bits of news for you on this…

But first – quick rewind: some background.

 

Early on – halcyon daze…

Back in 2008, on the back of its extraordinary successes with its iPhones, Apple opened its App Store. And to fill out its ‘shelves’, it invited independent developers to use it as a platform to sell their for-iOS software. Those independent developers jumped right in, bringing with them thousands of apps (fast-forward 12 years and there are now literally millions). Users all over the planet were happy with all that choice, both Apple and the independent developers made tidy profits, all was well, there was peace and harmony, and it looked like everyone would live happily ever after.

But… business is business. At the end of the day Apple exists – like all commercial companies – to make a profit first and foremost. So it started branching out a bit. It created other iThings, all sorts of services, and a lot more besides. Yet still Apple yearned for more. Which was when it turned its gaze toward the markets of iOS applications made by independent developers in its own App Store.

Fast-forward to 2020.

I have a lot of respect for Apple. The company created a successful business model that’s much envied and much imitated. I neither envy nor imitate it, and I don’t fully agree with much of its policy (first and foremost – regarding cybersecurity), but that doesn’t mean I respect it any less (even though I personally don’t use any Apple products). We’ve been cooperating with Apple for many years, in various areas, and until recently this was a partnership of equals.

Like tens of thousands of other independent developers, we create useful iOS apps – apps that increase the overall attractiveness of the platform. Together with Apple we had some profitable mobile business going on, but it was the users who benefitted most (as they were supplied with ever-more useful apps). Everyone had it good. Then, at the end of 2018, Apple announced its crusade against independent developers with the release of its Screen Time.

Competition is good, because competition works for the good of the user. In this case, more apps, better apps, more varied apps – more choice (and a developer not falling asleep at the top of the App Store listings)! But for competition to exist there needs to be a level playing field, i.e., fair rules. For everyone. Yet that level playing field – and competition with it – has been destroyed by Apple. Let me tell you how.

iStory that’s hard to believe.

Screen Time entered a mature market in which dozens of independent developers already operated. The App Store offered a great many apps providing parental control, time management and other related functions. And it’s here where the craziness begins.

Apple unexpectedly monopolized a wide range of critical functions, by simply turning them off for other developers!

So, like, how, for example, is a parental control app supposed to get by without configurable profiles, the ability to filter URL addresses, application control, and full-fledged geolocation? That’s right: it can’t! But it can if it’s an Apple parental control app – for none of this critical functionality was limited in any of its own apps! It’s one rule for Apple’s apps, another for all the rest.

Now, of course, this audaciously odd-ball move was made under a smokescreen of ‘concerns’ about security and privacy; however (also ‘of course’), these concerns were quickly seen through and shown to be bogus.

Next, Apple started banning developers from the App Store, delaying approval of new software builds, and rolling out new unacceptable requirements and conditions. Some apps were shut down, while others had their functionality restricted – rendering them useless. But some independent developers decided to fight back. Including us. Developers came together to form an association with the aim of working with Apple to try and secure fair rules for all, while some filed complaints with regional antitrust authorities and began a public campaign in the press and on social media.

Then, in June 2019, Apple looked like it had hit the brakes and even gone into reverse. In reality it was purely a tactical maneuver – a feigned expression of goodwill – which in no way helped solve the problem of equal rights for all, Apple included.

Then it released iOS 13… – with yet further restrictions to hit the ecosystem even harder!

Let me give you some examples of how the ‘innovations’ of iOS affected our parental control app Kaspersky Safe Kids.

First, Apple loads and activates Screen Time automatically on devices upon installation of the new version of iOS – even if the user already has a similar application installed. Don’t know about you, folks, but that, to me, doesn’t have much of a ring of ‘free competition’ to it. Looks more like just the opposite: with a ring of intrusion, aka thrusting, aka foisting, aka gatecrashing the party, i.e. – uninvited.

Second, new features in iOS 13 now permit a child to easily delete Safe Kids (i.e., a complete cancelling out of the very meaning of ‘parental control’), and also to view websites in Safari (which can no longer be hidden) instead of in the built-in safe browser that filters out undesirable content. No, really folks!

Third, changes to the policy of accessing the geolocation of a device have taken away parents’ ability to track their child’s location! (No. I am not making this up. And all in the name of security – remember?!)

But wait – here’s what really takes the proverbial biscuit. Are you sitting down?…

All features that have become forbidden to independent developers remain completely ok and wholesome and accessible to… – ta-daa – Apple!

iAudaciousness on this scale simply couldn’t go unnoticed.

Encouragingly, the issue hasn’t gone unnoticed. It’s been resonating at the very highest legislative levels around the world. In the U.S. Senate it was suggested that Apple and other large companies be forbidden from placing their own apps in their own marketplaces, since by default this creates preferences for their own products.

In Russia antitrust proceedings have been initiated. In the EU they’re still at the pre-investigation phase. Indeed, slowly but surely the negative consequences of this lowering of competition are coming to the surface. Even from the user side, Screen Time is taking a lot of flak for its shortcomings (despite the functional superiority it enjoys now that its competitors have all had their functionality curtailed!). Some independent developers see the only way around the issue as urging users to move over to Android if they want to keep their kids safe.

And now for that news I said I’d be telling you…

I’m not sure yet if it’s good news or not, but at least some movement must be a good thing – and we’ve been trying to fight for equal opportunities for everyone. This spring, the Federal Antimonopoly Service of Russia will deliver its verdict on our claim regarding Apple’s abuse of its dominant position and the creation of unlawful competitive advantages for Screen Time. Almost all arguments and evidence in the proceedings have already been given and submitted. For us it’s been a very long, complex process (details – here), which has taken up much time, effort and money. But we’ve explained our position well, and I have hope that the decision will be in our favor. Fingers crossed…

When Jobs was in charge – there was nothing like this.

Do you know what this crusade of Apple’s against independent developers gets me thinking about? A fight of the iOS ecosystem against the App Store ecosystem! The former gradually absorbs the juiciest, most profitable markets of the latter. And it looks all the more unsavory given that it is thanks to the App Store that the iOS platform has risen to now make up the basis of the business of the company. Without it, Apple would have had just another failed project – the kind of which there have been many in the history of the IT business.

It all reminds me a little of the infamous letter of Steve Jobs that announced the ‘holy war‘ against Google; in particular one sentence within it: ‘Tie all our products together, so we further lock customers into our ecosystem’.

Probably only Mr. Jobs himself knows exactly what he meant by that. But though he was originally against third-party apps for the iPhone (he later changed his mind), I’ve no doubt whatsoever that among his greatest expectations were those he vested in independent developers: to have their inspiration and resources help create for Apple the best ecosystem. And one thing’s for sure, Jobs wouldn’t have allowed Apple to transform itself into a self-important dictator and turn on the very developers that helped it and subject them to out-and-out discrimination.

I’ve already said this above, but I’ll say it again: I respect Apple. And I have a feeling that there are no issues in our relations we can’t resolve. Apple could opt for a sensible compromise and reconsider the unfair rules of the game. This would make its platform even stronger by permitting independent developers to supply to it full-fledged apps so as to serve the needs of its millions of users optimally.

Finally, please support us in this struggle to secure your right to choose exactly what you want, not what one large corporation decides is best for you. And stay tuned. I’ll be back with news re the FAS’s verdict once it arrives…

Cyber-news from the dark side: Er, who said you could sell my data?

January 28 is my aunt Olga’s birthday. It also happens to be Data Privacy Day. And my aunt Olga still isn’t aware! But she should be! For digital data is the currency of the new millennium. Accumulated knowledge of trillions of clicks and transactions – it’s a gold mine for any business. And multimillion-dollar businesses – lots of them – are based on the sale of these cyber-resources.

Global IT companies have more access to personal data than do countries. As a result, this topic is extremely important; it’s also toxic.

And, wherever there’s money – there are always bad guys. Cyber-bad-guys getting up to no good with folks’ data are forever multiplying in number. But even respectable companies may get up to no good with folks’ data too, and they seem to get away with it – mostly. But more on that later…

Now, I’d like to ask a simple question – one to which, at least in global IT, there is no answer yet: ‘What is good and what is bad?’ I mean: where is the line between universal human morals and business ethics? Where is that fine line?

Alas, the question of cyber-ethics and cyber-morals is a very ambiguous one. Meanwhile, I can assure you that with the introduction of 5G and further sharp increases in the number of IoT devices, our data will be collected all the more. And more, and more…

Now for some detail: broken down into the main, most-pressing, interesting matters:

Lawyers, lawmakers, journalists, politicians, pundits, social commentators, philosophers… – not one of them can answer this question: ‘Who does data belong to?’ To users? To governments? To businesses? It would be nice to think that users’ personal data belongs to those users themselves – at least up until the moment they decide to voluntarily share it: when they fill in a form on a website, enter their name, telephone number and email to register for a newsletter, or thoughtlessly check a box in an app without reading through the small print of a lengthy legal agreement. Formally, from that moment on we give certain third parties the legal ability to handle our data, analyze it, sell it, and do whatever else is written (but rarely read) in the respective agreement. So does that mean that from that moment the data belongs to those third parties too?

Much of the problem lies in the fact that the term ‘personal data’ is very vague and ephemeral – not only from the standpoint of the user but also from the legal one. Laws often can’t keep up with technological development. Nevertheless, on the whole over recent years the tendency has been clear: new laws being passed on the protection of personal data and the updating of existing legislation. In parallel, people’s attitudes toward personal data and privacy have become a lot more serious – something that of course I’m very happy to see.

Enough of my ‘intro’; let’s move on to the main dish…

Last week there was quite the scandal reported in the press involving Avast, one of the major players in the AV market.

Vice published an exposé detailing how Avast had for years been passing data it collects on its users to one of its subsidiaries – Jumpshot – which in turn sold it on to third-party companies. Those third-party companies thus got access to information on the online behavior of users: what websites were visited, movements from site to site, GPS coordinates of users of Google Maps, YouTube viewing histories, and lots more besides. And though the data wasn’t associated with specific individuals, IP addresses or emails – in other words, it was anonymous – it did come with identifiers, which keep working until the user deletes their Avast antivirus from their computer.

Of course, this is nothing short of scandalous from an ethical point of view. We here at K have never allowed such a thing to happen, and never would; and we firmly believe that making money from your users’ data is simply beyond the pale.

The epilogue of this sorry tale was a formal apology from Avast’s CEO, in an announcement about the termination of Jumpshot. In my view, that was the only appropriate thing to do. I understand it can’t have been easy, and there will have been big financial losses, but still. Well done for doing the right thing in the end.

For us, the matter of data storage and its usage has long been a priority. Back in 2017 we launched our Global Transparency Initiative and moved our data processing for European users (plus other countries) to Zurich; since then we have opened two more Transparency Centers, and will soon open another two. Projects like this aren’t cheap; but we feel we simply must set new standards of openness and a serious attitude to personal data.

You can find more details here about our principles of data processing, how our cloud-based KSN works, data anonymization, and other important matters. But I just want to add, addressing all our users: rest assured – we never make any compromises with our conscience. Ever.

Often, the collection and sale of data is carried out by free antivirus software – surveilling users for advertising purposes and trading in their privacy, all to make money. As you’ll know, we also have a free version of our AV, based on the same protection technologies as our other, paid-for products, whose effectiveness is constantly confirmed in independent tests. And though the functionality of the free version is rather stripped down, it’s still a piece of AV we’re very proud of, delivering users solid and reliable protection while leaking no personal data to advertisers. Users deserve the best protection – without annoying adverts and privacy trading. But I’ve been saying that for years.

Something else I’ve been talking about for years is my own paranoid very serious attitude to my own personal data. One more time: I only ever give it out when it is wholly necessary – which I recommend you do too. I understand it’s difficult to fully realize the importance of this when it’s so intangible and when the ‘price’ of our data is impossible to estimate. Just remember – every click, every site you visit: someone (rather, something), somewhere is making a record of it, and it never gets deleted. So come on folks, let’s get serious about our digital footprint, and more serious about how we view the companies and products to which we entrust our personal – private – data.

PS: We recently launched a useful site with detailed recommendations for protecting your personal digital life. Here you can find the most important privacy settings for popular social networks, online services and operating systems. Have a look!

Cybernews: If Aramco had our Antidrone…; and honeypots to make IoT malware stop!

Hi folks!

Recently there was a Cyber News from the Dark Side item of oh-my-Gulf proportions. You’ll no doubt have heard about it, as it was all over the news for days. It was the drone attack on Saudi Aramco that took out millions of barrels of crude per day and caused hundreds of millions of dollars in damage.

Alas, I’m afraid this is only the beginning. Remember those drones bringing Heathrow – or was it Gatwick? – to a standstill a while back? Well this is just a natural progression. There’ll be more, for sure. In Saudi, the Houthis claimed responsibility, but both Saudi and the US blame Iran; Iran denies responsibility. In short – same old saber-rattling in the Middle East. But that’s not what I want to talk about here – that’s geopolitics, which we don’t do, remember? ) No, what I want to talk about is that, as the finger-pointing continues, in the meantime we’ve come up with a solution to stop drone attacks like this one on Aramco. Soooo, ladies and gents, I hereby introduce to the world… our new Antidrone!

So how does it work?

The device works out the coordinates of a moving object, a neural network determines whether it’s a drone, and, if it is, blocks the connection between it and its remote controller. As a result the drone either returns to where it was launched, or lands directly below the point where it was intercepted. The system can be stationary, or mobile – e.g., for installation on a motor vehicle.
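That detect-classify-respond loop can be sketched roughly as follows. This is a toy illustration only: the `Track` fields, the altitude heuristic standing in for the neural network, and the response strings are all invented for this sketch, not taken from the actual product.

```python
from dataclasses import dataclass

@dataclass
class Track:
    # Coordinates of a moving object picked up by the sensors
    x: float
    y: float
    z: float  # altitude, meters

def classify(track: Track) -> bool:
    """Stand-in for the neural-network classifier that decides whether the
    tracked object is a drone. A real system would feed video frames and
    laser-scan data into a trained model; here a toy altitude rule suffices."""
    return track.z < 150.0

def respond(track: Track) -> str:
    """If the object is classified as a drone, jam its control link;
    a jammed drone typically returns to its launch point or lands in place."""
    if classify(track):
        return "jam-control-link"
    return "ignore"  # birds, manned aircraft, etc.

print(respond(Track(x=40.0, y=12.0, z=80.0)))   # low, drone-like object
```

The key design point the post describes is that the countermeasure is non-destructive: the system severs the radio link rather than shooting anything down.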

The main focus of our antidrone is the protection of critically important infrastructure: airports, industrial facilities, and other property. The Saudi Aramco incident highlighted how urgently such technology is needed to prevent similar incidents, and it’s only going to become more so: in 2018 the world market for drones was estimated at $14 billion; by 2024 it’s forecast to be $43 billion!

Clearly the market for protection against maliciously-minded drones is going to grow too – fast. However, at the moment, our Antidrone is the only one on the Russian market that can detect objects by video using neural networks, and the first in the world to use laser scanning for tracking down the location of drones.

Read on…


If I had a dollar for every time I’ve been asked this question in 30 years…

Hi folks!

Can you guess what question I’m asked most of all during interviews and press conferences?

It started being asked back in the 1990s, quickly becoming the feared question that used to make me want to roll my eyes (I resisted the temptation:). Then after a few years I decided to simply embrace its inevitability and unavoidability, and started to improvise a bit and add extra detail to my answers. And still today, though my answers have been published and broadcast in probably all the mass media in the whole world – often more than once – I am asked it over and over, again and again. Of late though, it’s like I’ve come full circle: when I’m asked it I actually like to remember those days of long ago!

So, worked it out yet?

The question is: ‘What was the first virus you found?’ (plus questions relating to it, like when did I find it, how did I cure the computer it had infected, etc.).

Clearly, an important question, since, if it weren’t for that virus infecting my computer all those years ago, I might not have made a rather drastic career change; I might not have created the best antivirus in the world; and I might not have built one of the largest private companies in cybersecurity – and a lot more besides. So yes, a fateful role did that virus play – a virus that was among the early harbingers of what was to follow: billions of its ‘descendants’; then, later, cybercrime, cyberwarfare, cyber-espionage, and all the cyber-bad-guys behind it all – in every corner of the globe.

Anyway – the answer finally, perhaps?…

The virus’s name was Cascade.

But, why, suddenly, all the nostalgia about this virus?

Read on…

Threat Intelligence Portal: We need to go deeper.

I understand perfectly well that for 95% of you this post will be of no use at all. But for the remaining 5%, it has the potential to greatly simplify your working week (and many working weekends). In other words, we’ve some great news for cybersecurity pros – SOC teams, independent researchers, and inquisitive techies: the tools that our woodpeckers and GReAT guys use on a daily basis to keep churning out the best cyberthreat research in the world are now available to all of you, and free at that, with the lite version of our Threat Intelligence Portal. It’s sometimes called TIP for short, and after I’ve said a few words about it here, immediate bookmarking will be mandatory!

The Threat Intelligence Portal solves two main problems for today’s overstretched cybersecurity expert. First: ‘Which of these several hundred suspicious files should I choose first?’; second: ‘Ok, my antivirus says the file’s clean – what’s next?’

Unlike the ‘classics’ – Endpoint Security–class products, which return a concise Clean/Dangerous verdict – the analytic tools built into the Threat Intelligence Portal give detailed information about how suspicious a file is, and in which specific aspects. And not only files: hashes, IP addresses, and URLs can be thrown in too for good measure. All these items are quickly analyzed by our cloud, and the results on each are handed back on a silver platter: what’s bad about them (if anything), how rare an infection is, what known threats they even remotely resemble, what tools were used to create them, and so on. On top of that, executable files are run in our patented cloud sandbox, with the results made available in a couple of minutes.
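To give a feel for the difference between a bare Clean/Dangerous verdict and an enriched one, here’s a hypothetical sketch of that kind of lookup. None of this is the real Threat Intelligence Portal API: the function, the field names, and the stubbed response data are invented for illustration (the hash used is the well-known MD5 of the EICAR test file).

```python
def lookup(indicator: str, kind: str) -> dict:
    """Pretend cloud lookup for a file hash, IP address, or URL.
    A real portal queries the vendor's cloud; here we stub the response."""
    known_bad = {"44d88612fea8a8f36de82e1278abb02f": "EICAR-Test-File"}
    detection = known_bad.get(indicator)
    return {
        "indicator": indicator,
        "kind": kind,
        "verdict": "malicious" if detection else "clean",
        "detection_name": detection,                   # which known threat it resembles
        "prevalence": "common" if detection else None, # how rare the infection is
    }

report = lookup("44d88612fea8a8f36de82e1278abb02f", "hash")
print(report["verdict"])   # malicious
```

The point is the shape of the answer: not just a yes/no, but context an analyst can triage on.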

Read on…

i-Closed-architecture and the illusion of unhackability.

The end of August brought us quite a few news headlines around the world on the cybersecurity of mobile operating systems; rather – a lack of cybersecurity of mobile operating systems.

First up there was the news that iPhones have been getting attacked for a full two years (!) via a full 14 vulnerabilities (!) in iOS-based software. To be attacked, all a user had to do was visit one of several hacked websites – nothing more – and they’d never know anything about it.

But before all you Android heads start with the ‘nah nana nah nahs’ aimed at the Apple brethren, the very same week the iScandal broke, it was reported that Android devices had been targeted by (possibly) some of the same hackers who had been attacking iPhones.

It would seem that this news is just the next in a very long line of confirmations that, no matter the OS, there will always be vulnerabilities in it that can be found and exploited by certain folks – be they individuals, groups of individuals, or even countries (via their secret services). But there’s more to this news: it brings a return to the discussion of the pros and cons of closed-architecture operating systems like iOS.

Let me quote a tweet first that ideally describes the status of cybersecurity in the iEcosystem:

In this case Apple was real lucky: the attack was discovered by white-hat hackers at Google, who privately gave the iDevelopers all the details, who in turn bunged up the holes in their software, and half a year later (when most of their users had already updated their iOS) told the world about what had happened.

Question #1: How quickly would the company have been able to solve the problem if the information had gone public before the release of the patch?

Question #2: How many months – or years – earlier would these holes have been found by independent cybersecurity experts if they had been allowed access to the diagnostics of the operating system?

To be frank, what we’ve got here is a monopoly on research into iOS. Both the search for vulnerabilities and analysis of apps are made much more difficult by the excessive closed nature of the system. The result is almost complete silence on the security front in iOS. But that silence does not actually mean everything’s fine; it just means that no one actually knows what’s really going on in there – inside those very expensive shiny slabs of aluminum and glass. Even Apple itself…

This state of affairs allows Apple to continue to claim it has the most secure OS; of course it can – as no one knows what’s inside the box. Meanwhile, as time passes – and no independent experts can meaningfully analyze what is inside that box – hundreds of millions of users are left helpless, just waiting for the next wave of attacks to hit iOS.

Now, Apple, to its credit, does put a lot of time and money into increasing security and confidentiality with regard to its products and ecosystems on the whole. Thing is, no single company – no matter how large and powerful – can do what the whole world community of cybersecurity experts can do combined. Moreover, the most bandied-about argument for iOS being closed to third-party security solutions is that any access for independent developers to the system would represent a potential attack vector. But that is just nonsense!

Discovering vulnerabilities and flagging bad apps is possible with read-only diagnostic technologies, which can expose malicious anomalies through analysis of system events. Yet such apps are being firmly expelled from the App Store! I can’t see any good reason for this besides fear of losing the ‘iOS research monopoly’… oh, and of course the ability to continue pushing the message that iOS is the most secure mobile platform. And this is why, when iUsers ask me how they’re supposed to actually protect their iDevices, I have just one simple stock answer: all they can do is pray and hope – because the whole global cybersecurity community just ain’t around to help ).
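The ‘read-only diagnostics’ idea can be sketched along these lines: scan a stream of system events without modifying anything, and flag known-malicious sequences. The event names and the ‘suspicious pattern’ below are invented for illustration; a real tool would match far richer telemetry.

```python
# A known-bad sequence of system events (invented for this sketch)
SUSPICIOUS_SEQUENCE = ("jailbreak_check_bypassed", "unsigned_code_loaded")

def scan_events(events):
    """Read-only pass over a system event log; returns the indices at which
    the suspicious sequence begins. Nothing in the system is modified."""
    hits = []
    seq_len = len(SUSPICIOUS_SEQUENCE)
    for i in range(len(events) - seq_len + 1):
        if tuple(events[i:i + seq_len]) == SUSPICIOUS_SEQUENCE:
            hits.append(i)
    return hits

log = ["app_launch", "jailbreak_check_bypassed", "unsigned_code_loaded", "net_out"]
print(scan_events(log))   # [1]
```

Because such a scanner only reads events, it adds no new attack surface of its own – which is exactly why the ‘security’ argument for banning it rings hollow.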

Cyber-news: nuclear crypto mining.

Hi folks!

The i-news section is back with a bang after the summer holidays. Straightaway there’s some hot industrial cybersecurity news.

In case anybody missed my posts about how I spent this summer, here you go. Meanwhile, how some of the personnel at the South Ukraine Nuclear Power Plant spent their summer was reported in recent crime-related news. Ukraine’s Security Service (SBU) recently terminated cryptocurrency mining at the power plant’s restricted access facilities. This, erm, extra-curricular activity resulted in the leak of top-secret information about the power plant’s physical security. This is not only pretty depressing but also downright scary.


According to expert forecasts, the ICS market is set to reach $7 billion by 2024. Attacks on critical infrastructure are increasingly hitting the headlines. The recent Venezuela blackout, for example, immediately looked suspicious to me, and just a couple of days later it was announced that it was caused by a cyberattack.

This July, in collaboration with ARC Advisory Group, we published a lengthy report on the state of things in the industrial cybersecurity sphere. It’s a good read, with lots of interesting stuff in there. Here is a number for you to ponder on: in 2018, 52% of industrial cybersecurity incidents were caused by staff errors, or, in other words, because of the notorious human factor. Behind this number is a whole host of problems, including a shortage of professionals to fill key jobs, a lack of technical awareness among employees, and insufficient cybersecurity budgets. Go ahead and read the report – it’s free :)

Attention all those interested in industrial cybersecurity: you still have a few days (till August 30) to sign up for our annual Kaspersky Industrial Cybersecurity Conference 2019. This year, it’s being held from September 18-20 in Sochi, Russia. There’ll be presentations by over 30 international ICS experts, including yours truly. So, see you soon in sunny Sochi to talk about some serious problems and ways to deal with them!

A honeytrap for malware.

I haven’t seen the sixth Mission Impossible movie – and I don’t think I will. I sat through the fifth – in a suitably zombified state (returning home on a long-haul flight after a tough week’s business) – but only because one scene in it was shot in our shiny new modern London office. And that was one Mission Impossible installment too many, really. Nope – not for me. Slap, bang, smash, crash, pow, wow. Oof. Nah, I prefer something a little more challenging, thought-provoking and just plain interesting. After all, I have precious little time as it is!

I really am giving Tom Cruise and Co. a major dissing here, aren’t I? But hold on. I have to give them their due for at least one scene done really rather well (i.e., thought provoking and plain interesting!). It’s the one where the good guys need to get a bad guy to rat on his bad-guy colleagues, or something like that. So they set up a fake environment in a ‘hospital’ with ‘CNN’ on the ‘TV’ and have ‘CNN’ broadcast a news report about atomic Armageddon. Suitably satisfied his apocalyptic manifesto had been broadcast to the world, the baddie gives up his pals (or was it a login code?) in the deal arranged with his interrogators. Oops. Here’s the clip.

Why do I like this scene so much? Because, actually, it demonstrates really well one of the methods of detecting… unseen-before cyberattacks! There are in fact many such methods – they vary depending on area of application, effectiveness, resource use, and other parameters (I write about them regularly here) – but there is one that always seems to stand out: emulation (about which I’ve also written plenty here before).

Like in the film, the emulator launches the object being investigated in an isolated, artificial environment, which encourages it to reveal its maliciousness.

But there’s one serious downside to such an approach – the very fact that the environment is artificial. The emulator does its best to make that environment as close as possible to a real operating system, but ever-smarter malware still manages to tell it apart from the real thing. The emulator then observes how the malware recognized it, regroups, and improves its emulation – and so on, in a never-ending cycle that regularly opens a window of vulnerability on a protected computer. The fundamental problem is that, no matter how hard the emulator tries to look like a real OS, it’s never quite the spitting image of one.

On the other hand, there’s another solution to the task of behavioral analysis of suspicious objects – analysis… on a real operating system – one on a virtual machine! Well why not? If the emulator never quite fully cracks it, let a real – albeit virtual – machine have a go. It would be the ideal ‘interrogation’ – conducted in a real environment, not an artificial one, but with no real negative consequences.
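In essence, behavioral analysis – whether in an emulator or on a virtual machine – boils down to detonating the sample in isolation, recording what it does, and scoring the observed behavior. Here’s a toy sketch of that scoring step; the event names and weights are invented for illustration, and a real sandbox hooks the OS and works with far richer data.

```python
# Invented behavior indicators and weights; evasion attempts (like probing
# for a sandbox) are themselves treated as suspicious.
MALICIOUS_BEHAVIORS = {
    "encrypts_user_files": 10,
    "disables_backups": 8,
    "checks_for_sandbox": 3,
}

def analyze(observed_events, threshold=10):
    """Score the behaviors recorded while the sample ran in isolation,
    and return a verdict once the score crosses the threshold."""
    score = sum(MALICIOUS_BEHAVIORS.get(e, 0) for e in observed_events)
    verdict = "malicious" if score >= threshold else "clean"
    return verdict, score

verdict, score = analyze(["checks_for_sandbox", "encrypts_user_files"])
print(verdict, score)   # malicious 13
```

The advantage of the VM approach described above is that the behaviors are recorded on a real OS, so the sample has far less chance of spotting the trap and playing innocent.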

Read on…

We SOCked it 2 ’em – and passed the SOC 2 audit!

Last year I told you how, as part of our Global Transparency Initiative, we had plans to undergo an independent audit to receive SOC 2 certification. Well, finally, we can announce that we did undergo this third-party audit… and passed! Hurray! And it wasn’t easy: it took a lot of work by a great many of our K-folks. But now that’s all behind us, and I’m very proud that we’ve done it!

So what does this mysterious SOC abbreviation stand for, and (whatever it may be) why is it needed?

Ok. The abbreviation stands for Service Organization Controls, and SOC 2 is a report based on the ‘Trust Services principles and criteria’ of the American Institute of CPAs (AICPA) [CPA: Certified Public Accountants], which evaluates an organization’s information systems as relevant to security, availability, processing integrity, and confidentiality/privacy. Put another way, this is a globally recognized standard for audits of information risk control systems. Its main aim is to provide information on how effective a company’s control mechanisms are (so other companies can assess any risks associated with working therewith).

We decided to seek SOC 2 to be able to confirm the reliability of our products and prove to our customers and partners that our internal processes correspond to the highest international standards and that we’ve nothing to hide. The audit was conducted for us by one of the Big Four accounting firms (I can’t tell you which, as per the respective contract’s terms and conditions, in case you were wondering). Over the past year, various K-departments – R&D, IT, Information Security, and our internal audit team – have been working closely with the auditors, sharing with them all the information they’ve needed.

The final report, which we received this week, confirms the soundness of the internal control mechanisms used for our automatic AV database updates, and also that the process of developing and launching our antivirus databases is protected against unauthorized access. Hurray!

And if you’re a customer, partner or state regulator, please get in touch if you’d like to see a copy of the report.

That’s all for today folks, but I’ll be back tomorrow with a quick rewind back to STARMUS and some more detail of the presentations thereat.

Meanwhile, privyet, from…

Cyber-news from the dark side – cyber-hypocrisy, an eye for a Mirai, GCHQ-watching-you, and keeping BlueKeep at bay.

Hi folks!

Let’s kick off with some good news….

‘Most tested, most awarded’ – still ).

Just recently, the respected independent test lab AV-Comparatives released the results of its annual survey. Taking place at the end of 2018, the survey, er, surveyed 3000 respondents worldwide. Of the 19 questions asked of each respondent, one was ‘Which desktop anti-malware security solution do you primarily use?’. And guess which brand came top in the answers for Europe, Asia, and South/Central America? Yes: K! In North America we came second (and I’m sure that’s only temporary). In addition, in Europe we were chosen as the most frequently used security solution for smartphones. We’re also at the top of the list of companies whose products users most often ask to be tested, both in the ‘home’ segment and among antivirus products for business. Great! We like tests, and you can see why! Btw – here’s more detail on the independent tests and reviews that our products undergo.

“Thou hypocrite, first cast out the beam out of thine own eye;
and then shalt thou see clearly to cast the speck out of thy brother’s eye.”
Matthew 7:5

In May, yet another backdoor with features reeeaaal useful for espionage was discovered. In whose tech was the backdoor found? Russia’s? China’s? Actually – Cisco‘s (again)! Was there a hullabaloo about it in the media? Incessant front-page headlines and discussion about threats to national security? Talk of banning Cisco equipment outside the U.S., etc.? Oh, what, you missed it too?! Yet at the same time, Huawei’s international lynching is not only in full swing – it’s in full swing without such backdoors, and without any convincing evidence thereof whatsoever.


Read on…