Independent AV testing in 2014: interesting results!

At KL we’re always at it. Improving ourselves, that is. Our research, our development, our products, our partnerships, our… yes – all that. But for us all to keep improving – and in the right direction – we all need to work toward one overarching goal, or mission. Enter the mission statement…

Ours is saving the world from cyber-menaces of all types. But how well do we do this? After all, a lot of, if not all, AV vendors have similar mission statements. So what we – and, more importantly, the user – need to know is precisely how well we perform in fulfilling our mission compared to all the rest…

To do this, various metrics are used. And one of the most important is the expert testing of the quality of products and technologies by different independent testing labs. It’s simple really: the better the result on this or that criterion – or all of them – the better our tech is at combatting cyber-disease – to objectively better save the world :).

Thing is, out of all the hundreds of tests by the many independent testing centers around the world, which should be used? I mean, how can all the data be sorted and refined to leave hard, meaningful – and easy to understand and compare – results? There’s also the problem of there being not only hundreds of testing labs but also hundreds of AV vendors, so, again, how can it all be sieved – to separate the wheat from the chaff and then compare just the best wheat? And there’s one more problem (it’s actually not that complex, I promise – you’ll see:) – that of biased or selective test results, which don’t give the full picture – the stuff of advertising and marketing since year dot.

Well guess what. Some years back we devised the following simple formula for accessible, accurate, honest AV evaluation: the Top-3 Rating Matrix!

So how’s it work?

First, we need to make sure we include the results of all well-known and respected, fully independent test labs in their comparative anti-malware protection investigations over the given period of time.

Second, we need to include all the different types of tests run by those chosen key labs – and on all participating vendors.

Third, we need to take into account (i) the total number of tests in which each vendor took part; (ii) the % of ‘gold medals’; and (iii) the % of top-3 places.

What we get is simplicity, transparency, meaningful sifting, and no skewed ‘test marketing’ (alas, there is such a thing). Of course it would be possible to add into the matrix another, say, 25,000 parameters – just for that extra 0.025% of objectivity, but that would only be for the satisfaction of technological narcissists and other geek-nerds, and we’d definitely lose the average user… and maybe the not-so-average one too.

To summarize: we take a specific period, take into account all the tests of all the best test labs (on all the main vendors), and don’t miss a thing (like poor results in this or that test) – and that goes for KL of course too.
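
Just to make the counting concrete, here’s a minimal sketch in Python of how that tallying might look. The data layout and field names (vendor, top3, gold) are my own illustrative assumptions – the real methodology lives in the video and document linked further down.

```python
from collections import defaultdict

# Hypothetical per-test records: one entry per (vendor, test) pair over the
# chosen period, noting whether the vendor finished in the top 3 and whether
# it took the gold medal. Illustrative data only - not real results.
results = [
    {"vendor": "Vendor A", "top3": True,  "gold": True},
    {"vendor": "Vendor A", "top3": True,  "gold": False},
    {"vendor": "Vendor B", "top3": False, "gold": False},
]

def top3_rating_matrix(results):
    """Per vendor: number of tests entered, % of gold medals, % of top-3 places."""
    stats = defaultdict(lambda: {"tests": 0, "top3": 0, "gold": 0})
    for r in results:
        s = stats[r["vendor"]]
        s["tests"] += 1
        s["top3"] += int(r["top3"])
        s["gold"] += int(r["gold"])
    return {
        vendor: {
            "tests": s["tests"],
            "top3_pct": 100.0 * s["top3"] / s["tests"],
            "gold_pct": 100.0 * s["gold"] / s["tests"],
        }
        for vendor, s in stats.items()
    }

print(top3_rating_matrix(results))
```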

All righty. Theory over. Now let’s apply that methodology to the real world; specifically – the real world in 2014.

First, a few tech details and disclaimers for those of the geeky-nerdy persuasion:

  • Considered in 2014 were the comparative studies of eight independent testing labs – each with years of experience, the requisite technological set-up (I saw some of it for myself), outstanding industry coverage of both the vendors and the different protective technologies, and full membership of AMTSO: AV-Comparatives, AV-Test, Anti-malware, Dennis Technology Labs, MRG EFFITAS, NSS Labs, PC Security Labs and Virus Bulletin. A detailed explanation of the methodology can be found in this video and in this document.
  • Only vendors taking part in 35% or more of the labs’ tests were taken into account. Otherwise it would be possible to get a ‘winner’ that did well in just a few tests but which wouldn’t have done well consistently over many tests – had it taken part in them (so here’s where we filter out the faux-test marketing; a rough sketch of this filter follows just below).
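
A rough sketch of that participation filter, building on the hypothetical per-vendor stats from the earlier snippet (the 35% threshold is the only number taken from the text; everything else is assumed):

```python
def filter_by_participation(stats, total_tests, threshold=0.35):
    """Drop vendors that entered fewer than `threshold` of all tests run by
    the chosen labs over the period (35% in the 2014 evaluation)."""
    return {
        vendor: s
        for vendor, s in stats.items()
        if s["tests"] >= threshold * total_tests
    }
```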

Soooo… analyzing the results of the tests in 2014, we get……..

….Drums roll….

….mouths are cupped….

….breath is bated….

……..we get this!:

Independent testing 2014: the results

On the x-axis is the number of tests in which a vendor took part. On the y-axis is the percentage of top-3 places each vendor achieved out of all the tests it entered. The diameter of the circles shows the number of gold medals.
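
(For the code-inclined: below is one way such a bubble chart could be drawn – a minimal matplotlib sketch assuming the per-vendor metrics from the earlier snippets; the numbers are made up, not the 2014 results.)

```python
import matplotlib.pyplot as plt

# Illustrative, made-up metrics in the shape produced by top3_rating_matrix().
matrix = {
    "Vendor A": {"tests": 70, "top3_pct": 80.0, "gold_pct": 30.0},
    "Vendor B": {"tests": 55, "top3_pct": 45.0, "gold_pct": 10.0},
    "Vendor C": {"tests": 40, "top3_pct": 25.0, "gold_pct": 5.0},
}

fig, ax = plt.subplots()
for vendor, m in matrix.items():
    gold_medals = m["gold_pct"] / 100.0 * m["tests"]   # recover the medal count
    ax.scatter(m["tests"], m["top3_pct"],
               s=100 + 40 * gold_medals,                # circle size tracks gold medals
               alpha=0.6)
    ax.annotate(vendor, (m["tests"], m["top3_pct"]))

ax.set_xlabel("Number of tests entered")
ax.set_ylabel("Top-3 places, % of tests entered")
ax.set_title("Independent testing 2014 (illustrative data)")
plt.show()
```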

Well, well, well. Look at that folks.

(Briefly – for the naturally cynical: before you say ‘ah, but of course’, have another read of the methodology as I describe it above. This isn’t pseudo test marketing. I repeat: this is not pseudo test marketing!)

Those gold-medal winning results leave me just one thing to do: Congratulate our R&D and all the rest of the KLers who helped us win the championship! Also to congratulate our partners, customers, contractors, friends, girlfriends, all family and relatives, my producer and director, and of course god. I congratulate you all on this outstanding, deserved and striking victory! Keep up the good work, and keep on saving the world! Hurray!

But that’s not all! Still to come: the dessert – for the more discerning connoisseurs of the AV industry…

The method I’ve described above has been applied for three years already. It works. One reason it works so well is its ability to deal with the ‘all washing powder brands are the same’ critique. Yep, it turns out the opinion that all AV is just about the same in terms of the level of protection and functionality is still alive and kicking! So here are some curious observations to come out of this evaluation:

  • There are now more tests (on average, vendors took part in 50 tests in 2013 and 55 in 2014), so winning has become more difficult (the average share of top-3 places fell from 32% to 29%).
  • Three vendors fell considerably in their showings in the top-3: Symantec (-28%), F-Secure (-25%), and Avast (-19%).
  • Three other vendors did just the opposite: top-3 showings increasing for Trend Micro (+16%), ESET (+12%), and Panda (+11%).
  • Curiously, the positions of vendors relative to one another don’t change that much. Only two vendors improved their top-3 placings by more than five percentage points. This shows that the quality of protection is determined not only by signatures but by years spent carefully investing in and building up innovative tech; stealing signatures gets you nowhere.
  • And for the real AV-test gourmets: the first seven top-3 winners took twice as many prizes as the remaining 13 (240 top-3 hits against 115) and took 63% of all gold medals. Oh, and 19% of all gold medals were taken by one single vendor – guess which? :)

Finally… the main conclusion:

Tests are one of the most important criteria for choosing protection. After all, most users will sooner believe an authoritative independent source than a pretty booklet, which, let’s face it, always claims to be the best. All vendors understand this well, but each deals with it according to its technological ability. Some fret a lot about quality, constantly perfecting their protection, while others manipulate results: a single certificate of compliance in a year, the medal proudly fastened onto the lapel, and they think they’re contenders. No. Sorry. Not so. We need the whole picture, please, to be able to judge not only by selected test results but also by the degree of participation.

Therefore, users are recommended not to take choosing protection lightly – and not to allow the wool to be pulled over their eyes.

That’s it for this week folks! Bye for now…
