Anti-virus defences even shakier than feared

Security firms attack 'flawed' tests


Updated Anti-virus technologies may be even more ineffective than feared, if a controversial new study is to be believed.

A study by web intelligence firm Cyveillance found that, on average, vendors detect less than 19 per cent of malware attacks on the first day malware appears in the wild. Even after 30 days, detection rates improved to just 61.7 per cent, on average.

Anti-virus vendors have criticised the methodology of the study as hopelessly flawed, not least because it only looked at signature-based detection of malware.

Cyveillance argues its research shows that users ought to practise safe computing - such as avoiding unknown or disreputable websites and increasing security settings on their web browser - as a way of minimising security risks, rather than relying on up-to-date anti-virus to protect them.

The security research outfit criticises "signature-based" anti-virus technologies, which is fair enough but rather ignores the point that vendors long ago adopted generic detection of malware strains and are introducing crowd-based architectures as a means of providing protection against the ever-increasing volume of malware threats.
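For illustration only, here is a minimal sketch (in Python, with a made-up blocklist) of what purely signature-based detection amounts to: hash the file and look the fingerprint up in a list of known-bad hashes. Anything not already on the list passes straight through.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 fingerprints of known malware samples.
# Real products ship far larger, frequently updated signature databases.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder fingerprint, not a real sample
}

def signature_scan(path: Path) -> bool:
    """Return True if the file's hash matches a known-malware signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

A sample only trips this check once its exact fingerprint has been seen, analysed and pushed out in an update, which is why day-one detection rates for purely signature-based scanning are so poor.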

Luis Corrons, technical director of Panda Security, which was not tested as part of the research, said that Cyveillance had only tested one component of anti-malware protection. The tests ignored everything except anti-virus signatures, despite the fact that these are only one layer of the protection offered by modern security/anti-malware suites.

"As far as I’ve seen, they have only tested static signature detection capabilities, Corrons told El Reg. "This is the very first technology ever implemented in an antivirus."

"It is good to detect known malware, but it is clearly not enough, and every serious antivirus vendor knows that, and even some of us recognize it in public, Panda has been saying this for years. That’s why most of the major vendors have been developing proactive technologies: behavior analysis/blocking, cloud-based detections, etc.," Corrons concluded.

David Harley, senior research fellow at anti-virus firm Eset, which was tested, said Cyveillance's conclusion that anti-virus solutions alone do not adequately protect individuals and enterprises is reasonable, but that its test methodology is flawed. For one thing, Cyveillance looked at just 1,708 samples, a minute fraction of the tens of thousands of malicious binaries that pass through anti-virus labs every day.

"You can't convincingly claim statistical precision with a data set of 1,708 samples, Harley explained. "You certainly can't rank comparative performance meaningfully on that basis unless you can demonstrate accurate weighting for prevalence, and there's no indication of that in the report. Harley said Cyveillance may have looked at on-access scanner performance and failed to carry out "true dynamic or whole product testing", repeating a problem of other tests where "testers draw big conclusions from tests that only look at a single detection behaviour".   "Anti-virus products miss a lot of malware.However, the exact number or proportion of threats missed is a bit harder to calculate (or even guess at) than Cyveillance seems to think," Harley concluded. ®
