Wednesday, November 26, 2008

the benefits of scanning in the cloud

now i know a good chunk of the general security industry has been pooh-poohing the cloud recently, and normally av is the security industry's favourite whipping boy, so maybe this is just a case of two bad tastes that taste bad together... that being said, the concept has significant promise to take back several tactical advantages that av hasn't had in, well, forever...
  1. signature generation to client update time is reduced/minimized

    usually a good thing, and this time it doesn't come at the expense of q/a because rather than cutting a corner that affects quality they're cutting out the step of updating the client in the first place... instead they'll be updating the cloud, which they have direct control over (unlike the client), and the client no longer needs to detect that it's out of date and try to update itself (and hope updating hasn't foolishly been disabled)... this potentially gives vendors time to do more q/a on their signatures and so reduce the bad signature release rate (though it's already pretty low)... a sketch of the resulting lookup flow follows below...
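
    (a minimal sketch, in python, of what that client-side lookup could look like... the endpoint url, the verdict strings, and the injected query callback are hypothetical stand-ins for a vendor's real infrastructure, not any actual api...)

      import hashlib

      CLOUD_LOOKUP_URL = "https://av-vendor.example/lookup"  # hypothetical endpoint

      def file_sha256(path):
          """hash the file locally; only the digest leaves the machine."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(65536), b""):
                  h.update(chunk)
          return h.hexdigest()

      def cloud_verdict(path, query=None):
          """ask the cloud for a verdict on the file's hash... signature
          updates happen server-side, so the client never updates itself."""
          digest = file_sha256(path)
          if query is None:
              # stand-in for an https request to CLOUD_LOOKUP_URL;
              # a default of "unknown" keeps the sketch runnable offline
              query = lambda d: "unknown"
          return query(digest)  # e.g. "clean", "malware", or "unknown"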

  2. under-reporting of new samples is reduced/minimized

    under-reporting is the single biggest advantage that targeted attacks have... without the law of large numbers favouring someone noticing there's something fishy about a particular file, that file never gets submitted for analysis and nobody gets signature-based detection capabilities for it... with cloud-based scanning, however, just about everything (except those things the user feels are too sensitive to transmit, and as such are probably not malware) should get submitted to the cloud, so that's no longer an issue... a sketch of that submission logic follows below...
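
    (a rough sketch of that submission logic... the known_hashes set, the submit callback, and the sensitivity patterns are all invented placeholders...)

      import fnmatch

      # user-configured opt-outs for things too sensitive to transmit
      SENSITIVE_PATTERNS = ["/home/*/documents/*", "*.key"]

      def is_sensitive(path):
          return any(fnmatch.fnmatch(path, pat) for pat in SENSITIVE_PATTERNS)

      def maybe_submit(path, digest, known_hashes, submit):
          """submit the full sample whenever its hash is new to the cloud."""
          if digest in known_hashes:
              return "already known"
          if is_sensitive(path):
              return "withheld by user"   # and as such probably not malware
          submit(path)                    # upload the sample for analysis
          return "submitted"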

  3. greater situational awareness/intelligence than ever before

    data on detections can be correlated, analyzed, etc., providing the potential for virtually every client to become a sensor in a giant honeynet... geographic and demographic trends/patterns in the attacks should be easier to spot with so much more real-world data, and those are things that can be used to better predict who's at increased risk or maybe even help pinpoint the source(s) of the attacks... a toy example of that kind of aggregation follows below...
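
    (a toy illustration of that kind of aggregation... the event format and the threshold are made up for the example...)

      from collections import Counter

      def regional_spikes(events, threshold=100):
          """events: iterable of (region, malware_family) detection reports
          flowing in from clients acting as sensors."""
          counts = Counter(events)
          # flag (region, family) pairs common enough to suggest a campaign
          return [pair for pair, n in counts.items() if n >= threshold]

      # regional_spikes([("de", "zbot")] * 150 + [("ca", "zbot")] * 3)
      # -> [("de", "zbot")], hinting at a geographically focused attack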

  4. conventional malware q/a should be entirely thwarted

    the ease with which current malware evades known-malware scanning is on the verge of becoming history... the basic evasion methodology is to iteratively produce samples and run them past a slew of scanners to see whether each one is different enough to avoid them all (or enough of them to be valuable)... this worked in the past because malware authors could do it without anti-malware vendors being any the wiser... with a cloud-based scanner you can't fully scan a new malware sample without the vendor getting a copy (either the sample is submitted to the servers controlled by the vendor, or it isn't submitted and the malware author gets incomplete/inaccurate detectability results), and thus you let the cat out of the bag about not only what your new malware looks like but possibly also what the heck you're up to ("hello, police, i'd like to report a large number of new malicious programs being generated at the following IP address")... a sketch of the server side catching this follows below...
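
    (a back-of-the-envelope sketch of the server side noticing a q/a farm... a single source submitting a steady stream of never-before-seen samples is exactly what iterating variants against the scanner looks like... the threshold and the bookkeeping are invented for illustration...)

      from collections import defaultdict

      NOVELTY_THRESHOLD = 20             # new samples per source before flagging

      seen_hashes = set()                # everything the cloud has ever seen
      novel_by_source = defaultdict(int) # source ip -> count of new samples

      def record_submission(source_ip, digest):
          """track submissions; flag sources that look like a q/a farm."""
          if digest not in seen_hashes:
              seen_hashes.add(digest)
              novel_by_source[source_ip] += 1
          if novel_by_source[source_ip] >= NOVELTY_THRESHOLD:
              return "suspicious: possible malware q/a at " + source_ip
          return "ok"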

  5. scanner reverse engineering is almost completely nullified

    before what we now know of as malware q/a existed, the more clever malware authors were believed to have reverse engineered various scanners looking for information they could use to make sure their malware would better avoid detection... even today, those who deal in vulnerabilities (whether for the betterment of security or for malicious gain) will analyze scanners looking for flaws an attacker could take advantage of... with the scanning engine no longer residing on the client computer, the only kind of analysis anyone without source code access can do is black-box analysis (and if a botnet can detect attacks and protect itself, a cloud-based scanner should conceivably be able to as well)... in this way the scanning algorithm becomes as inscrutable as any server-side polymorphic engine... a sketch of that verdict-only interaction follows below...
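
    (a minimal sketch of the verdict-only interaction that leaves attackers with nothing but black-box probing... the rate limit, the scan callback, and the per-client bookkeeping are placeholders...)

      import time
      from collections import defaultdict

      QUERY_WINDOW, QUERY_LIMIT = 60.0, 30   # invented limit: 30 queries/min

      recent_queries = defaultdict(list)     # client id -> query timestamps

      def handle_query(client_id, sample, scan):
          """return a bare verdict; the engine never leaves the server."""
          now = time.time()
          stamps = [t for t in recent_queries[client_id] if now - t < QUERY_WINDOW]
          stamps.append(now)
          recent_queries[client_id] = stamps
          if len(stamps) > QUERY_LIMIT:
              return "throttled"             # probing gets starved of feedback
          return "malware" if scan(sample) else "clean"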
