Monday, April 28, 2008

adapting to malware quality assurance

as previously mentioned, malware quality assurance is an attack against heuristics... the idea is to make the malware sufficiently dissimilar to other malware that most heuristic engines find little of anything to be suspicious about...

and unfortunately, it works... at the time of writing, av-comparatives.org's latest retrospective tests show that most of the results are grouped around a 40% detection rate... there are some outliers (in both directions) but 40% seems to be the general ballpark for most products when it comes to detecting new/unknown malware...

that's not really a score to be proud of... maybe once upon a time, when new malware was comparatively rare, missing the remaining 60% of those few new pieces of malware for the small window of opportunity during which they remained new/unknown wasn't really a big deal... at the current rate of new malware creation, a 60% miss rate is a big deal... users need to adapt and vendors need to adapt too...

users can adapt by taking any number of steps to prevent new/unknown programs (malware or otherwise) from executing or getting significant system privileges... this includes things like running as a limited user rather than an administrator, using behaviour blocking/HIPS software, using application whitelisting, using sandboxing, etc...
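one of those techniques, application whitelisting, comes down to default-deny: instead of trying to recognize what's bad, only let known-good programs run... a toy sketch of the idea in python (the hash-based allow list and function names here are my own illustration, not any particular product's implementation):

```python
import hashlib

# hypothetical allow list mapping sha-256 hashes to program names...
# in a real deployment this would be populated by an administrator
ALLOWED_HASHES = {}

def sha256_of(path):
    """hash a file's contents in chunks so large binaries don't eat memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """default-deny: new/unknown programs (malware or otherwise) are blocked."""
    return sha256_of(path) in ALLOWED_HASHES
```

the point is that an unknown program fails the check no matter how carefully its author tuned it against heuristic engines...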

vendors can adapt by making those techniques/technologies easier to find, easier to understand, and easier to use... but there's one more thing... at one point i thought malware qa was pretty much the last nail in the coffin of heuristics but it's occurred to me that there's one thing vendors might be able to try to breathe some life back into heuristics...

polymorphism (sort of)... by which i mean changing/tweaking the heuristic algorithm frequently enough so as to make the results of malware qa less useful... the premise of malware qa is that if the malware is undetected right now it will stay undetected until the malware gets found by someone somewhere, then gets sent to an anti-malware vendor for analysis, gets analyzed, gets added to the signature database, and that signature database update gets distributed to the potential victim population... create new malware fast enough and that window of opportunity, small though it may be in the ideal defender case, is still big enough... if instead that window of opportunity was significantly less predictable, malware quality assurance wouldn't offer the same kind of assurance it does now...

that's a high-level thought, though... i don't pretend to know how feasible it is to implement (i'm hoping that some engines have parameters originally intended to adjust their sensitivity to various conditions and that those could be randomized)... i do know that this would likely increase the number of false alarms, and probably worse still make those false alarms equally unpredictable, so maybe it's a bad idea but it's an idea nonetheless and i offer it for free to anyone who wants to try it...
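to make the idea concrete, here's a minimal sketch in python... everything in it (the indicator names, weights, thresholds, perturbation ranges) is hypothetical and purely illustrative, not taken from any real engine... it just shows how a sample tuned to score under a fixed baseline threshold can still trip some randomly perturbed variants of the same heuristic:

```python
import random

# hypothetical heuristic indicators and base weights -- illustrative only
BASE_WEIGHTS = {
    "packed_executable": 3.0,
    "writes_to_system_dir": 2.0,
    "registers_autostart": 2.0,
    "no_version_info": 1.0,
}
BASE_THRESHOLD = 5.0

def randomized_engine(seed):
    """return a (weights, threshold) pair perturbed for one 'update cycle'."""
    rng = random.Random(seed)
    weights = {k: w * rng.uniform(0.7, 1.3) for k, w in BASE_WEIGHTS.items()}
    threshold = BASE_THRESHOLD * rng.uniform(0.85, 1.15)
    return weights, threshold

def is_suspicious(indicators, weights, threshold):
    """flag a sample when its weighted indicator score reaches the threshold."""
    score = sum(weights[i] for i in indicators if i in weights)
    return score >= threshold

# a sample that malware qa tuned to score just under the baseline
# threshold (2.0 + 2.0 = 4.0 < 5.0), so the unmodified engine misses it
tuned_sample = ["writes_to_system_dir", "registers_autostart"]
```

against the fixed baseline the tuned sample is always missed, but across many perturbed engines some variants catch it and some don't... which is exactly the unpredictability that would undermine the qa step...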
