Thursday, August 24, 2006

ethics of virus creation

one thing that the current controversy over the consumer reports av test brings to light is that people really don't understand the ethical considerations involved...

if you create a virus and no one ever hears about it or encounters it there is no ethical problem... it's analogous to that old question of if a tree falls in the forest and no one is around to hear it, does it make a sound?... sound being merely an interpretation of vibrations in a medium, without anyone or anything to measure and/or interpret those vibrations there is no sound (of course this assumes an incredibly empty forest)...

if someone simply hears about it then there is a problem, as it lends credence to the argument (put forth by those who are not even slightly careful or responsible with the viruses they create) that what they're doing is OK...

if they encounter it as an advertised virus sample (ie. something to be handled with care to avoid the live infection scenario) then it is up to that person to treat it carefully and responsibly... and if that person should share it with someone else that person must also treat it carefully and responsibly or else it might escape... the more people it gets shared with the greater the likelihood of its escape, thus creating the live infection scenario... this is a problem because of the impact such an escape can have and the fact that once you share it with someone you basically have no control over whether that escape happens or not...

if someone does encounter it as a live infection then there is a problem, because viruses cost time and money to get rid of, they damage the integrity of all they infect, and they can (either intentionally or not) destroy data and render services and/or resources inaccessible/inoperable... worse still, live infections can live on for a long time after their release - unlike an exploit that stops being a threat after the people using it move on to something else, a virus will just keep going and going without intentional assistance... the monkey virus, for example, was in the wild for over a decade despite the fact that anti-virus software was able to detect and remove it for almost that entire time...

what, then, if the purpose for making the virus outweighs the risks? vesselin bontchev covered this hypothetical condition in his paper Are "Good" Viruses Still a Bad Idea? and the conclusion was that no matter what the virus was intended to do, the function could just as easily be performed by non-replicative code and thus without many of the risks inherent in self-replicative code...

vesselin didn't specifically cover anti-virus testing as one of his examples in that paper, but then again the viruses used in virus detection tests don't actually carry out any function good or bad, they just sit there waiting to either be detected or not...

obviously viruses are needed in order to carry out virus detection testing, but do we need to create new viruses for the task? viruses are being created by the bad guys at an ever increasing rate and there's no sign of that ever stopping so there's certainly no shortage of viruses to use, not even if you're constraining yourself to just look at new viruses... furthermore, while the risk of virus escape is pretty much the same whether it's a virus captured from the wild or a virus you create in a lab, the implications of the escape of a virus you create are much worse... accidental escape of a real world virus means you're responsible for contributing to the spread of an existing threat while accidental escape of a virus you create means you're responsible for the spread and creation of a brand new threat... that is the very definition of being part of the problem rather than being part of the solution...

of course critics will say "but you can take precautions against accidental release so it's really not an issue"... this is foolish, preventative measures are never 100% effective - the av industry, in spite of the extreme care it takes, has still had accidents... never mind the fact that proper virus detection tests don't use first generation samples, they often replicate the samples to 3 or more generations to ensure they actually are viruses - a high risk activity... add to that the fact that for the test to be good science it has to be exactly reproducible, which means other people have to be able to get samples of the viruses - at which point the precautions would be moot because the viruses would no longer be under your complete control and never would be again...

given that and the fact that tests using slightly old anti-virus products against viruses that appeared after those products were released give essentially the same results (ie. the av products don't fare well at all) as tests using viruses you create in a lab, we once again have the same principle that vesselin's conclusion gave us - that making viruses represents an unnecessary risk, and not just a risk to one's own computers or data but a risk for society at large... creating new risks and deciding unilaterally that society should be subject to those risks so that you can achieve some goal (especially when that goal can be achieved without those risks) is quite clearly unethical...

taking a small.dog for a walk

earlier today (well, ok, my clock says it was technically yesterday) i was perusing through the various RSS feeds in my blogroll when i happened upon this article on f-secure's blog about a downloader trojan called "small"...

in and of itself that's not really interesting to me, but what i did find a bit novel was that its variant identifier spelled out the word "dog"...

variant id's, for those who don't know, are a kind of alphabetic number that represents something sort of similar to a version number... for example, the first instance of virus XYZ would be XYZ.A and the second one would be XYZ.B and so on until XYZ.Z at which point the following one would be XYZ.AA and it would continue like that...

that means small.dog is the 3101st variant in the (not so) small family... that's a lot of variants...
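the scheme described above is just bijective base-26 (there's no zero digit, which is why "z" rolls over to "aa"), so it's easy to check the arithmetic with a quick python sketch - the function names here are my own, purely for illustration:

```python
def variant_id_to_number(vid):
    """convert a variant id like 'dog' to its ordinal position
    (bijective base 26: a=1 ... z=26, then aa=27, and so on)."""
    n = 0
    for ch in vid.lower():
        n = n * 26 + (ord(ch) - ord('a') + 1)
    return n

def number_to_variant_id(n):
    """convert an ordinal position back to a variant id."""
    letters = []
    while n > 0:
        n, rem = divmod(n - 1, 26)  # shift to 0-based before taking each digit
        letters.append(chr(ord('a') + rem))
    return ''.join(reversed(letters))
```

running `variant_id_to_number("dog")` gives 3101, which is where that count comes from...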

back when the decision to use a base 26 number system for variant id's was made i don't think they envisioned any one family having quite so many variants in it - i mean, there was stoned.empire.monkey.a and stoned.empire.monkey.b, but if there were stoned.empire.monkey.dog we'd have a bit of a confusing puzzle on our hands because aside from being at the end there's nothing to indicate to a layperson that it's not part of the given name... and just imagine what other words are possible - it should be obvious that since we're at small.dog right now we passed small.ass a long time ago and small.dick is yet to come...

which, after all that meandering, brings me to my point... although the anti-virus industry takes a certain care to choose appropriate names, to some extent the variant id system can be gamed to produce inappropriate id's (and most people won't see the distinction between the name and the id in such cases) by brute force...

dunno if it matters at this point, since most of the really interesting words are going to require orders of magnitude more variants, but who knows, maybe it'll happen? i don't think that 15 years ago they ever expected to reach ass...

Monday, August 21, 2006

reactions to the consumer reports virus creation effort

there's been surprisingly little attention paid to the fact that SANS internet storm center distributes malware but when news broke about consumer reports creating new viruses the shit definitely hit the fan...

authentium, mcafee, kaspersky labs, and eset all came out and expressed their disapproval and disappointment at consumer reports' irresponsible actions... sunbeltblog also came out with a great analysis of what's so wrong about what consumer reports did...

of course not everyone agrees... larry seltzer actually tries comparing writing viruses with writing exploits, apparently ignorant of the fact that, while exploits can demonstrate the existence of software flaws and therefore aid in their correction, viruses demonstrate no such flaws or anything else of comparable benefit to society... security curve also questions what all the commotion is about (and i've tried to share what i know there)...

that said, i think the most insightful comment (yes, prepare yourselves folks, i'm going to agree with someone again) came from david harley in his response to rob slade's securiteam blog entry... i'll paraphrase here - 1) you can test heuristics without creating new viruses, 2) people (even others in the security field) still don't understand av technology, and 3) people don't trust the av industry/community...

it's easy enough to get across the idea that you can test heuristics without creating new viruses, but the ignorance and mistrust are much bigger problems that really need to be addressed... more attention needs to be paid to the various social dimensions of the virus problem or these kinds of things will keep happening...

Thursday, August 17, 2006

surprise! offensive computing is, well, offensive

from a website i can't link to for reasons detailed here:
Offensive Computing was formed by Valsmith and Danny Quist as a resource for the computer security community. The primary emphasis here is on malware collections and analysis for the purpose of improving people's abilities to defend their networks. There is a noticeable lack of public sources of malware and malware analysis available. Those that were available were either for sale or limited to a small number of users. We provide resources such as live copies of malicious software, md5sums to search on and analysis of the malware to the general public.
so, assuming they're legit (and that's quite the assumption to make where cult of the dead cow members are concerned - think "back orifice", the RAT [silent installs for remote control software is not a good thing to do, boys and girls] they created that basically put them on the map years ago) we have yet another group of people who think sharing malware publicly is good and have clearly not considered how realistic their expectations are about the supposed benefits OR the costs to society at large OR the lessons to be learned from other attempts to do roughly the same thing...

the supposed benefits
proponents of these kinds of projects usually trot out noble ideas like full disclosure, open source, and collaboration... if only things could work out the way they planned... full disclosure, as i've stated before, only has a positive cost/benefit trade-off when the underlying problem can be fixed (which generally isn't the case with malware)... openness is great in some contexts, but not when dealing with dangerous materials, that much should be patently obvious... finally, the vast majority of ordinary users will never directly participate in the collaboration or even know it exists and the established experts already have better (read: less naive/negligent) channels through which they can get the same materials...

normal people do not use malware to help them defend their networks from malware - they use security software to defend their networks, security software written by other people, generally a relatively small group of people (small in comparison to the number of people who use it)... this isn't going to change - it will never be the case that the majority of the population will involve itself in the technical minutiae of synthesizing solutions to specific malware problems... how does free public access to live malware actually help these people who are trying to defend their networks? what impact does having such access have on the quality or effectiveness of the software they're actually using to defend their networks, when they aren't the ones making that software?

clearly having such access does not help these users and has no impact on the quality of the tools they use - all that really matters is that the people who make the tools have access, and those people have access even without a project like offensive computing (otherwise how'd they get by up to now?)... the people actually building the tools have samples and they have contacts with other people that have samples and that they share a mutual trust with - that is how it works in the anti-virus community and that is how it should work in the wider anti-malware industry (if it doesn't, ask your anti-malware provider why they don't co-operate with other anti-malware providers - if they give you bullshit about competition on an issue of public safety like this, well it's just that, bullshit - there are plenty of other ways to compete that don't compromise or otherwise handicap the process of providing the public at large with the tools necessary to keep them safe - vote with your wallet)...

i suppose the argument could be made that public access helps those just breaking into the anti-malware market, but in reality there's all kinds of malware already readily available to such people so they can build their malware databases organically... at the same time they can build their reputations and trust relationships with others in the anti-malware community so that by the time they need access to malware they can't easily find themselves they'll have people they can turn to...

that just leaves the people who can't or won't build those trust relationships as being the real beneficiaries of a project like this...

the cost to society
it's important, whenever examining some proposal to improve security (as offensive computing does), to not blindly look only at the promise such a proposal has - you also have to look at how the system can be gamed... in this case it's fairly simple - it can be used to put malware in the hands of bad guys of course (and it's clear why that's bad) but it also can put malware in the hands of lazy/careless people, incompetents, looky-loos, and all manner of other folks who have no business handling malware - the second type of new age virus writer as described in sarah gordon's paper generic virus writer 2, the one you may have working in your IT department right now, is exactly this sort of person...

does putting malware into the hands of these people benefit security? are we (or our computers, data, or privacy) safer by giving malware to the people most likely to do something stupid or malicious with it? of course not...

and these are exactly the same sort of people most likely to seek out and use such a project - they're interested in the samples, this is a cheap and easy way to get them, and they don't have to sacrifice anything they actually value (like principles about responsible malware handling)...

all this boils down to more variants being made, more malware being 'deployed', and a facilitation of the collaboration going on between malware creators by doing away with the innovation bottleneck of conventional participatory collaboration and replacing it with a new and less constrained model...

lessons that could have been learned
i suppose i could mention that the pro-malware community in general and the vx in particular have a long history of making their 'wares freely (as in speech, and often as in beer) available to the public with everything from bbses to cd compilations to usenet newsgroups to irc chatrooms to web pages... they do it not because it helps the good guys (in fact the good guys often help to get malware trading sites shut down) but because it helps other bad guys like themselves... however, those projects aren't intended to help the good guys so it's really not comparable...

i suppose i could also mention the sites for sharing exploit code... superficially they seem like they'd be comparable to this offensive computing project, however, as i've said before, exploit disclosure and malware disclosure are 2 very different things - the cost/benefit analysis of disclosing software defects and how they can be exploited comes up positive for us while the cost/benefit analysis of disclosing malware does not, so this too is really not comparable...

"so then what project is comparable?" you might ask - well how about rootkitDOTcom? they make a form of malware freely available on that site with the stated goal of helping security researchers tackle the 'rootkit' (*cough* stealthkit *cough*) problem... so let's look at how well that's worked out so far - over the past couple of years the stealthkit problem has gotten worse, not better... they're more widely used, they're more widely sought after, and they're getting more and more sophisticated... on that basis alone it would seem like rootkitDOTcom is failing to achieve its supposed goals...

but the rootkitDOTcom example goes beyond simple failure to do good, the most damning thing is how much BAD it's done... i've described in the past how one of the site's founders made a stealthkit available on the site and how that stealthkit (unaltered from the compiled binary available on the site) then went on to become one of the most widely deployed stealthkits in the world... it's not like this was malware that was captured in the wild and whose success in the wild when it was freely available on a rather high profile site could be explained away as coincidence... it's also not like this malware was self-replicating so its success can't be blamed on that either... it started on that site and the bad guys used that site, took that stealthkit and used it against countless computer users...

there is no question that the bad guys can do this or that they have done this in the past or that they will do it again in the future - it's a foregone conclusion and offensive computing is falling right into their hands... they go on to say that:
This site does NOT encourage or condone the spreading or propagation of viruses or worms. Thats exactly what this site is designed to help defend against.

The intent of providing live copies of malware is so that the community can collaborate on identifying and analyzing them in order to develop snort signatures and other defenses.
well, their intent may be good but the road to hell is paved with good intentions... they may not condone the spreading or propagation of viruses or worms but in practice i can guarantee you they'll wind up facilitating it... tens of thousands of live malware samples freely available is just too good a target... the av community knows full well (from experience) what can happen by sharing samples with just one wrong person - that's why they've developed the stringent policy they now follow... sharing malware with everyone will invariably lead to the bad guys misusing that malware and making the entire project part of the problem rather than part of the solution...

Wednesday, August 09, 2006

the blue pill leaves a foul aftertaste

yes, i know i've blogged about it before a couple of times already but time has passed and events have unfolded...

black hat is over, joanna rutkowska's presentation is complete and the media just lapped it up... microsoft expressed interest (and why wouldn't they when it bypassed protection mechanisms in their latest and greatest and most secure OS ever?) and probably others too...

in fact, at least one (perhaps more) anti-virus vendor has expressed interest in obtaining more information and that's when everything changed...

see, it was made clear way back in june that the blue pill wouldn't be available for download by the public and i thought to myself well gee that's a good thing... i mean it's clear that if it were freely (as in speech) available that the bad guys would adapt it and use it for their own purposes (if/when 64bit amd platforms become a significant hardware base)... it sounded very promising ethically, it seemed like the people holding the cards (coseinc) were going to be responsible (a stark contrast with so much of what goes on in the stealthkit/rootkit domain)...

so imagine my surprise and disappointment to read that in order for anti-virus companies to get additional information they'll have to pay money... yes, that's right, av companies are expected to pay for access to malware... as if malware creators don't already have enough of a financial incentive these days... by paying for malware, anti-virus companies would be giving malware creators (academic or otherwise) more reasons to create even more malware... that is not something av companies should ever be contributing to as it makes them part of the problem rather than part of the solution...

it's not like the malware creators were simply discovering an existing flaw, the potential for malware doesn't depend on flaws and joanna rutkowska made it clear that the blue pill doesn't depend on any flaws so the growing (and controversial) practice of paying vulnerability researchers for vulnerability information (on the basis that they've done useful work for the vendor whose product they found flaws in by finding those flaws so they can be fixed) doesn't apply...

thankfully the folks at authentium did the right thing... i hope more do the same and come out publicly against the practice of paying for malware... and for those that don't, just remember what happened to the reputation of a certain someone who bought virus collections way back when...

Monday, August 07, 2006

what is a troll?

a troll is someone (usually working alone) you encounter in online forums (usenet, message boards, blogs, etc.) who makes contentious or downright inflammatory posts with the express purpose of getting a reaction out of one or more people...

trolls can be a rather pernicious problem, especially in unmoderated forums where there are no direct controls to keep them in line... they change the signal-to-noise ratio and interfere with the normal operation and enjoyment of the forum (even community driven support forums) if they are allowed to practice their trolling ways... however, even in the absence of the direct controls on their behaviour that one would encounter in a moderated forum there are still some effective ways of dealing with them...

a troll's primary motivation is entertainment - they don't care about what they're arguing about or whether it's an obvious lost cause; so long as they can get their mark(s) to react, and in so doing gain the satisfaction of knowing they successfully manipulated the mark into that reaction, they're happy... not caring about what they're arguing about (or about anything else, in fact) is a rather important part of being a successful troll: not only does caring distract one from the act of trolling, it opens the troll up to being counter-trolled and thus becoming a mark themselves - something that is humiliating to a troll...

while it may occasionally be possible to counter-troll a troll, it generally takes a while, allows the damage to be done, and is no guarantee the troll will stop - it's actually more effective to attack their primary motivation... they can't derive enjoyment from trolling if people don't react to their bait and if they can't derive enjoyment they will move on to another forum in search of easier prey... they are human after all and as lazy as humans are they won't want to work hard if they don't have to and they don't have to because there is always easier prey out there... strange as it may sound, this is one problem where ignoring it really will make it go away, hence the oft repeated advice of don't feed the trolls...

while trolls are generally solitary (occasionally finding a kindred spirit here and there) there is a rare variation where a group of trolls will actually form a pack and troll together... pack trolls in sufficient numbers can have the same deleterious effect on a forum by trolling a single individual as a solitary troll has when trolling a group... however, despite this apparent performance advantage, pack trolls are individually failures as trolls... the deleterious effects to the signal-to-noise ratio caused by a pack of trolls are mostly the result of the messages from the trolls themselves whereas a single successful troll is able to get the group of marks to do most of the damage for him/her... additionally, in forming a social group tightly knit enough to stay together they invariably form connections to their cohorts and in so doing reveal something they care about (even if only a little bit) and open themselves up to counter-trolling and inevitable humiliation... when they realize their weakness their only real recourse is to insulate themselves from the rest of the pack but by doing so they break the ties that hold the pack together (which is why troll packs are rare)...

counter-trolling an entire pack of trolls is a fool's errand... as always, don't feed the trolls...

Tuesday, August 01, 2006

understanding anti-malware intelligence

a recent post on the internet storm center's handler's diary by their CTO, johannes ullrich, tries to apply military strategy to computer security...

i say tries because it goes horribly wrong when he calls signature based anti-virus systems outdated...

signature based anti-virus systems, or more generally known-malware scanners, are capable of detecting (and often removing) the vast majority of malware in existence (despite what has been said recently about their performance on a very small subset of that malware) - only the malware that is too new to qualify as known is really outside its reach... what's more it has the power to do so before control is ever turned over to that malware, thus preventing the malware from getting control/gaining an advantage... turning one's back on known malware scanning amounts to turning one's back on knowing your enemy, as known-malware scanners represent knowledge of the enemy (or at least one aspect of the enemy) codified into a programmatic form for ease of distribution and deployment...
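to make the "knowledge codified into a programmatic form" idea concrete, here's a deliberately minimal sketch in python - real scanners match far more robust signatures (byte patterns, wildcards, emulation results) rather than whole-file hashes, and all the names below are made up for illustration:

```python
import hashlib

def scan(file_bytes, known_hashes):
    """toy known-malware check: hash the content and look it up in a
    database of signatures derived from already-analysed malware."""
    return hashlib.md5(file_bytes).hexdigest() in known_hashes

# building this database is the work of anti-malware experts; shipping it
# to users is the knowledge-distribution step described above
signature_db = {hashlib.md5(b"pretend this is a captured sample").hexdigest()}
```

the key property is that the lookup happens before the content is ever executed - which is where the "before control is ever turned over" advantage comes from, and also why novelty (a sample not yet in the database) is its one real blind spot...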

my own preference for strategic military thinking is sun tzu, who is perhaps most famous precisely for his thoughts on knowing the enemy:
Hence the saying: If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.
unlike the popularized misquote of "know your enemy" he's actually balancing knowledge of the enemy with knowledge of oneself - and by extension one's limitations and weaknesses (in order that one may know "when to fight and when not to fight")... not understanding the limitations of one's security measures prevents one from being able to effectively mix technologies and techniques so that the strengths of one can mitigate the weaknesses of another - basically preventing one from reaching any reasonable state of preparedness, which is a key to any effective strategy...

so what was johannes getting at, i wonder... well, for one thing he was quoting an entirely different military strategist - one carl von clausewitz - but was clausewitz as cavalier about the importance of intelligence as johannes? it doesn't seem that way... as you can read here, although he talks at length about the inaccuracies of the intelligence one may have on hand, ultimately owing to failings in those collecting and reporting it (intelligence itself has weaknesses and limitations, and he tries to impress on the reader the importance of common sense and experience as corrective measures), he still maintains at the outset that the information we have about the enemy is "the foundation of all our ideas and actions"...

so then perhaps it's just johannes ullrich that underestimates the importance of intelligence in the formation of strategies - but how can that be since in the same post he's advocating information sharing which itself furthers the goal of gathering and using intelligence...

i think it must come down to the knowing oneself half of the intelligence equation... the notion that known malware scanning is a bad idea or outdated or the like has become quite popular and it seems to me that this often forgotten principle is to blame... knowing the strengths and weaknesses of the weapons in your arsenal (or that you could have in your arsenal), appreciating what they can and cannot do, and realizing what they represent strategically and how to deploy them tactically - these are the things people don't seem to understand, not even the CTO of the ISC...

known-malware scanners aren't outdated; they have obvious weaknesses that dictate one's strategy be supplemented with more generic techniques, but they also have considerable strength against a huge (and ever growing) body of malware... for every security defense you deploy there exists a counter-measure, but once the malware implementing that counter-measure becomes known (as all but the most narrowly targeted malware eventually does) it should no longer be able to sneak past known-malware-based defenses... known-malware scanning is weak against the counter-measure of novelty, but that's a counter-measure that expires...

in simpler terms: known-malware scanners are a form of information sharing between anti-malware experts and the rest of the world... throw that information away if you want, but at least realize what you're doing when you're doing that...