Friday, December 31, 2010

expectations for 2011 and beyond

first, this is not a prediction or forecast post. this is only a tribute.

no, seriously, i hate those posts; they are annoying and i can't imagine being full enough of myself to actually try to prognosticate on what the future might bring.

that doesn't mean i don't have certain expectations for the future, however (though i can't really pin down time frames like those fortune-telling bloggers can).

as far as attackers go i expect that i'm going to disappoint you by not saying what i expect to be the next big thing. long-time readers know i can be sensitive about giving the bad guys ideas and i certainly don't want to direct them towards new and annoying avenues of attack.

of course even if i did give them ideas, i'd still expect to mostly see more of what we've already seen - especially more of the things we started seeing this year. attackers seem to change in response to 5 basic influencers:
  1. changes in user behaviour: this can be either changes that are meant to thwart attackers (which happen at a truly glacial pace) or adoption of technologies (like twitter, for example) that provide attackers with new opportunities.
  2. new efforts by the security industry or authorities to thwart attackers: literally anything that disrupts the status quo for attackers fits in here. reputation systems that treat new unknown things as suspicious would be one example. new cooperative efforts to take down malware gangs would be another.
  3. changes to the computing platform itself: this is pretty strongly related to user adoption of new technologies, but i felt that, with the way the dominant computing platform seems to be shifting away from personal computers and towards mobile computing devices, the opportunities this affords attackers deserve to be highlighted.
  4. changes to the connectivity of devices: there's little doubt about how big an impact the broad adoption of the internet had on self-replicating malware like viruses and worms and later on distributed malicious computing like botnets. as connectivity continues to change and frankly increase between all sorts of devices it stands to reason new opportunities will present themselves to attackers.
  5. motivational evolution: first it was fame, then fortune, and now we are starting to see a shift towards power being the motivating force behind attacks. there may even be something that comes after the fame/fortune/power triad but that would be too much like making a prediction.
all of those things happen at a pretty slow, gradual pace, however, which is why i'm not expecting huge upheavals in the modus operandi of attackers. #2 is probably the only one with the potential for truly punctuated change.

now while i may not be keen on giving the bad guys ideas, giving the good guys ideas i'm not nearly so shy about.

i expect to see facebook do something about all the scams. the scam pages and apps are turning facebook into an untrustworthy environment, and in an untrustworthy environment people are less apt to share, which means they're less apt to get a real benefit out of facebook, which in turn means they're less apt to use it. i can't imagine how facebook could possibly afford to just sit back and let that happen so i expect them to take some kind of action - i have no clue if it will be effective, however.

now that sandboxing and whitelisting are catching on (and in fact 1 well-known company seems to have implemented all 3 of my preventative paradigms; oh heck, let's not be coy, kudos to kaspersky internet security - i'm not a customer but at least somebody seems to have either been listening to me or thinking along the same lines) i expect that people will gradually start adopting these technologies in larger numbers (the sandboxes will probably have an advantage since they're getting embedded inside client apps) and maybe even start to realize that these technologies are also limited, just like blacklists are. and THEN, maybe i'll have reason to start talking more about strategies for when prevention fails. we can only hope.

speaking of hope, now that at least one vendor has covered the 3 preventative paradigms in some fashion, would it be too much to hope that vendors start looking at the other parts of a proper defensive strategy? prevention is only the first part of the PDR (prevent, detect, recover) triad (which itself seems to me to be incomplete).

back to expectations, i expect to continue to see more examples of authority being exercised - both in official and unofficial capacities - in order to thwart and even arrest attackers. i hope (oh, am i diverging again?) to see greater appreciation for the fact that legislation on its own has little value. rules mean little if they aren't enforced, and enforcement requires detection of violations, attribution, and often (where official authorities are concerned at least) cross-jurisdictional cooperation. i expect at least someone will be highlighting the part these things play in whatever successes we have and hopefully (there i go again) more attention will be paid to them.

i expect to see some more individual or community-based assistance given to those who exercise authority, probably in the form of detection and/or attribution, much like brian krebs has famously done on more than one occasion.

i also expect, unfortunately, to see people continuing to whine about how AV software isn't effective at anything anymore. i expect i will continue to make jokes about driving screws with hammers in response.

i expect to see the heterogeneous nature of the threat landscape continue to be underestimated by such verbiage as "today's threats" and "yesterday's threats" (as if yesterday's threats weren't threats anymore).

i expect to hear more about stuxnet. maybe even something that doesn't stretch the limits of credulity (a worm, spreading stealthily for over a year, only managed to hit its target after its notoriety reached its peak???).

i expect i'm going to be holding more people's feet to the fire over marketing bullshit and snake oil peddling.

finally, because these aren't predictions, i expect at least some of these expectations will not be met - at least not in the short term of the upcoming year.

Friday, December 24, 2010

getting the wrong message across

it's that time of year again, jack frost nipping at your nose and chestnuts roasting on an open fire. and while we have that fire handy, let's hold some feet to it, shall we?

see, there was a post about our favourite type of malware (the virus) published on the panda security support blog by javier guerrero díaz that seems to have a number of issues that need addressing. let's jump right in.

to start with there's the issue of terminology misuse:
In fact, we still use today the term “virus” to refer to any type of malware in general, when reality shows that, except for the occasional surge, the number of viruses in circulation is much lower than that of Trojans, for example.
the public has already started to pick up the use of the term malware as an umbrella term, replacing its previous misuse of the term virus. while javier did hint at the inaccuracy of calling all malware viruses, it would have been better to not suggest that "we" (meaning the folks at panda, including himself) still misuse terminology that way. it makes it seem ok to be sloppy with the terms (something which ultimately leads to confusion amongst those who don't know better). i would hope that technically oriented folks would be more precise in their word choice.

next was some over-generalization about worms:
Computer viruses differ from other malware specimens like Trojans or worms in that the latter do not need a host to spread.
not all worms are free from the requirement of a host. win32/ska (also known as the happy99 worm), for example, must infect wsock32.dll in order to send itself over email.

there was also some over-generalization about the complexity of viruses:
Also, this characteristic makes them more complex to develop as a computer virus must know the internal structure of the file it tries to infect in order to be able to install on it.
not all viruses need to know the internal structure of the file they're infecting. overwriting infectors (which destroy the original file rather than trying to preserve it) and companion viruses (which don't actually alter the original file at all) have no such need, nor i think do macro viruses.

on top of complexity, there was also some over-generalization about the scope of virus infection:
Finally, given that viruses affect all executable files on the system...
not all viruses affect all executable files on the system. some (perhaps many) are much more selective. lehigh, for example, only infected command.com. quite a few affect files that most people would not consider executable (macro viruses for example go after documents instead of executables).

i understand that the post was intended for those less familiar with the subject of viruses and malware, but the problem with oversimplification is that there's no agreed-upon degree to which things should be simplified. the consequence of this is that everyone presents different 'facts', and that confuses the people you're trying to explain things to. i genuinely believe it's possible to explain things to people in such a way that they can understand you without sacrificing technical accuracy. it takes effort, and i'm certainly not going to suggest that i succeed in reaching this goal in all circumstances, but at least i don't give up trying. if we accept the sacrifice then we have to accept that people will never really understand what we're talking about, because we don't give them the power to do so.

finally there is the market-speak that makes me cringe every time i see it:
Any Panda Security solution will keep your computer free from viruses and other malware.
panda's *tools* (if it's really a solution, what problem does it solve?) will not keep users' systems virus free. they may keep them mostly virus/malware free, but there will always be exceptions capable of slipping through.

i've long despised the use of the term "solution" to describe things that are better presented as tools. it's a trick used by marketing to make people believe they're getting the impossible dream - perfect protection. to see these words written by someone in R&D makes me think somebody's been drinking the marketing kool-aid.

worse than that, however, is the reference to keeping systems virus/malware free, without qualification or caveat. this is one of the hallmarks of snake-oil in the anti-malware industry; and guess what, when i went searching through my archives looking for examples of this i found one - involving panda! is there something in the water? is it a language thing? do i have to go looking through my archives for the intersection of panda and snake-oil to see if there's a pattern emerging?

Thursday, December 23, 2010

short thought on sandboxing

jeremiah grossman recently penned a guest post for zdnet extolling the virtues of sandboxing. i've made no secret about the fact that i'm also a fan of sandboxing (though i'm not entirely on board with jeremiah's depiction of it with regard to restricting things - that verges too close to behaviour blocking for my liking) but the sandboxing jeremiah was referring to was the kind that is built into applications as a feature.

not too long ago i posted about sandboxes being added to all sorts of apps and wondered (well, suggested) that such sandbox sprawl might not be the best way to go about things. jeremiah's observation that adding sandboxes to apps changes the game from a one exploit show to a two exploit show made me realize another reason why relying on the application's own sandbox is less than ideal - the attacker knows exactly which sandbox they have to escape from.

by contrast, with a separate stand-alone sandbox, an attacker wouldn't necessarily know which sandbox is involved and would then need to develop escape exploits for multiple sandboxes and try the shotgun approach, firing them all at once and hoping for the best.
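
to put some purely illustrative numbers on that tactical advantage (these figures are my own invention, not anything from jeremiah's post), here's a minimal sketch of the arithmetic:

```python
# back-of-envelope sketch with made-up numbers: an attacker's chance of
# escaping a known built-in sandbox vs. an unknown stand-alone one
escapes_held = 2       # sandboxes the attacker has working escapes for
sandboxes_in_use = 6   # hypothetical number of stand-alone sandboxes in use

# built-in sandbox: the attacker knows exactly which sandbox guards the
# app, so if they hold that escape their chance of success is 1.0
p_known_target = 1.0

# stand-alone sandbox: the attacker can't tell which one they face, so
# even the shotgun approach (firing every escape at once) only works
# when the victim's sandbox happens to be one they hold an escape for
p_unknown_target = escapes_held / sandboxes_in_use

print(f"known sandbox: {p_known_target:.2f}")      # 1.00
print(f"unknown sandbox: {p_unknown_target:.2f}")  # 0.33
```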

i do believe i'll be sticking with the stand-alone sandbox. it seems to have the tactical advantage.

Tuesday, December 21, 2010

who knows what the future may bring?

who knows what the future may bring? well lots of people seem to think they do, and bruce schneier even goes so far as to predict what security will look like 10 years from now. much like long-term weather forecasts, he is almost certainly wrong - at least i hope he is, because the picture he paints is distinctly dystopian.

no, that's not just an interpretation - a future where we the users are viewed as parasites living off the life-blood of corporations is not a happy shiny place to live. i can certainly see where he's coming from, though, as the beginnings of that are already visible with such schools of thought as the one that refers to users as product (i.e. we aren't facebook's customers, we're their product). we are being increasingly objectified and devalued by corporate interests. the entertainment industry (and let's not forget their associated lobby, as the group is now as much a political force as it is a corporate one) is certainly leading the anti-consumer charge in the quest to justify their sense of corporate entitlement - but unlike bruce (who is himself part of the corporate machine) i have faith that society will eventually tip the scales back in our favour.

we've already seen a time when businesses had all the power and the little guy was at their mercy. it happened during the industrial revolution. we fought back. we won. we outnumber them and they can't exist without us (while human history proves we can exist without them). to call us, rather than blood-sucking corporations, the parasites is to ignore nature in favour of business. that kind of backwards world view was not natural then and is not natural now - and nature is something you cannot beat.

but beyond my faith in humanity, i also think schneier is wrong because he's misunderstanding the signs he's reading. for example, referring to iphones as special purpose computers instead of general purpose ones and citing them as evidence of the demise of the general purpose computer demonstrates that bruce hasn't the foggiest notion of what the distinction between a special purpose and general purpose computer really is. what we may well be witnessing is the end of the personal computer in favour of the mobile computing device, but that is an entirely different matter with entirely different repercussions. for one thing, a world without general purpose computers is a world without the world wide web. it is a world without iphone apps, a world without game consoles, a world without software. the iphone may exist in apple's walled garden, but i can (and do) get the same limitations on my PC using application whitelisting. that doesn't turn my PC into a special purpose computer any more than it does the iphone - it just makes it locked down.

so long as the computer is technically capable of running arbitrary code (which is exactly what happens when you install an iphone app or visit a website that has javascript or flash or any of the other wonderful interactive technologies out there) it is a general purpose computer (it satisfies what fred cohen referred to as the generality of interpretation). a world without general purpose computers is very, very hard to imagine. bruce, thinking the difference between special and general purpose computing can be illustrated as the difference between an iphone and a PC, sees a world that technically isn't much different from our own. but the difference between special and general purpose computers is more accurately illustrated as the difference between a cheap simple hand-held calculator and the fancier, more expensive programmable variety. a world where computing devices are as inflexible as cheap hand-held calculators is a strange world indeed. you might think that there must be some sort of middle ground between the two that would allow for something more (and then surely the iphone inhabits that middle ground) but ed felten covered the fallacy of the almost general purpose computer a long time ago.
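
to make the generality of interpretation concrete, here's a toy sketch: a python interpreter for brainfuck, a famously minimal language that is nonetheless turing-complete. any device that can host an interpreter like this (and anything that runs javascript, flash or downloadable apps certainly can) is, in cohen's terms, a general purpose computer no matter how locked down it looks:

```python
# toy interpreter for brainfuck, a famously minimal but turing-complete
# language. any device that can host an interpreter like this satisfies
# the generality of interpretation - it can be handed arbitrary new
# programs at runtime, which is what "general purpose" actually means.
def run(program: str, data: str = "") -> str:
    tape = [0] * 30000            # working memory
    ptr = 0                       # data pointer
    pc = 0                        # program counter
    inp = iter(data)
    out = []
    # precompute matching bracket positions for the loop instructions
    stack, jumps = [], {}
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == ',':
            tape[ptr] = ord(next(inp, '\0'))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return ''.join(out)

# prints "hi" - the point isn't this particular program, it's that the
# interpreter will run whatever program it's handed
print(run("++++++++++[>++++++++++<-]>++++.+."))
```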

without the elimination of general purpose computing you cannot eliminate user choice. you cannot eliminate the emergence of technologies that empower us to throw off the yoke of corporate interests. the linuxes and firefoxes of the world will continue into the future, and the more anti-consumer that corporations become, the more consumers will choose those alternatives. we are not and never will be the parasites in the relationship with business. we are not facebook's product, we are their patrons. the advertisers are not their customers, they're more like the hotdog vendors at a stadium; they only make money so long as we show up and buy something, and eventually we will stop showing up at the facebook stadium (just as we stopped showing up to friendster and myspace) and they'll have to chase us to our new favourite spot like the parasites they are.

Thursday, December 16, 2010

the transparency delusion

prompted by lenny zeltser's recent post on usability (which itself may be a response to my previous post) and with an actual usability study on 2 pieces of security software [PDF] (specifically 2 password managers) still fresh in my mind, i've decided to take another look at the issue of usability and, more importantly, transparency.

the usability study i referred to makes an excellent point about security only paying lip-service to usability, and i don't think they mean because the security software they studied had too many clicks to get through each function or because the menus were non-intuitive. the study was a wonderful object lesson in just how badly things can go wrong when transparency is taken too far - and why. in the case of the software in the study, transparency didn't just make the software harder to use, it actually led to compromised security.

the key problem with transparency is that it robs the user of important information necessary for the formulation and maintenance of a mental model of what's going on. as a result, the user invariably forms an incomplete/inaccurate mental model, which then leads them to make the wrong decisions when user decision-making is required (at some point a user decision is always required - you can minimize such decisions but you can never eliminate them). it also makes it more difficult to realize when and how the security software has failed to operate as expected (they all fail occasionally), robbing the user of the opportunity to react accordingly.

the usability study in question serves as an adequate example of how transparency can go wrong for password managers, but what about more conventional security software like firewalls or scanners? mr. zeltser used the example of a firewall that alerts the user whenever an application tries to connect to the internet. let's turn that around - what if the firewall was 'intelligent' in the way mr. zeltser is suggesting? what if it never alerted the user because all of the user's applications happened to be in some profile the firewall vendor cooked up to prevent the user from facing so-called unnecessary prompts? and what if one day that firewall fails to load properly (i.e. windows thinks it's loaded but the process isn't really doing anything)? will the user know? will s/he be able to tell something is wrong? it seems pretty obvious that when something that never gave feedback on its operation all of a sudden stops operating, there will be no difference in what the user sees and so s/he will think nothing is wrong.
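
for illustration, here's a trivial sketch (python, with a hypothetical process name) of the kind of external check a user would need in order to get the feedback a fully 'transparent' firewall withholds:

```python
# minimal sketch of the feedback a 'transparent' firewall denies its
# user: an external check that the protection process is even running.
# the process name is hypothetical - purely for illustration.
import psutil  # third-party library: pip install psutil

FIREWALL_PROCESS = "examplefw.exe"  # hypothetical name

def firewall_alive() -> bool:
    """return True if a process with the expected name is running."""
    return any((p.info['name'] or '').lower() == FIREWALL_PROCESS
               for p in psutil.process_iter(attrs=['name']))

if not firewall_alive():
    # without a check like this (or feedback from the software itself)
    # a silent failure looks exactly like silent success
    print("warning: firewall process not found - protection may be down")
```

and note that even a check like this only catches the crudest failure mode (the process vanishing outright); the scenario above, where the process is loaded but doing nothing, is harder still to detect from the outside - which is exactly the point.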

how about a scanner? let's consider a transparent scanner that makes decisions for you. you never see any alerts from it because it's supposedly 'intelligent' and doesn't need input from you. what happens then is that you formulate an incorrect model, not just of what the scanner is doing (because you have no feedback from the scanner to tell you what it's doing), but also of how risky the internet is (because your scanner makes the decisions for you). you come to believe the internet is safe; you know it's safe because you have AV, but any specifics beyond that are a mystery to you because you're just an average user. one day you download something and attempt to run it but nothing happens. you try again and again and nothing happens. then you realize that your AV may be interfering with the process, and since you've come to believe the internet is safe instead of risky you decide that your AV must be wrong to interfere, so you disable it and try again. congratulations, your incorrect mental model (fostered by lack of feedback in the name of transparency) has resulted in your computer becoming infected.

we shouldn't beat too hard on the average users here, though. i have to confess that even i have been a victim of the effects of transparency. a few years ago, when i was starting to experiment with application sandboxing for the first time, i tried a product called bufferzone. in fact, i tried it twice, and both times i failed to formulate an accurate mental model of how it was operating. bufferzone tried to meld the sandbox and the host system together so that the only clue you had that something was sandboxed was the red border around it. not just running processes either, files on your desktop could have red added to their icons to indicate they were sandboxed. but since i was new to sandboxing at the time i didn't appreciate what that really meant; and as a result, each time i removed bufferzone i was left with a broken firefox installation and had to reinstall.

when we talk about transparency in government, we're talking about being able to see what's going on. for some reason, however, when we talk about transparency in security software we're talking about not seeing anything at all - we're talking about invisibility. invisible operation can only be supported if we can make the software intelligent enough to make good security decisions on our behalf. lenny zeltser offers the church-turing thesis in support of this possibility but i'd like to quote turing here:
"It was stated ... that 'a function is effectively calculable if its values can be found by some purely mechanical process.' We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability † with effective calculability" († is the footnote above, ibid).
security decisions necessarily involve a user's intent and expectations - neither of which can be found by 'purely mechanical processes', and therefore neither of which can be used by security software making decisions on our behalf. the decisions made by software must necessarily ignore what you and i were trying to do or expected to happen. that kind of decision-making isn't even sophisticated enough to handle pop-up blocking very well (sometimes i'm expecting/wanting to see the pop-up) so i fail to see how we can reasonably expect to abdicate our decision-making responsibilities to an automaton of that calibre.
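
to see what a 'purely mechanical process' can and cannot consume, consider this toy sketch of a pop-up blocking policy (the signals are hypothetical and of my own choosing):

```python
# toy sketch of a purely mechanical pop-up decision. the inputs are
# hypothetical signals a machine can actually measure - none of them
# is the user's intent or expectation, because intent isn't something
# a 'purely mechanical process' can find.
def allow_popup(triggered_by_user_click: bool, domain_on_allowlist: bool) -> bool:
    # both signals are proxies for intent, not intent itself: sometimes
    # the user wants an unsolicited pop-up (a login window, say) and
    # sometimes a clicked one is junk - this function can't tell which
    return triggered_by_user_click or domain_on_allowlist

# two pop-ups the user feels very differently about can present
# identical mechanical signals, and so get identical decisions
print(allow_popup(True, False))   # allowed, whether wanted or not
print(allow_popup(False, False))  # blocked, whether wanted or not
```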

transparency in security software is not a pro-usability goal, it is an agenda put forward by the lazy, who feel our usability needs would be better addressed if we could all be magically transported back to a world where we didn't have to use security software any more. designing things so that you don't actually have to use them doesn't make them more usable, it's just chasing after a pipe-dream. true usability would be better served by facilitating the harmonization of mental models with actual function, and that requires (among other things) visibility, not transparency/invisibility.