Monday, April 28, 2008

Six Dumbest Ideas

Your job, as a security practitioner, is to question - if not outright challenge - the received wisdom!

This is a really old article, written by a very well respected security professional back in 2005. Although some points are certainly bang on the button, there are good chunks of this that simply don’t stand up today. We definitely need to update the six dumbest ideas in computer security.

1 - Default Permit. Yes, this is certainly correct, although how many times have you actually seen people do it, especially today? This might have been the case in 1999 (perhaps even in 2005 when the article was written, but I doubt it); today, however, pretty much everything respects this ideal - especially the firewalls Marcus points out. The thing missing, I suppose, is where these default permits live. Taking firewalls as an example, ingress default permits are as dead as the dodo now, but in my experience egress rules are still wide open, so there’s a valid point there.
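
To make the ingress/egress distinction concrete, here’s a minimal sketch in Python of what default deny on egress boils down to. The hostnames and ports are invented for illustration, not a recommended rule set:

```python
# A minimal sketch (not a real firewall) of default deny on egress;
# the hosts and ports are invented for illustration.
ALLOWED_EGRESS = {
    ("mail.example.com", 25),     # outbound SMTP via the corporate relay only
    ("proxy.example.com", 8080),  # all web traffic through the proxy
}

def egress_allowed(dest_host: str, dest_port: int) -> bool:
    """Default deny: anything not explicitly enumerated as good is dropped."""
    return (dest_host, dest_port) in ALLOWED_EGRESS

print(egress_allowed("mail.example.com", 25))    # True
print(egress_allowed("evil.example.net", 6667))  # False: denied by default
```

A default-permit egress stance would invert the check - allow everything except a handful of known-bad destinations - which is exactly the "wide open" posture above.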

2 - Enumerating Badness. Ah, the old white-list vs black-list. When was this ever really a good idea? I get the point about anti-virus companies "enumerating badness" with their virus/malware libraries, but this is a bad example: on a general-purpose machine the set of "goodness" is enormous and changes daily, so AV vendors black-list out of necessity rather than choice.
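
The contrast itself is easy to show, though. A toy sketch using made-up attachment extensions - a black-list fails open when new badness appears, a white-list fails closed:

```python
# Toy contrast using invented attachment extensions.
BLACKLIST = {".exe", ".scr", ".pif"}  # enumerating badness: always a step behind
WHITELIST = {".txt", ".pdf", ".png"}  # enumerating goodness: small and stable

def blacklist_blocks(filename: str) -> bool:
    # Fails open: an extension nobody has listed yet sails straight through.
    return any(filename.lower().endswith(ext) for ext in BLACKLIST)

def whitelist_allows(filename: str) -> bool:
    # Fails closed: anything unrecognised is rejected by default.
    return any(filename.lower().endswith(ext) for ext in WHITELIST)

print(blacklist_blocks("invoice.docm"))  # False: new badness slips past
print(whitelist_allows("invoice.docm"))  # False: unknown, so denied anyway
```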

3 - Penetrate and Patch. Once again, a good point, but it misses the fact that software is complex and you are never going to get things 100% right. Relying on penetrate and patch for your whole security strategy: bad. Having to use it when holes are inevitably found: good. I just love the quote "Unless your system was supposed to be hackable then it shouldn't be hackable" in the article - this sort of assumes the "enumerating badness" argument, in that we’ll either design software that can’t be hacked (limit the functionality to things we know work correctly), or we’ll protect the system from "bad" things happening. This assumes that we can tell "good" from "bad" (the mirror of #2’s "bad" from "good"), and as research gets done we have to keep shifting that knowledge over time. The article’s example of a network designed not to be hacked is a reasonable one, but IMO it just doesn’t work as well in software.
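
As a rough illustration of the "design it so it can’t be hacked" idea: define the tiny language of inputs you positively understand and reject everything else, rather than scanning for known attacks. The username rule below is an invented example, not from the article:

```python
import re

# Invented rule: accept only usernames matching a tiny, known-good grammar.
USERNAME = re.compile(r"[a-z][a-z0-9_]{2,15}")

def parse_username(raw: str) -> str:
    # fullmatch anchors the pattern, so anything outside the grammar -
    # including injection payloads nobody has seen yet - is rejected.
    if not USERNAME.fullmatch(raw):
        raise ValueError("not a well-formed username")
    return raw

print(parse_username("alice_01"))    # fine
# parse_username("alice'; DROP --")  # raises ValueError: rejected outright
```

Even so, you only get this guarantee for the properties you thought to constrain - which is why patching never goes away entirely.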

4 - Hacking is Cool. Ah, my favorite point in the whole article. This is the dumbest thing written here. If you think "hacking" is learning a bunch of exploits, then you are seriously mistaken. Hacking, in its traditional sense, is learning and understanding a system and then going about making it do things it was never designed to do. Running Metasploit is not hacking. Executing some "script" is not hacking. If (as some of the commenters on Digg point out) it wasn’t for "hackers" trying things out on their own systems and telling everyone their findings, then we’d have systems (like WEP, web forums, stack/buffer overflow "guards", etc.) that we "thought" were secure but really weren’t. You actually do want to "give the hackers stock options, buy the books they write about their exploits, take classes on "extreme hacking kung fu" and pay them tens of thousands of dollars to do "penetration tests" against your systems", because they have knowledge and insights that are rare, that are often not in your organization, and they have different views than the people who built/maintain/operate the system. Replacing "hacking is cool" with "engineering is cool" might very well be the way of the future (look up enrolment rates in computer science/engineering degrees to be very disheartened about this "future"), but we need hackers to keep pushing the state-of-the-art and the boundary of good vs bad forward.

5 - Educating Users. I’d sort of agree here - one of the things Michael Howard said ages ago was that if we had totally secure systems, the attackers would simply go after the users. This point, however, assumes we can solve all the issues with technology, which clearly we can’t. Attachments are one issue with a difficult balance between usability (and the ability for people to do the work they have to do, simply) and security; if the balance is wrong, it either pisses people off or they work around it. Phishing is another good example where technology just isn’t working (or at least not as well as people would like), but education seems to work. I’d love to think that technology can solve any problem - I’m an engineer after all, and sort of have that view built into me. The reality, though, is that some problems are intractable, at least for now, and we still have to educate users. Otherwise, why do we have driving licenses when we could just wait for the cars that drive themselves? Which is a good link to the final point…
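
To see why phishing resists a purely technical fix, consider a deliberately naive filter; the heuristics and lookalike domains below are invented. Whatever patterns it enumerates, a slightly more convincing URL slips through, and the user who has been taught to read the address bar is the remaining line of defence:

```python
import re

# Invented heuristics; deliberately naive.
SUSPICIOUS = [
    r"https?://\d{1,3}(\.\d{1,3}){3}",  # raw IP address instead of a hostname
    r"paypa1|micros0ft",                # common character-swap lookalikes
]

def looks_phishy(url: str) -> bool:
    return any(re.search(pat, url, re.IGNORECASE) for pat in SUSPICIOUS)

print(looks_phishy("http://paypa1.example.net/login"))          # True
print(looks_phishy("http://paypal.example.net.attacker.org/"))  # False: a
# convincing lookalike the filter misses - the gap education has to cover.
```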

6 - Action is Better Than Inaction. Network guy to CEO: "We’re under attack". CEO to network guy: "Hold on, let me think about it for a while". I agree with the "don’t install the latest whizz-bang device/software; evaluate, gather feedback, trial, and deploy slowly" argument, but the "do nothing" approach is just myopic. If it’s possible for you to wait, then by all means wait and do it "properly". In the meantime, though, users are complaining that they can’t do xyz, like connect to the network via their laptops and show the boss something during the two minutes they might only get in the hallway. Security guys often find it far easier to say "no" and cite concerns than to say "yes" for the benefit of the users, figure out what the risk is, and mitigate it as much as possible.
