My second day at OOPSLA consisted of two security-related workshops. The first one was entitled “Security Patterns and Secure Software Architecture” and was presented by Munawar Hafiz. Security patterns seem to be an interesting topic and will perhaps become an important tool for security professionals.
The second tutorial was “Software Security: Building Security In” by Gary McGraw. Even though I have read a couple of his books, I found the tutorial very interesting. Many good insights (and some funny ones) were touched upon.
Perimeter defense does not work
One of the important points is that making an insecure application secure by putting a firewall in front of it is flawed for many reasons. Trying to shield the application from the world is kind of the opposite of what we want to do. We want to be on the net, so we should make our applications secure accordingly. Furthermore, a firewall will not prevent attacks from insiders.
Security people’s job is to say “no”!
I can honestly say that this has crossed my mind once or twice. Although I am often involved with security, my job is not to say no, hence I can deduce that I am not a “security person”. 🙂
Does the choice of programming language matter for security?
According to Gary McGraw, it certainly does. For security, keep away from C and C++; in general, you should select a language that offers type safety, which excludes both of them. A further question is whether a statically typed language is better for security than a dynamically typed one. In my opinion, the answer here is not clear, although the presenter would probably go for a statically typed language.
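To make the type-safety point concrete, here is a minimal sketch in Python (my own illustration, not from the tutorial): a type-safe runtime refuses to operate on incompatible types, whereas C would happily reinterpret the bits and carry on.

```python
def add(a, b):
    """Add two values; a type-safe runtime checks the operand types."""
    return a + b

print(add(2, 3))  # 5

try:
    # In C, mixing a char* and an int here could silently compile into
    # pointer arithmetic; a type-safe language raises an error instead.
    add("2", 3)
except TypeError as err:
    print("rejected:", err)
```

The same guarantee holds in statically typed safe languages such as Java, except the mistake is caught at compile time rather than at runtime.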
More code – more bugs
This holds true for any bugs, not only those related to security, so it is a fact that everyone dealing with software development should be aware of. When your codebase grows, your number of bugs grows. Simple as that. You should honestly work on keeping your codebase as small as possible.
Security is not a feature or a function
Security is more of a quality aspect than a feature. I think this makes a lot of sense, and in fact I think it is reflected in Microsoft’s notion of “trustworthy computing”.
A good starting point for a security review of the architecture is a one-page logical overview of the application, used as a basis for discussion. I guess this practice is not only useful for security.
Penetration testing has limited use
I guess this can be summarized as follows: penetration testing cannot verify that your code is good; it can only verify that your code stinks. The idea is that penetration testing only discovers the most serious problems, so running penetration tests alone is not enough to conclude that your application is secure.
Attackers use the same tools as software people do
Attackers use compilers, decompilers, static analysis tools, and coverage tools the same way software developers do. Hence, developers who are familiar with these tools should learn how attackers use them so that they can defend against such attacks. Network people, on the other hand, would not know a compiler if it bit them.
Time and state will be the biggest problem in the future
Today, improper input handling is regarded as the biggest security problem in applications. Looking into the crystal ball, time and state will probably move up to become the most important one: applications are getting more distributed all the time, and keeping track of time and state across locations will only get harder.
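The classic time-and-state bug is a lost update: two threads read the same value, both modify it, and one write disappears. A minimal sketch in Python (my own example, not from the tutorial) shows the standard fix, making the read-modify-write atomic with a lock:

```python
import threading

class Counter:
    """A shared counter whose increment is made atomic with a lock."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Without the lock, two threads could read the same value and
        # one update would be lost -- a classic time-and-state bug.
        with self._lock:
            current = self.value
            self.value = current + 1

def worker(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=worker, args=(counter, 1000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 8000 with the lock; often less without it
```

In a distributed system the same check-then-act hazard appears between machines, where there is no shared lock, which is why time and state get harder as applications spread out.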