Tag Archives: Security

HTTPS is here

During the last few months, I have written several posts on my company’s blog about how to secure a site with HTTPS. I started off talking about how to encrypt an Azure web site with Let’s Encrypt, continued with how to prevent the browser from being tricked into making non-HTTPS requests to the server, and finally talked about how to narrow the range of certificate issuers the browser should trust for our site, so that ill-behaving issuers cannot make it insecure, using so-called certificate pinning.
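
The posts themselves cover the details; as a quick illustration of what the response headers involved look like, here is a minimal sketch of a servlet filter that adds an HSTS header and a pin set. It assumes a Java servlet stack purely for illustration (the original posts target an Azure web site), and the pin values and max-age choices are placeholders.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Sketch only: adds the two headers discussed above to every response.
public class TransportSecurityFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        // Tell the browser to use HTTPS for this host for the next year,
        // including subdomains (HSTS).
        httpResponse.setHeader("Strict-Transport-Security",
                "max-age=31536000; includeSubDomains");
        // Pin the certificate chain to known public keys; the base64 values
        // below are placeholders, not real pins.
        httpResponse.setHeader("Public-Key-Pins",
                "pin-sha256=\"primary-key-hash==\"; pin-sha256=\"backup-key-hash==\"; max-age=5184000");
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}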

Quite recently, there have been discussions about how HTTPS is gaining traction and becoming the norm. Your web site should use it, too.

Is the Internet security battle lost?

According to this New York Times article, researchers at Stanford University vote in favor of starting all over, redesigning the Internet. I wonder if that is the way to go? At the same time, they suggest an evolutionary approach:

“They argue that their new strategy is intended to allow new ideas to emerge in an evolutionary fashion, making it possible to move data traffic seamlessly to a new networking world.”

The Internet has indeed developed in an evolutionary fashion, so how can one prevent ending up in the same mess once again?

Windows CardSpace anyone?

I was at a presentation about Windows CardSpace a couple of days ago. Beautiful technology it may be, but I cannot help questioning the adoption of CardSpace in the real world. I cannot say I have ever come across a site that supports it. Have you? (If so, please let me know.) OpenId, on the other hand, seems to be gaining quite a bit of momentum, being supported by some of the big Internet companies out there (Yahoo!, Google, and AOL, to name a few).

OK, CardSpace and OpenId do not offer exactly the same solution, and are in some respects not comparable. The biggest difference is OpenId’s reliance on passwords as its authentication mechanism (which is one of the reasons it offers little protection against phishing attacks), a problem CardSpace solves using cryptography. However, there are a lot of similarities:

  • Both offer a distributed model that accepts various identity providers (the user can choose from a number of IdPs)
  • Both address the challenge of maintaining separate user accounts and passwords for different Internet services

“OpenId is no good because it isn’t secure”

This was the presenter’s response when I asked about the adoption of CardSpace versus the adoption of OpenId. I think it is a gross oversimplification that serves no other purpose than spreading FUD about security.

First of all, if OpenId is good enough for Yahoo! and the like, it will probably be good enough for 80% of the sites on the Internet. I can think of a lot more sites that require “less security” than Yahoo! does than sites that require a higher security level.

Secondly, security is not binary (secure – not secure). There are different levels of security. Saying that one solution is secure and another one is not, is being ignorant of the field of security. Basically, security (like everything else) comes at a cost. In the case of CardSpace, the cost is maintaining your cards and the corresponding public/private key infrastructure. I do not know CardSpace in detail, but I suspect a main challenge here will be exactly the same as for other solutions based on public/private keys: how do you bring your keys with you? For instance, if you created a card in CardSpace on your workstation at work, how do you bring it with you when you want to log in from your home computer or from an Internet café? Keeping the cards on a USB stick would probably be an option, but even that limits the usage quite a lot. Passwords, on the other hand, you carry with you in your head (at least, that’s the idea ;)).

OOPSLA’07 – Security

My second day at OOPSLA consisted of two security-related workshops. The first one was entitled “Security Patterns and Secure Software Architecture” and was presented by Munawar Hafiz. Security patterns seem to be an interesting topic and may become an important tool for security professionals.

The second tutorial was “Software Security: Building Security In” by Gary McGraw. Although I have already read a couple of his books, I found the tutorial very interesting. Many good insights (and some funny ones) were touched upon.

Perimeter defense does not work

One of the important points was that making an insecure application secure by putting a firewall in front of it is flawed for many reasons. Trying to shield the application from the world is kind of the opposite of what we want: we want to be on the net, so we should make the application itself secure accordingly. Furthermore, a firewall will not prevent attacks from insiders.

Security people’s job is to say “no”!

I can honestly say that this has crossed my mind a few times. Although I am often involved with security, my job is not to say no, hence I can deduce that I am not a “security person”. 🙂

Does the choice of programming language matter for security?

According to Gary McGraw, it certainly does. For security, keep away from C and C++; in general, you should select a language that offers type safety, which excludes them both. A further question is whether a statically typed language is better for security than a dynamically typed one. In my opinion, the answer is not clear, although the presenter would probably go for a statically typed language.
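
As a small illustration of my own (not from the tutorial) of what type and memory safety buys you: the same out-of-bounds write that silently corrupts memory in C is rejected at run time in a type-safe language such as Java.

public class BoundsCheckDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[8];
        try {
            // In C/C++ this write would be undefined behaviour and a classic
            // source of exploitable buffer overflows; here the runtime checks
            // the index and fails loudly instead of corrupting memory.
            buffer[8] = 42; // one element past the end
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Out-of-bounds write rejected: " + e);
        }
    }
}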

More code – more bugs

This holds true for all bugs, not only those related to security, so it is a fact that everyone dealing with software development should be aware of. When your codebase grows, your number of bugs grows. Simple as that. You should honestly work on keeping your codebase as small as possible.

Security is not a feature or a function

Security is more of a quality aspect than a feature. I think this makes a lot of sense, and in fact I think it is reflected in Microsoft’s notion of “trustworthy computing”.

One-pager architecture

A good way to begin a security review of an architecture is with a one-page logical overview of the application as a basis for discussion. I guess this practice is not limited to security.

Penetration testing has limited use

I guess this can be summarized as follows: penetration testing cannot verify that your code is good, it can only verify that your code stinks. The point is that penetration testing can only discover the most serious problems, and running penetration tests alone is not enough to conclude that your application is secure.

Attackers use same tools as software people do

Attackers use compilers, decompilers, static analysis tools, and coverage tools the same way software developers do. Hence, developers who are familiar with these tools should learn how attackers use them so that they can defend against such attacks. Network people, on the other hand, would not know a compiler if it bit them.

Time and state will be biggest problem in the future

Today, improper input handling is regarded as the biggest security problem in applications. Looking into the crystal ball, time and state will probably move up to become the most important one, since applications are getting more distributed all the time and keeping track of time and state across locations will only get harder.
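
To make “time and state” a bit more concrete, here is a small sketch of my own (not from the talk) of a classic check-then-act race: two threads both observe that no session exists and both create one, because the check and the update are not atomic.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class CheckThenActDemo {
    // Plain HashMap plus a separate check and put: not safe under concurrency.
    private static final Map<String, String> sessions = new HashMap<String, String>();
    private static final AtomicInteger created = new AtomicInteger();
    private static final CountDownLatch start = new CountDownLatch(1);

    private static void logIn(String user) throws InterruptedException {
        start.await();                               // line the threads up
        if (!sessions.containsKey(user)) {           // time of check
            created.incrementAndGet();
            // Another thread (or another node in a distributed system) may
            // have created a session for the same user in the meantime.
            sessions.put(user, "session-for-" + user); // time of use
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable attempt = new Runnable() {
            public void run() {
                try {
                    logIn("alice");
                } catch (InterruptedException ignored) {
                }
            }
        };
        Thread t1 = new Thread(attempt);
        Thread t2 = new Thread(attempt);
        t1.start();
        t2.start();
        start.countDown();
        t1.join();
        t2.join();
        // Frequently prints 2: both threads passed the check before either had
        // stored a session. Making check-and-act atomic (for example with
        // ConcurrentHashMap.putIfAbsent) removes the race.
        System.out.println("Sessions created for alice: " + created.get());
    }
}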

Norwegian sites leaking information

Norwegian tabloid Dagbladet revealed yesterday that several commercial and non-commercial sites can be exploited to perform identity theft.

In Norway, every person is assigned a unique number (‘fødselsnummer’ in Norwegian), similar to the US Social Security Number. Although legal restrictions apply, several sites use this number to uniquely identify a person.

In this particular case, a hacker created a tool that could reveal identity information by combining information from several sites, using the following steps:

  • Generate a random identifier. The format and the algorithm for creating one are publicly known (see the sketch below).
  • Use site 1 to test whether the generated identifier is in use. This is possible because site 1 uses the number as the user name, and the logon procedure behaves differently depending on whether the user name exists.
  • Use site 2 to get personal details about the person to whom the generated identifier belongs (surname, given name, address).

This is of course possible because the sites are designed poorly and leak information (OWASP Top Ten vulnerability #6). The second mistake is that site number two uses the unique number for authentication.
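
The first step works because the structure of the number is predictable. As a rough sketch of my own (using the commonly published MOD11 weights; treat the details as illustrative, not authoritative), the two check digits of an eleven-digit fødselsnummer can be verified like this:

public class FodselsnummerCheck {

    // Weights for the two MOD11 check digits, as commonly published.
    private static final int[] WEIGHTS_1 = {3, 7, 6, 1, 8, 9, 4, 5, 2};
    private static final int[] WEIGHTS_2 = {5, 4, 3, 2, 7, 6, 5, 4, 3, 2};

    // Returns true if the 11-digit candidate has valid check digits.
    // A real number must also encode a valid birth date and individual number.
    static boolean hasValidCheckDigits(String fnr) {
        if (fnr == null || !fnr.matches("\\d{11}")) {
            return false;
        }
        int[] d = new int[11];
        for (int i = 0; i < 11; i++) {
            d[i] = fnr.charAt(i) - '0';
        }
        int k1 = checkDigit(d, WEIGHTS_1);
        int k2 = checkDigit(d, WEIGHTS_2);
        // A remainder yielding 10 means the candidate can never be issued.
        return k1 != 10 && k2 != 10 && d[9] == k1 && d[10] == k2;
    }

    private static int checkDigit(int[] digits, int[] weights) {
        int sum = 0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * digits[i];
        }
        int k = 11 - (sum % 11);
        return k == 11 ? 0 : k;
    }
}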

HttpOnly broke my Selenium tests

On my current project (running .NET 2.0), I have been using Selenium to test various security-related aspects of the application (see my earlier posts “Could Selenium be used for security testing?” and “Selenium with support for cookie-management”). I have been happily using Firefox for running my tests, but today I tried to run the tests in Internet Explorer 7. Without success.

The thing is that I have been using Selenium to verify login-related functionality, so a test could for instance be something like the following:

  • Test that a user can successfully log in by providing correct username and password
  • Test that a user’s cookie session is ended when logging out

In order to run these tests successfully, I had to manipulate cookies from within the tests (roughly as sketched below):

  • To prevent tests from interfering with each other, I had to remove any session cookies in between tests
  • Test for existence of session cookies
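
The cookie handling looked something like the sketch below. This is a simplified reconstruction assuming the Selenium RC Java client; the element locators, URLs and credentials are placeholders, not the ones from the real tests.

import com.thoughtworks.selenium.DefaultSelenium;

public class LogoutCookieTest {

    public static void main(String[] args) {
        // Assumes a Selenium RC server on localhost:4444 and the application
        // under test at the URL below; both are placeholders.
        DefaultSelenium selenium =
                new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/myapp/");
        selenium.start();
        try {
            // Clean up between tests so an old session cannot interfere.
            selenium.open("/myapp/");
            selenium.deleteCookie(".ASPXAUTH", "/");

            // Log in and verify that the authentication cookie was set.
            selenium.open("/myapp/login.aspx");
            selenium.type("username", "testuser");
            selenium.type("password", "secret");
            selenium.click("loginButton");
            selenium.waitForPageToLoad("30000");
            if (!selenium.getCookie().contains(".ASPXAUTH")) {
                throw new AssertionError("Expected a session cookie after login");
            }

            // Log out and verify that the session cookie is gone.
            selenium.click("logoutLink");
            selenium.waitForPageToLoad("30000");
            if (selenium.getCookie().contains(".ASPXAUTH")) {
                throw new AssertionError("Session cookie should be removed on logout");
            }
        } finally {
            selenium.stop();
        }
    }
}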

When running my tests in Firefox, this worked well. I could perform operations on the .ASPXAUTH cookie, which is the cookie that .NET uses to identify an authenticated session. When running in IE 7, it breaks. The reason is that Microsoft has introduced a new cookie attribute called ‘HttpOnly’ that .NET uses, so the Set-Cookie HTTP header looks for instance like this:

Set-Cookie: .ASPXAUTH=bisxfb45rbiclmjmqu4aa345893763387328743238736; path=/; HttpOnly

IE 6 SP1 (and apparently also IE 7) makes such cookies inaccessible to JavaScript, as explained here: Mitigating Cross-site Scripting With HTTP-only Cookies. Hence, my Selenium tests were unable to test for or manipulate these cookies.

I have mixed feelings about this. Everything that helps security makes me happy. However, everything that makes my application hard to test is baaaaaad. And I mean really bad. I think the technical solution Microsoft has come up with here is good – it really makes sense. Why should JavaScript be able to manipulate session cookies like these anyway? I cannot think of any good use case for that. However, this is proprietary stuff that Microsoft has come up with, not an agreed standard, and that makes my life as a developer harder. Not good.