ASP.NET: Delegate identity from a web application to a back-end web application

One of the things that seems very simple in a PowerPoint presentation, but is not that simple in practice, is having a web user’s identity forwarded from a calling web application to another web application when using Kerberos.

The case is as follows: I have an intranet application A which uses Integrated Windows Authentication to authenticate the user. While processing a request from a web user, application A makes an HTTP request to intranet application B. Application B requires the web user to be authenticated in order to process the request. The most attractive solution to this is often what Microsoft refers to as identity delegation. Simple in a PowerPoint presentation, but alas, not so simple in practice.

First of all, there are a number of preconditions in the computing environment configuration that need to be fulfilled. I found a very good summary of the gotchas in this respect here. In my case, points 2 and 6 were missing (I knew about the other ones beforehand). So, with all the configuration in place, the only thing left is the code and configuration in application A.

Basically, you need to make the application impersonate the web user (meaning that it will run with the credentials of the web user). There are two ways to do this. If you wish the entire request to run as the web user, you can insert an <identity impersonate="true" /> element under <system.web> in the application’s web.config (an example follows after the code block below). Or, if you wish only the request to application B to run as the web user, you can do it programmatically:

using System.Security.Principal;
using System.Web;
...
// Note that Identity is a property, not a method
WindowsIdentity identity = (WindowsIdentity)HttpContext.Current.User.Identity;
using (identity.Impersonate())
{
    // ... code to call application B goes here ...
}
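
For reference, the web.config variant, which makes every request run as the authenticated web user, would look something like this:

<configuration>
  <system.web>
    <authentication mode="Windows" />
    <identity impersonate="true" />
  </system.web>
</configuration>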

Then, the next task is to call application B itself. You can do this by creating a web request:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.somethingcompletelydifferent.com");
request.ImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Delegation;
request.UseDefaultCredentials = true;
...
// GetResponse() returns a WebResponse, so a cast is needed
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
...

The important things to notice here are that we set the ImpersonationLevel property to “Delegation” and the UseDefaultCredentials property to “true”. So, putting it all together, we get:

using System.Net;
using System.Security.Principal;
using System.Web;
...
WindowsIdentity identity = (WindowsIdentity)HttpContext.Current.User.Identity;
using (identity.Impersonate())
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.somethingcompletelydifferent.com");
    request.ImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Delegation;
    request.UseDefaultCredentials = true;
    ...
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    ...
}

You can then verify that the delegation works by checking HttpContext.Current.User.Identity.Name in application B.
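
For example, in application B you could dump the user name in a page (a hypothetical code-behind, just for illustration):

protected void Page_Load(object sender, EventArgs e)
{
    // With delegation working, this writes the original web user's
    // account name (e.g. DOMAIN\user), not the application pool identity
    Response.Write(HttpContext.Current.User.Identity.Name);
}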

Spring.NET: programmatically add objects to the existing (XML) application context

My experience is that Spring.NET configuration files tend to grow very large. As far as I can figure, there are two principal problems that arise from this:

  1. The configuration files get difficult to read and maintain
  2. It gets easier to introduce errors in the configuration because of its size

In general, I am in favour of keeping configuration files as small as possible. I often work with web applications that can (quite) easily be redeployed to the production environment, hence I always ask the question “will this value ever change between environments or deployments?” when considering introducing a new configuration entry.

Now, the Spring XML configuration usually serves two main purposes: to wire together the application, and to provide values that should be possible to change between deployments of the application or between environments. The first purpose, I would argue, does not necessarily need to live in the XML configuration. Rather, if the wiring is done in code, we get the benefit that the compiler will tell us right away if there are typos or missing references. If the wiring is in the XML configuration file, such errors will not surface until the application starts.

So, the question I had was how Spring context wiring could be combined between code and XML. I found one way of doing it, but it is only applicable to singleton objects.

Say, for instance that we have an object “something” that we wish to have configured in XML:

  <object id="something" type="SpringTest.Something, SpringTest" singleton="false"/>

Then, we have a class that we want to initialize in code:

class Foo
{
    public Foo() { }

    private Something _s;

    // The property must be public for Spring to be able to inject into it
    public Something S
    {
        set { _s = value; }
        get { return _s; }
    }
}

Now, we see that Foo has a dependency on Something; it needs an instance of Something to be injected. We can use the Spring context to do this after we have created the instance of Foo:

using Spring.Context;
using Spring.Context.Support;
...
IApplicationContext context = ContextRegistry.GetContext();
Foo f = new Foo();
// Inject dependencies into the existing instance, according to
// the object definition named "fooPrototype"
context.ConfigureObject(f, "fooPrototype");

But Spring does not yet know that the Foo instance needs Something injected. Hence, we need to tell Spring by creating what I would call a “prototype” or “template” object configuration:

<object id="fooPrototype" type="ContextTestProject.Foo, ContextTestProject">
   <property name="S" ref="something"></property>
</object>

The final step is then to register our newly created object in the Spring context:

// The XML-based context exposes the underlying object factory,
// where the instance can be registered under the name "foo"
XmlApplicationContext xmlContext = context as XmlApplicationContext;
xmlContext.ObjectFactory.RegisterSingleton("foo", f);

After this, the Foo instance is available for the application in the Spring context.
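
Putting the pieces together, a minimal sketch (reusing the names from the snippets above) could look like this:

using Spring.Context;
using Spring.Context.Support;
...
IApplicationContext context = ContextRegistry.GetContext();

// Create the instance ourselves, instead of asking Spring for it
Foo f = new Foo();

// Let Spring inject the "something" dependency declared in "fooPrototype"
context.ConfigureObject(f, "fooPrototype");

// Register the configured instance so the rest of the application can find it
XmlApplicationContext xmlContext = context as XmlApplicationContext;
if (xmlContext != null)
{
    xmlContext.ObjectFactory.RegisterSingleton("foo", f);
}

// Elsewhere in the application:
Foo sameInstance = (Foo)ContextRegistry.GetContext().GetObject("foo");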

What is “Oslo”?

At NDC a couple of days ago, I went to a session where David Chappell talked about Microsoft’s forthcoming “Oslo”. He went to great lengths not to reveal too much, as Microsoft is keeping everything very secret. In fact, he spent more time explaining what “Oslo” is not than what it actually is.

Figuring out what it is actually intended to be is not easy. However, from the presentation, we know that “Oslo” is more of a “technology” or “platform” than a product. It will consist of the following parts:

  • The Repository. It is a storage space with schemas that define its data types. Exactly what kind of information it is intended for, or limited to, is not known. However, examples include things such as process definitions, workflow definitions, IT infrastructure information, and SLAs.
  • The Visual Editor. This is a general-purpose tool that allows for editing of content in the repository; general-purpose meaning that it can be used for different types of data. However, not all communication with the repository needs to go through this tool. Special-purpose applications or tools can connect to and interact with the repository directly.
  • Extensions to Windows Workflow Foundation (WF). I am not sure exactly what kind of extensions we will see, but I can guess that it would mean extra activity components.
  • The process server. Basically, WF does not define any host process for running workflows, and the way I figure it, the process server implements such a process. It will contain a component called Lifecycle Manager that can manage many process host instances (I guess for load balancing, failover, etc.). The process server will also contain the ability to run BizTalk artifacts. A question that comes to mind is whether the process server is “BizTalk for managed code”, built with the capabilities of WF and WCF? Time will tell.

So what is the common denominator for all this? I am not sure. I can’t help it, but one word that keeps popping up in my mind is “governance”. Will this be “Microsoft’s tool for IT governance”?

Anyway, the timeline for this is not known. When will it be available? All we know is that Microsoft is planning to deliver it in three releases. Will it be in 2009?

DataSets – thanks, but no thanks

For reasons previously unclear to me, I have never really felt comfortable with ADO.NET DataSets. With regard to topics like testability, object orientation, and encapsulation, they have always left a bitter taste in my mouth. Furthermore, I have not come across any really good use for them, which nourished my mistrust even more. (I am not saying that there aren’t any good uses, though.) So, the other day I started to look deeper into the matter to try to find some more solid arguments.

The first clue came from David Veeneman’s article “ADO.NET for the Object-Oriented Programmer – Part One“, where he claims that “ADO.NET doesn’t work with object designs because it’s not supposed to work with objects!” and that the best way to use ADO.NET in an object-oriented design is not to use it. Basically, using ADO.NET “all the way” – including DataSets – will result in a data-driven application rather than an object-oriented one. But I want object-oriented…

In my application, I would like to have my data in business objects, not in DataSets. This is the basic concept of encapsulation: I want to place data, and the operations on that data, in my class, hiding the nitty-gritty details from the outside world. As Jeremy D. Miller points out, you cannot embed any real logic in a DataSet, and you have to be careful about duplication of logic. Another point he makes, which I think is very important, is that DataSets are clumsy to use inside automated tests in terms of test setup. This matches my experience exactly. Easy testability is something you should look for in your application/library/technology/gizmo.
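
To illustrate the test setup point, here is a contrived sketch (the Order class is hypothetical, just for the comparison):

using System.Data;
...
// Arranging test data in a DataSet: the schema must be rebuilt by hand
// in every test before a single row can be added
DataTable table = new DataTable("Order");
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Total", typeof(decimal));
table.Rows.Add(1, 100m);
DataSet orders = new DataSet();
orders.Tables.Add(table);

// Arranging the same data as a business object: one line,
// and the compiler verifies it
Order order = new Order(1, 100m);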

So, is there any situation where DataSets should be used? According to Scott Mitchell (in his article “Why I Don’t Use DataSets in My ASP.NET Applications“), DataSets should only be used:

  1. In a desktop (WinForms) application
  2. For sending/receiving remote database information, or for allowing communication between disparate platforms

Scott then goes on to conclude that he generally recommends using DataReaders rather than DataSets in web applications. As he points out, you might be tempted to use DataSets to cache data from the database, but he argues that you are probably better off storing custom objects instead, as it is more efficient and removes the tight coupling to database tables.
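
As a sketch of that approach (the Product class and the connectionString variable are hypothetical, just for illustration):

using System.Collections.Generic;
using System.Data.SqlClient;
...
List<Product> products = new List<Product>();
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Product", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Map each row to a plain object; the cached collection
            // then has no ties to the table schema
            products.Add(new Product(reader.GetInt32(0), reader.GetString(1)));
        }
    }
}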

Lo, the conclusion. If I were you, I would hesitate to use DataSets for anything other than small, data-driven applications. Here’s why:

  • Results in a non-object-oriented application, which in turn lowers cohesion, tightens coupling, and hurts encapsulation
  • Testability suffers. DataSets are very awkward to work with in unit tests. Forget about TDD.
  • Tight coupling between the database design and the rest of the application. Makes change cumbersome.
  • YAGNI – DataSets offer a lot of functionality that you probably don’t need. It boils down to design/develop-time efficiency vs. run-time efficiency (pointed out in “More On Why I Don’t Use DataSets in My ASP.NET Applications”, also by Scott Mitchell). Personally, I think that when the application grows larger, the design/develop-time efficiency is more or less lost, and the maintainability problems with DataSets set in.

ADSI Edit does not support simple LDAP bind

This came as a surprise to me. I began to suspect it when I was unsuccessful in connecting with ADSI Edit, while successful using LDP. I got confirmation in this post on Microsoft TechNet.

I have been unsuccessful in finding any documentation on this, although the documentation does state that “ADAM ADSI Edit does not support Secure Sockets Layer (SSL) connections”. It would have been nice if it also pointed out the limitation when it comes to binds…