Category Archives: System integration

Adding WCF REST services to existing ASP.NET web application

If you want to create a new WCF services application with REST support, the WCF REST Templates are brilliant. However, if you have an existing ASP.NET application from which you want to expose REST services, there are a few manual steps you need to take to get it up and running:

Add assembly references

Add references to the following assemblies in your existing web project:

  • System.ServiceModel
  • System.ServiceModel.Activation
  • System.ServiceModel.Web

Create service class

Create a new service class where you will implement the service:

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class LetterService
{
    [WebGet(UriTemplate = "")]
    public List<string> GetList()
    {
        return new List<string> { "a", "b", "c" };
    }
}

Register service route

In Global.asax.cs, define a route to the service:

// Requires: using System.ServiceModel.Activation; using System.Web.Routing;
void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.Add(new ServiceRoute("letter", new WebServiceHostFactory(), typeof(LetterService)));
}

Enable ASP.NET compatibility

Add the following to web.config:

<configuration>
   <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
   </system.serviceModel>
</configuration>

…and you are good to go! The service will be available at http://<server>/letter
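
As a quick smoke test, the endpoint can be called with a plain HTTP GET from any client. Here is a minimal sketch using WebClient (the host name below is just a placeholder for your own server):

using System;
using System.Net;

class SmokeTest
{
    static void Main()
    {
        // GET http://<server>/letter returns the list serialized as XML
        // (or JSON, if automatic format selection is enabled and requested).
        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString("http://localhost/letter"));
        }
    }
}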

Optional: enable help

In order to get a nice help page for clients connecting to the service, add the following under the system.serviceModel element in web.config:

<standardEndpoints>
   <webHttpEndpoint>
      <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
   </webHttpEndpoint>
</standardEndpoints>

Then, help will be available at http://<server>/letter/help

The mother lode for IIS, Kerberos and IWA information

I just came across Ken Schaefer’s blog, and I found that he has posted a series of excellent posts concerning various aspects of getting Integrated Windows Authentication / Kerberos to work on IIS:

Simply a great source of information!

The future of portal frameworks

Five years ago there was a lot of hype around portal technologies. (You know of which I speak: portlets, JSR-168, WSRP, etc.) In 2004, Gartner listed technologies such as JSR-168 (Portlet Specification), JSR-170 (Content Repository for Java™ Technology API) and WSRP (Web Services for Remote Portlets) at the peak of their hype cycle in their “Hype Cycle for the Portal Ecosystem“. My impression is that since then those technologies have kind of faded away.

In my personal experience, I have not seen any really successful portal technology deployments. (Of course, I am not implying that there are none in existence, only that I haven’t seen them.) I have participated in my share of portal technology implementation projects, but none of them, in my opinion, lived up to their promise. Now, seeing that there is less and less talk about portals (except from some software vendors that are still eager to sell the products they have invested heavily in), I am questioning the future of portal frameworks as we know them.

What value do portals add?

In a discussion around web 2.0 technologies, a corporate communications person in a large international telecommunications company stated something like (I don’t remember his exact words) “portals have not really caught on – after all, people prefer to read their email in Outlook instead of settling for the inferior experience in a portal”. This is a good point – many corporate portals focus on bringing a subset of the functionality of other applications into a portal or workspace. But what is the added value, really? After all, Firefox gave us tabbed browsing, which has since been adopted by all major browser vendors. Switching between your webmail and your (web-based) CRM system is only a Ctrl-Tab away… In most scenarios, integration of applications using portal products did not deliver more than that.

A portal adds value when it serves as a common entry point, a place to start when you want to access information or a certain piece of functionality. A typical example would be your corporate intranet. But that does not mean the functionality necessarily needs to be delivered through the portal. Rather, the portal should guide or direct you to the application that delivers it – only a click away.

Another area where portals could add value is when data or functionality from several applications can be combined. This has been a promise of portal frameworks that, in my opinion, has been delivered only to a very limited degree. Inter-portlet communication never happened to any large extent.

What is the cost of portals?

Compared to plain web technologies like Java Servlets, ASP.NET and the like, portal frameworks not only typically represent an extra licensing expense, they also add quite a bit of complexity. In addition to configuration complexity, there is added complexity associated with customization and development: development models for portals are heavier, and so are the development environments. Furthermore, portal frameworks constrain you in several ways, for instance when changing the look and feel or the user interface experience. What is relatively easy to change on a standard web platform is much harder to change in a portal.

With the emergence of agile methodologies, disciplines like test-driven development and automated testing have become something we have learned to appreciate. This has turned out to be another area where portal frameworks get in your way as a developer: they are inherently hard to test.

The challengers

With the emergence of web 2.0 technologies, backed by large Internet companies like Google and Yahoo! that do not sell software but services, the focus seems to be on lightweight technologies like Ajax, mashups, RSS, and REST-based services. Widgets/gadgets like the ones offered by services such as Netvibes, iGoogle, Yahoo! and live.com are the soup du jour. They offer a lightweight development model with a very low startup cost: no specific server-side technology knowledge is required, and no specialized IDE or dev tools are needed. The big question in my mind is whether these will challenge “traditional” portal technologies as the leading enabling technologies for intranet content aggregation. Quite recently, Netvibes chose to open source its JavaScript widget engine. Furthermore, open source projects like Shindig are popping up that aim to offer developers frameworks for widget development.

Portal frameworks also continue to evolve. JSR-168 has been succeeded by JSR-286, and there is a version 2 of WSRP. The question is whether this is enough to make portlets prosper. Personally, I don’t think so, as the problems are at a more fundamental, conceptual level than API specifications. But time will tell…

In conclusion

“Traditional” portal frameworks have not lived up to expectations. The use cases for portal technology represent niche functionality, at most. Furthermore, portal technology represents a heavy development model that will in many cases slow you down compared to other technologies. If you are considering implementing portal technology in your company, you should very carefully investigate the cost/benefit. Do not listen exclusively to portal vendors; talk to peer companies that have implemented portals and ask them about their experiences. Talk to developers. Keep a close eye on emerging technologies. Keywords include widgets, Ajax, RSS, and mashups.

What is “Oslo”?

At NDC a couple of days ago, I went to a session where David Chappell talked about Microsoft’s forthcoming “Oslo”. He went to great lengths not to reveal too much, as Microsoft is keeping everything very secret. In fact, he spent more time explaining what “Oslo” is not than what it actually is.

Figuring out what it is actually intended to be is not easy. However, from the presentation we know that “Oslo” is more of a “technology” or “platform” than a product. It will consist of the following parts:

  • The Repository. A storage space with schemas that define its data types. Exactly what kind of information it is intended for, or limited to, is not known. However, examples include things such as process definitions, workflow definitions, IT infrastructure information, and SLAs.
  • The Visual Editor. This is a general-purpose tool for editing content in the repository, general purpose meaning that it can be used for different types of data. However, not all communication with the repository needs to go through this tool; special-purpose applications or tools can connect to and interact with the repository directly.
  • Extensions to Windows Workflow Foundation (WF). I am not sure exactly what kind of extensions we will see, but I can guess that it would mean extra activity components.
  • The process server. Basically, WF does not define any host process for running workflows, and the way I figure it, the process server implements such a process. It will contain a component called Lifecycle Manager that can manage many process host instances (I guess for load balancing, failover, etc.). The process server will also contain the ability to run BizTalk stuff. A question that comes to mind is whether the process server is “BizTalk for managed code”, built with the capabilities of WF and WCF. Time will tell.

So what is the common denominator for all this? I am not sure. I can’t help it, but one word that keeps popping up in my mind is “governance”. Will this be “Microsoft’s tool for IT governance”?

Anyway, the timeline for this is not known. When will it be available? All we know is that Microsoft is planning to deliver it in three releases. Will it be in 2009?

OOPSLA’07: The Future of SOA

Yet another panel at OOPSLA discussed SOA, this one entitled “The Future of SOA: What worked, what didn’t and where is it going from here”. Nothing really new came up, compared to the other discussions.

Worth mentioning, however, was Linda Northrop’s statement that SOA is not an architecture, but rather an architectural style at best. Her observation was that SOA’s big promise is interoperability, while it forgets all other architectural aspects. My interpretation of this is that the importance of interoperability has been grossly overemphasized, while important architectural issues such as quality of service and security are not properly addressed.

Another insight I got from the panel was that you should not try to build transactions across services, as this will make the services tightly coupled. This was best formulated by Nicolai M. Josuttis, who suggested that compensations should be used instead of transactions.
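
To illustrate the idea with a sketch of my own (the service interfaces below are hypothetical, not something presented at the panel): instead of enlisting two services in a single distributed transaction, each step is paired with a compensating action that is invoked if a later step fails.

// Hypothetical service interfaces; the point is the pattern, not the API.
public interface IBookingService
{
    string ReserveHotel(string customerId);
    void CancelHotel(string reservationId); // compensating action
}

public interface IPaymentService
{
    void Charge(string customerId, decimal amount);
}

public class BookTripHandler
{
    private readonly IBookingService _booking;
    private readonly IPaymentService _payment;

    public BookTripHandler(IBookingService booking, IPaymentService payment)
    {
        _booking = booking;
        _payment = payment;
    }

    public void BookTrip(string customerId, decimal amount)
    {
        // Step 1: reserve, remembering how to undo it.
        string reservationId = _booking.ReserveHotel(customerId);
        try
        {
            // Step 2: charge the customer.
            _payment.Charge(customerId, amount);
        }
        catch
        {
            // No distributed transaction: apply the compensation instead of a rollback.
            _booking.CancelHotel(reservationId);
            throw;
        }
    }
}

The services remain loosely coupled, at the cost of having to design an explicit “undo” operation for each step.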

OOPSLA’07: The Role of Objects in a Service-Obsessed World

At OOPSLA, SOA was the subject of many panels, one of them entitled “The Role of Objects in a Service-Obsessed World”. The panel moderator, John Tibbetts, started out by stating that SOA is the worst case of vendor-instigated hysteria he has ever seen. Furthermore, he pointed out that he has never seen any architecture in Service Oriented Architecture, and that in SOA there is only marketecture. This statement has been repeated many times at this conference, and it is one I very much agree with.

Furthermore, Tibbetts went on to describe what he called a marketecture diagram (often used by commercial software vendors, as opposed to an architecture diagram), which has the characteristic of including boxes for virtues (for instance responsiveness) and for people, and in which adjacent blocks may contain products with overlapping responsibilities. The latter happens, of course, when the vendor’s product suite contains such overlapping products. There is nothing wrong with marketecture diagrams; they may be used to position products. However, they should not be mistaken for architecture diagrams. In summary, we should get rid of the A in SOA because there is no architecture there.

Another panelist, Jeroen van Tyn, shared his experience from failed SOA projects and pointed out that he has yet to see an SOA being driven by the business. Au contraire, SOA is yet another thing that technology people are trying to sell to the business. Furthermore, he referred to a survey showing that 70% of the web services out there are used within the same application, and the big question is of course why the heck we are doing that! He then went on to state that we need to analyze business needs to find technical solutions, not start from the services we have and try to find a business problem that fits them.

Ward Cunningham was also on the panel, pointing out that in a world of services, automated testing across the entire lifecycle of services is a key success factor. In my opinion this is not only a very important factor, it is also one of the most challenging ones. How can we do effective testing across application and organizational boundaries? Furthermore, he pointed out that once you arrive at a large number of services, versioning becomes very important. When you have 25 companies exchanging services, how do you make them all move at the same time?

Although not on the panel, Dave Thomas contributed to the discussion with (as always) passionate and colourful statements, one of them being that SOA exists only because it is a game that vendors play as a way to control their customers.

Various statements given during the debate:

  • SOA needs to be business driven. (Hm, I seem to recall hearing this before…)
  • SOA has nothing to do with tools
  • SOA does not make change management go away
  • SOA is not a technical issue, it is a business stance
  • BPM is out. (Eh, was it ever in…?)

OOPSLA’07 – SOA and Web Services

After arriving in Montreal Saturday evening, on my first day at OOPSLA, I attended the Fifth International Workshop on SOA & Web Services Best Practices. The workshop consisted of a couple of keynote talks, presentation of papers, and group work.

Olaf Zimmermann stepped in for Ali Arsanjani, giving a keynote on SOA. One of the things he talked about was three patterns that are central to SOA, namely

  • Service composition
  • Registry
  • Enterprise Service Bus

So, I guess if you have none of these in your architecture, you are not doing SOA. 😉

The second keynote of the day was given by Gregor Hohpe, author of the seminal book Enterprise Integration Patterns. He talked about the use of patterns in general, and in the context of integration and SOA in particular. One of the points he made was that the “WebMethod” approach to creating services is flawed in the context of SOA. It is certainly buzzword compliant, but that’s all. (By “WebMethod”, I refer to the approach where you declaratively annotate your existing class to generate a web service interface for it.)
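
For reference, this is roughly the kind of approach he was criticizing: an existing class turned into a service simply by decorating it, ASMX-style (the class and method names here are made up for illustration).

using System.Web.Services;

// The service interface simply mirrors the existing object model,
// instead of being designed as a coarse-grained, message-oriented contract.
public class OrderManager : WebService
{
    [WebMethod]
    public decimal GetOrderTotal(int orderId)
    {
        // ... existing business logic exposed as-is ...
        return 0m;
    }
}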

Gregor went on to talk about design patterns in general, and summarized the aspects of design patterns:

  • They are “Mind sized” chunks of information (attributed to Ward Cunningham)
  • They are used for human-to-human communication
  • They express intent (the “why” vs. the “how”)
  • They are observed from actual experience
  • They do not form a rule (rather, they offer guidance)
  • They are not copy-paste code

He also made the point that sketches are important in patterns. However, it should be emphasized that sketches should not be mistaken for blueprints. Furthermore, he made the point that patterns can effectively be used to test products and frameworks, by checking whether the product or framework can cover the design patterns.

Yet another point made (which I think makes much sense) was that declarative programming takes you further away from the execution model, which makes it hard to understand what’s going on, and harder to debug since the execution path is chosen at run time. Certainly something to think about, as we see the use of declarative programming (through annotations in Java, attributes in .NET, XSLT, and various rules engines) growing.

Looking at SOA and integration in general, Gregor went on to point out that SOA means event-based, asynchronous programming, or “programming without a call stack”. Furthermore, he warned against trying to “program in pictures”. Looking at pictures to understand the architecture is OK, but trying to program in pictures brings problems like scalability, lack of support for diff and merge, etc.

Another part of the workshop consisted of group work, where various topics around SOA and web services were discussed. One of the most important points, I think, was that we should strive for simplicity in SOA. As vendors bring out more and more products, we should really look at how we can scale down solutions and make middleware simpler.

Transferring binary documents in web services – MTOM

I am currently grappling with the challenge of transferring large binary documents over web services in an efficient manner. Finally, some of my colleagues suggested using MTOM, which seems very promising.

From my initial understanding of the protocol, I would think that MTOM has the following advantages compared with including large documents base64 or hex encoded within the SOAP envelope itself:

  • Smaller messages – less bandwidth-intensive communication
  • No base64 or hex encoding and decoding needed – fewer CPU resources needed
  • Easier XML parsing since the large document is not included in the XML document (probably both less CPU intensive and memory intensive, depending on the parser)

I would expect that the removal of base64 (or hex) encoding and decoding, together with potentially much swifter XML parsing, would significantly lower latency at intermediaries in the value chain. My problem now is that this is only a gut feeling rather than hard facts. So I am preparing to set up a test of MTOM in this respect, with different document sizes, to try to measure the differences.
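
For what it is worth, enabling MTOM in WCF is mainly a matter of switching the message encoding on the binding. Below is a minimal sketch of the kind of self-hosted test service I have in mind; the contract, address and document size are just placeholders for my setup.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    byte[] GetDocument(string documentId);
}

// Trivial implementation used only for the measurements.
public class DocumentService : IDocumentService
{
    public byte[] GetDocument(string documentId)
    {
        return new byte[10 * 1024 * 1024]; // 10 MB of dummy data
    }
}

class MtomTestHost
{
    static void Main()
    {
        // basicHttpBinding with MTOM encoding: the byte[] travels as a raw
        // binary MIME part instead of being base64 encoded inside the envelope.
        var binding = new BasicHttpBinding
        {
            MessageEncoding = WSMessageEncoding.Mtom,
            MaxReceivedMessageSize = 64 * 1024 * 1024 // raise limits for large documents
        };

        using (var host = new ServiceHost(typeof(DocumentService)))
        {
            host.AddServiceEndpoint(typeof(IDocumentService), binding,
                "http://localhost:8080/documents");
            host.Open();
            Console.WriteLine("MTOM test service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}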

Further reading:

To be continued…