Powershell gotchas: getting multiline string literals correct

The gotcha here is that the opening token, @' or @", has to be at the end of the line, and the closing token, '@ or "@, has to be at the beginning of a line.

Wrong (the opening token is not at the end of its line):

@'1
2
3
'@
Unrecognized token in source text.
At ...ps1:1 char:1
+  <<<< @'1
    + CategoryInfo          : ParserError: (:) [], ParseException
    + FullyQualifiedErrorId : UnrecognizedToken

Also wrong (the closing token is not at the beginning of a line):

@'
1
2
3'@
The string starting:
At ...:12 char:6
+ $s =  <<<< @'
is missing the terminator: '@.
At ...ps1:34 char:3
+ #> <<<<
    + CategoryInfo          : ParserError: (1
2
3'@
:String) [], ParseException
    + FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

Correct:
@'
1
2
3
'@
1
2
3

Powershell gotchas: redirect to file encodes in Unicode

The other day, I wrote a Powershell script that would manipulate a Windows batch file. After some troubleshooting (note to self: RTFM), I found that Powershell file redirection encodes output in Unicode, more specifically UCS-2 Little Endian. The problem in this case is that cmd.exe does not understand Unicode. So, the following yields a file that will not run:

@'
@echo off
echo Wi nøt trei a høliday in Sweden this ÿer?
'@ > '.\Opening titles (unicode).cmd'
& '.\Opening titles (unicode).cmd'
Output: '■@' is not recognized as an internal or external command,
operable program or batch file.
The trick is to use the Out-File cmdlet instead and explicitly set the encoding. So, the following yields a runnable file:
@'
@echo off
echo Wi nøt trei a høliday in Sweden this ÿer?
'@ | Out-file '.\Opening titles (Ascii).cmd' -Encoding ASCII
& '.\Opening titles (Ascii).cmd'
Output: Wi n?t trei a h?liday in Sweden this ?er?
In this case, it executes. However, the non-ASCII characters are not properly displayed. To achieve that as well, I used the following:
@'
@echo off
echo Wi nøt trei a høliday in Sweden this ÿer?
'@ | Out-file '.\Opening titles (OEM).cmd' -Encoding OEM
& '.\Opening titles (OEM).cmd'
Output: Wi nøt trei a høliday in Sweden this ÿer?

Powershell: another alternative to ternary operator

In C# (as well as in Java and C++), you can use the ternary operator (?:) as a shorthand for conditionally assigning a value:

var index = (a == "a") ? 1 : 2;

In Powershell, such an operator is (to my knowledge) not available. One solution is described here. As an alternative without having to create a function and alias, I suggest:

$index = @{$true=1;$false=2}[$a -eq 'a']

Mmmmm, one-liner FTW :)

ASCII vs ISO-8859-1

Note to self:

var ascii7BitEncoding = Encoding.ASCII; // ASCII
var ascii8BitEncodingIsh = Encoding.GetEncoding("iso-8859-1"); // Extended ASCII for almost all practical purposes

Where the "extended ASCII" code pages differ from ISO-8859-*, according to Wikipedia:

One notable way in which ISO character sets differ from code pages is that the character positions 128 to 159, corresponding to ASCII control characters with the high-order bit set, are specifically unused and undefined in the ISO standards, though they had often been used for printable characters in proprietary code pages, a breaking of ISO standards that was almost universal.
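
To see the practical difference, here is a small standalone sketch (my own illustration, not from the original note) showing that a character like 'ø' survives a round trip through ISO-8859-1 but not through 7-bit ASCII:

using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        const string text = "ø";

        // 7-bit ASCII cannot represent 'ø'; the default encoder fallback replaces it with '?'.
        var asciiBytes = Encoding.ASCII.GetBytes(text);
        Console.WriteLine(Encoding.ASCII.GetString(asciiBytes) == text); // False ('ø' became '?')

        // ISO-8859-1 maps 'ø' to the single byte 0xF8, so the text round-trips intact.
        var latin1 = Encoding.GetEncoding("iso-8859-1");
        var latin1Bytes = latin1.GetBytes(text);
        Console.WriteLine(latin1Bytes[0].ToString("X2"));                // F8
        Console.WriteLine(latin1.GetString(latin1Bytes) == text);        // True
    }
}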

Enabling Web Transforms when debugging ASP.NET apps

Web Transformations is a great feature. When a web application is packaged or published, the Web.config file is transformed depending on which Visual Studio / MSBuild configuration you are running, enabling environment-specific settings to change accordingly.

However, to my mind there is a shortcoming. Since the transformation happens during packaging or publishing of the application, transformation rules do not take effect when debugging a web application (typically by hitting F5 in Visual Studio), because the application is served from the project directory. The solution is essentially to change the project setup so that the transformation runs right after the project build step. Here’s how:

Step 1: Unload the project in Visual Studio

In the Solution Explorer in Visual Studio, right-click on the project and select Unload project from the context menu:

Step 2: Rename Web.config on the file system

In the Windows Explorer, browse to the project directory, and rename Web.config to Web.template.config:

Step 3: Rename Web.config in the project file

What we will do here, is to rename the existing Web.config to Web.template.config in the project file (.csproj or .vbproj). This will be the file that we keep in the version control system, and the final Web.config will instead be generated for us when building the application.

In the Solution Explorer in Visual Studio, right-click on the project and select Edit <project name>:

Search for “web.config” in the project file to locate the definitions of Web.config and the transformation files. They look something like this:

<Content Include="Web.config" />
<Content Include="Web.Debug.config">
   <DependentUpon>Web.config</DependentUpon>
</Content>
<Content Include="Web.Release.config">
   <DependentUpon>Web.config</DependentUpon>
</Content>

In the project file, for instance on the line after the one with Web.config, add the following:

<None Include="Web.template.config" />

Then, change the text of the DependentUpon element to read Web.template.config instead of Web.config. Finally, change the Content elements for Web.Debug.config and Web.Release.config to None elements. This prevents these files from being included in the deployment package. They don’t need to be, as the transformations have already been run. All in all, the snippet from above should now read:

<Content Include="Web.config" />
<None Include="Web.template.config" />
<None Include="Web.Debug.config">
   <DependentUpon>Web.template.config</DependentUpon>
</None>
<None Include="Web.Release.config">
   <DependentUpon>Web.template.config</DependentUpon>
</None>

Step 4: Running transformation when building

Now that we have a reference to Web.template.config in the project file, the next step is to run the transformation during the build. Add a custom target that runs the transformation, and include it at the beginning of the build by prepending it to the BuildDependsOn property. The resulting XML snippet looks like this:

<PropertyGroup>
  <BuildDependsOn>
    CustomWebConfigTransform;
    $(BuildDependsOn);
  </BuildDependsOn>
</PropertyGroup>
<Target Name="CustomWebConfigTransform">
   <TransformXml Source="Web.template.config"
      Transform="Web.$(Configuration).config"
      Destination="Web.config" />
</Target>

Save the file and reload the project into the Visual Studio solution. Then, in the Solution Explorer, observe that the Web.config file is now shown as missing, and that Web.template.config is present with the two transformation files beneath it in the file tree:

Step 5: Test running transformations during build

In this step, we will test that the transformations will be run when launching the application by hitting F5 in Visual Studio. First, we need to create a configuration example in Web.Debug.config. Remove all the gunk that Microsoft so helpfully has filled the file with during project creation, and insert one simple appSettings entry, making the Web.Debug.config look like this:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="Configuration" value="Debug" xdt:Transform="Insert"/>
  </appSettings>
</configuration>

Take care to include the xdt XML namespace definition. Without it, the transformations will not be run.

Open up an ASPX page, for instance Default.aspx, and enter a label that will display the value of our configuration setting:

<p>
   Configuration: <asp:Label ID="Configuration" runat="server" />
</p>

Next, enter this in the code behind:

protected void Page_Load(object sender, EventArgs e)
{
   Configuration.Text = ConfigurationManager.AppSettings["Configuration"];
}

Hit F5 to run the application and verify that it in fact is able to read from the transformed Web.config:

Step 6: Disable transformations during packaging and publishing

Since MSBuild runs the transformations by default when packaging or publishing, we need to disable this to prevent the transformations from being applied twice. To do this, unload the project from the solution as we did in step 1, and then edit the project file in Visual Studio. For each of the configurations that you have in your solution, disable the built-in transformation run by adding a TransformWebConfigEnabled element and setting it to false, like so:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
   <TransformWebConfigEnabled>False</TransformWebConfigEnabled>
   ...
</PropertyGroup>

Summary / other tweaks

What we have achieved is that the transformations are applied when building the application, making the transformed file available when running the application locally from Visual Studio. Another tip with regard to Web Transformations is to use the SlowCheetah Visual Studio add-in that Scott Hanselman has described very well here. It gives you a nice transformation preview on the context menu in Visual Studio, which is extremely helpful.

NLog: writing log entries to Azure Table Storage

In August last year, I blogged about how to get Log4Net log entries written to Azure Table Storage. In this article, I will show how the same thing can be easily achieved using NLog.

The concepts in NLog are very similar to those in Log4Net. More or less, replace the word “appender” in Log4Net lingo with “target”, and you’re good to go.

First, let’s create a class for log entries:

public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
    #region Table columns
    public string Message { get; set; }
    public string Level { get; set; }
    public string LoggerName { get; set; }
    public string RoleInstance { get; set; }
    public string DeploymentId { get; set; }
    public string StackTrace { get; set; }
    #endregion
}

Next, we need to create a class that represents the table storage service. It needs to inherit from TableServiceContext:

public class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials) : base(baseAddress, credentials) { }
    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }
    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}

Finally, as far as code is concerned, we need a custom NLog target that gets called when the NLog framework needs to log something:

[Target("AzureStorage")]
public class AzureStorageTarget : Target
{
    private LogServiceContext _ctx;
    private string _tableEndpoint;
    [Required]
    public string TableStorageConnectionStringName { get; set; }
    protected override void InitializeTarget()
    {
        base.InitializeTarget();
        var cloudStorageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(TableStorageConnectionStringName));
        _tableEndpoint = cloudStorageAccount.TableEndpoint.AbsoluteUri;
        CloudTableClient.CreateTablesFromModel(typeof(LogServiceContext), _tableEndpoint, cloudStorageAccount.Credentials);
        _ctx = new LogServiceContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
    }
    protected override void Write(LogEventInfo loggingEvent)
    {
        Action doWriteToLog = () =>
        {
            try
            {
                _ctx.Log(new LogEntry
                {
                    RoleInstance = RoleEnvironment.CurrentRoleInstance.Id,
                    DeploymentId = RoleEnvironment.DeploymentId,
                    Timestamp = loggingEvent.TimeStamp,
                    Message = loggingEvent.FormattedMessage,
                    Level = loggingEvent.Level.Name,
                    LoggerName = loggingEvent.LoggerName,
                    StackTrace = loggingEvent.StackTrace != null ? loggingEvent.StackTrace.ToString() : null
                });
            }
            catch (DataServiceRequestException e)
            {
                InternalLogger.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
                    GetType().AssemblyQualifiedName, _tableEndpoint, e.Message), e);
            }
        };
        doWriteToLog.BeginInvoke(null, null);
    }
}

So, to make it work, we need to register the target with the NLog framework. This is done in the NLog.config file:

<?xml version="1.0"?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="Demo.NLog.Azure" />
  </extensions>
  <targets>
    <target name="azure" type="AzureStorage" tableStorageConnectionStringName="Log4Net.ConnectionString" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="azure" />
  </rules>
</nlog>
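
With the target registered, application code logs through NLog as usual. A minimal sketch of what that could look like (the class and message below are placeholders of my own, not part of the original example):

using NLog;

public class OrderService
{
    // Resolves a logger named after the declaring class.
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void PlaceOrder(string orderId)
    {
        // With the rule above (minlevel="Info"), this entry is routed to the AzureStorage target.
        Logger.Info("Placing order {0}", orderId);
    }
}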

For information about how to set up your ServiceDefinition.csdef and ServiceConfiguration.cscfg files, see my previous post. You can find the code for this example on GitHub. Suggestions for improvement are very welcome.

Getting user information programmatically from Windows Identity Foundation

You can use the Windows Identity Foundation SDK to replace the authentication scheme of your ASP.NET web application. Most notably, this is useful for making your application claims-aware, which allows it to seamlessly play together with solutions like Active Directory Federation Services or the Windows Azure AppFabric Access Control Service, opening up a variety of SSO and federated authentication scenarios.

Basically, what you do is switch off the built-in ASP.NET authentication, such as forms-based authentication, and let WIF act as a proxy in front of your application. WIF uses the authorization settings in web.config (the /configuration/system.web/authorization and /configuration/location/system.web/authorization elements) and authenticates the user before the ASP.NET application receives the request. See WS-Federated Authentication Module Overview for details.

So, when the application receives the request, the user is already authenticated, which is fine. However, there are times when the application needs to know who the user is, or needs access to the other claims that were provided by the identity service. Luckily, this is available on the HTTP context. If, for instance, you wish to find the email address of the logged-in user, you can do it like so:

protected void Page_Load(object sender, EventArgs e)
{
    var claimsIdentity = Context.User.Identity as IClaimsIdentity;
    foreach (var claim in claimsIdentity.Claims)
    {
        if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")
        {
            var email = claim.Value;
        }
    }
}

If the user was authenticated using claims, Context.User.Identity will be an IClaimsIdentity which will contain a number of claims about the user. We can then iterate over these claims to find the one that we want.

The claim types are denoted using XML namespaces, which are a little bit cumbersome to work with. So, to make them easier to access, we iterate over the list of claims and make all of them available on a common object:

public static class IdentityExtension
{
    public static DynamicUser GetUserFromClaims(this HttpContext context)
    {
        var claimsIdentity = context.User.Identity as IClaimsIdentity;
        if (claimsIdentity == null) throw new FailedAuthenticationFaultException();
        return new DynamicUser(claimsIdentity.Claims);
    }
}
public class DynamicUser : DynamicObject
{
    private readonly ClaimCollection _claims;
    public DynamicUser(ClaimCollection claims)
    {
        _claims = claims;
        Id = ClaimsValue("nameidentifier")[0];
    }
    public string Id { get; private set; }
    public bool IsInRole(string role)
    {
        return ClaimsValue("role").Any(c => c.Equals(role, StringComparison.OrdinalIgnoreCase));
    }
    private string[] ClaimsValue(string claimSuffix)
    {
        return _claims.Where(c => c.ClaimType.EndsWith(claimSuffix)).Select(c => c.Value).ToArray();
    }
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        var claimValue = ClaimsValue(binder.Name.ToLowerInvariant());
        if (claimValue.Length == 0)
        {
            return base.TryGetMember(binder, out result);
        }
        if (claimValue.Length == 1)
        {
            result = claimValue[0];
        }
        else
        {
            result = claimValue;
        }
        return true;
    }
}

Then, we can access the claim values using just the last part (the suffix) of the claim type’s XML namespace:

protected void Page_Load(object sender, EventArgs e)
{
    // Declared as dynamic so that claim-backed members such as Name are resolved at runtime.
    dynamic user = Context.GetUserFromClaims();
    var id = user.Id;
    var username = user.Name; // "John Doe"
}

When hooking up the application to the Windows Azure AppFabric Access Control Service (ACS), which claims the application receives depends on your rule group configuration. Here are some examples:

Claim type                                                            | Shorthand         | Example value
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier | user.Id           | NHsTR1UXFI9xYk1xIRUNfucZ4/a5OWiILHlNyNEXozP=
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name           | user.Name         | John Doe
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress   | user.Emailaddress | john.doe@hotmail.com

If you set up role claim rules in ACS, you can even get information about the user’s roles so that you can do authorization in the code:

protected void Page_Load(object sender, EventArgs e)
{
    var user = Context.GetUserFromClaims();
    var isAdmin = user.IsInRole("administrator");
}

Log4Net: writing log entries to Azure Table Storage

Earlier, I blogged about how one can leverage Azure Diagnostics to easily set up Log4Net for your application running in Azure, and how to customize the log entries for the Azure environment.

An alternative to this two-step process of first writing to the local disk and then transferring the logs to Azure blob storage is to write the log entries directly to Azure table storage (or, in principle, to Azure blob storage for that matter). This is what I will do here.

Each log entry that the application writes will be a single row in a table in Azure Table Storage. The log message itself and various metadata about it will be inserted into separate columns in the table. In order to achieve this, we first create a class that represents each entry in the table:

public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
    #region Table columns
    public string Message { get; set; }
    public string Level { get; set; }
    public string LoggerName { get; set; }
    public string Domain { get; set; }
    public string ThreadName { get; set; }
    public string Identity { get; set; }
    public string RoleInstance { get; set; }
    public string DeploymentId { get; set; }
    #endregion
}

Note that the PartitionKey is the current year and month, and the RowKey is a combination of the date, time and a GUID. This is done to make the querying of the log entries efficient for our purpose. So, the next thing we need to do is to create a class that represents the table storage service. It needs to inherit from TableServiceContext:

internal class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials) : base(baseAddress, credentials) {}
    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }
    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}
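
Since the PartitionKey is the year and month, pulling out all entries for a given month is a single-partition query. Here is a sketch of how the log could be read back (this query helper is my own addition, not part of the original example):

using System.Collections.Generic;
using System.Linq;

internal static class LogReader
{
    // Fetches all log entries for a month given in "yyyy-MM" format, i.e. a single
    // partition, which keeps the query efficient against table storage.
    internal static List<LogEntry> GetEntriesForMonth(LogServiceContext ctx, string month)
    {
        return ctx.LogEntries
                  .Where(e => e.PartitionKey == month)
                  .ToList();
    }
}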

The method that we will actually use in our code is the Log method which takes a LogEntry instance and persists it to table storage. What we need next, is to create a new Appender for Log4Net which interacts with the table store to store the log entries:

public class AzureTableStorageAppender : AppenderSkeleton
{
    public string TableStorageConnectionStringName { get; set; }
    private LogServiceContext _ctx;
    private string _tableEndpoint;
    public override void ActivateOptions()
    {
        base.ActivateOptions();
        var cloudStorageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(TableStorageConnectionStringName));
        _tableEndpoint = cloudStorageAccount.TableEndpoint.AbsoluteUri;
        CloudTableClient.CreateTablesFromModel(typeof(LogServiceContext), _tableEndpoint, cloudStorageAccount.Credentials);
        _ctx = new LogServiceContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        Action doWriteToLog = () => {
            try
            {
                _ctx.Log(new LogEntry
                {
                    RoleInstance = RoleEnvironment.CurrentRoleInstance.Id,
                    DeploymentId = RoleEnvironment.DeploymentId,
                    Timestamp = loggingEvent.TimeStamp,
                    Message = loggingEvent.RenderedMessage,
                    Level = loggingEvent.Level.Name,
                    LoggerName = loggingEvent.LoggerName,
                    Domain = loggingEvent.Domain,
                    ThreadName = loggingEvent.ThreadName,
                    Identity = loggingEvent.Identity
                });
            }
            catch (DataServiceRequestException e)
            {
                ErrorHandler.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
                    GetType().AssemblyQualifiedName, _tableEndpoint, e.Message));
            }
        };
        doWriteToLog.BeginInvoke(null, null);
    }
}

In the code above, the actual writing to the log is done asynchronously in order to prevent the log write from slowing down the request handling. We are now done with all the coding. What is left is to use our new AzureTableStorageAppender. Here is the log4net.config:

<log4net>
  <appender name="AzureTableStoreAppender" type="Demo.Log4Net.Azure.AzureTableStorageAppender, Demo.Log4Net.Azure">
    <param name="TableStorageConnectionStringName" value="Log4Net.ConnectionString" />
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="AzureTableStoreAppender" />
  </root>
</log4net>

Notice the TableStorageConnectionStringName attribute of the param element in the configuration. It corresponds to the property of the same name in the AzureTableStorageAppender. Furthermore, note that its value is 'Log4Net.ConnectionString', which corresponds to a custom configuration setting that we will add to the ServiceDefinition.csdef file:

<ServiceDefinition ...>
  <WebRole ...>
    <ConfigurationSettings>
      <Setting name="Log4Net.ConnectionString"/>
    </ConfigurationSettings>
    ...
  </WebRole>
</ServiceDefinition>

We also need to give the Log4Net.ConnectionString setting a value in the ServiceConfiguration.cscfg file. It should be a connection string that points to the storage account to use for storing the log entries. In this example, let’s use the development storage:

<ServiceConfiguration ...>
  <Role ...>
    <ConfigurationSettings>
      <Setting name="Log4Net.ConnectionString" value="UseDevelopmentStorage=true"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

…and that’s it. You should now find the log entries in the table storage:

You can find the code for this example on GitHub. Suggestions for improvement are very welcome.

Customizing Log4net log entries on Azure

I have earlier blogged about how to use Log4Net on Azure compute. With this solution, the log files from the various running instances get transferred to a common container in Azure blob storage. When I mine the log data, I usually merge all the log files together and run text utilities like grep or sed on them.

One challenge when merging the log files together is that we then lose the information about which instance the different log entries came from. In order to fix this, we can customize the log entries so that we keep this information.

The first thing we need to do is to create a new layout class that inherits from PatternLayout:

using log4net.Layout;
namespace Demo.Log4Net.Azure
{
    public class AzurePatternLayout : PatternLayout
    {
        public AzurePatternLayout()
        {
             // TODO: add converters
        }
    }
}

Next, we need to register this class in the log4net configuration:

<log4net>
  <appender ...>
    <layout type="Demo.Log4Net.Azure.AzurePatternLayout, Demo.Log4Net.Azure">
      <conversionPattern ... />
    </layout>
  </appender>
  ...
</log4net>

So now that we have a new PatternLayout class, we can add logic to it that injects Azure-specific information into the log entries. To do so, we first need a new converter class:

using System.IO;
using log4net.Util;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    internal class AzureInstanceIdPatternConverter : PatternConverter
    {
        protected override void Convert(TextWriter writer, object state)
        {
            writer.Write(RoleEnvironment.CurrentRoleInstance.Id);
        }
    }
}

Now, we register the new AzureInstanceIdPatternConverter in the constructor of AzurePatternLayout:

public class AzurePatternLayout : PatternLayout
{
    public AzurePatternLayout()
    {
        AddConverter("roleinstance", typeof(AzureInstanceIdPatternConverter));
    }
}

Then we can change the conversionPattern element of the Log4Net configuration to use the new Azure environment information:

<layout type="Demo.Log4Net.Azure.AzurePatternLayout, Demo.Log4Net.Azure">
  <conversionPattern value="%date [%roleinstance] [%thread] %-5level %logger [%appdomain] - %message%newline" />
</layout>

…which will make the log entries look something like this:

Custom log entries

(The screenshot shows instance ids generated by DevFabric, not an instance in the cloud)
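
The same approach extends to other Azure environment data. For instance, a converter that emits the deployment ID could look like the following (my own sketch following the same pattern; the %deploymentid token is not part of the original post):

using System.IO;
using log4net.Util;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    internal class AzureDeploymentIdPatternConverter : PatternConverter
    {
        // Writes the current deployment ID into the log entry.
        protected override void Convert(TextWriter writer, object state)
        {
            writer.Write(RoleEnvironment.DeploymentId);
        }
    }
}

Registering it with AddConverter("deploymentid", typeof(AzureDeploymentIdPatternConverter)) in the AzurePatternLayout constructor then makes %deploymentid available in the conversionPattern.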

Using Log4Net in Azure Compute

Log4Net is a popular logging framework, and if you have an existing application that you wish to move to Azure compute, you probably want to avoid rewriting your application to use another logging framework. Luckily, keeping Log4Net as your logging tool in Azure is certainly possible, but there are a few hoops you have to jump through to get there.

There are several ways to achieve this goal. I decided to rely as much as possible on a feature provided in Azure Compute that allows for automatically synchronizing certain directories on the instance’s local file system to Azure blob storage. Using this approach, only a few changes need to be made to the application, and indeed none of the existing code needs to be altered.

Baseline: an existing application which uses Log4Net

In order for this example to work, we need an application that we want to move to the cloud:

  1. Start off with File->New->Project... in Visual Studio and use the “ASP.NET Web Application” template
  2. Add Log4Net capabilities to the application. This can be done by adding a reference to Log4Net using NuGet, and then configuring it as Phil Haack has described here.

Then, run the web application locally in Visual Studio to verify that the logging works.
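
For that verification, any plain Log4Net call in the application will do, for instance something like this in a code-behind (a minimal sketch with placeholder names, not code from the original post):

using System;
using log4net;

public partial class _Default : System.Web.UI.Page
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(_Default));

    protected void Page_Load(object sender, EventArgs e)
    {
        // If Log4Net is wired up correctly, this entry shows up in the configured appender.
        Log.Info("Page_Load was called");
    }
}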

Enabling the application for Azure

Starting off with the simple ASP.NET web application we created in the previous section, do the following:

  1. Right-click on the solution in Visual Studio to select Add->New Project. Use the Windows Azure Project template, and do not add any roles in the dialog box initially.
  2. Set the newly created cloud project as the startup project in the solution.
  3. Right-click on the Roles-folder of the newly created Azure project and select Add->Web role project in solution... to add the web application project as an Azure Web role.

Adding an Azure Compute web role to an existing ASP.NET solution

Now, press F5 to run the application in the local Azure development environment (DevFabric) to see that it works (functionally, not logging-wise).

So, we are done with the prerequisites. Now to the interesting parts!

Setting log directory for Azure

The first issue we will grapple with is the fact that in Azure Compute, the application effectively runs in a sandbox with limited access to the file system, which means that the “standard” approach of logging to a file does not work. Basically, the Azure compute role only has access to a certain subdirectory of the file system, and the exact location needs to be retrieved by the application at runtime.

In order to retain the existing logging in the application, we can locate the role’s designated area on disk by subclassing one of the appenders that Log4Net provides out of the box. I chose the RollingFileAppender because it provides the ability to split the log into several files. This is beneficial from an operations perspective. Here’s what the custom appender looks like:

using System.Diagnostics;
using System.IO;
using log4net.Appender;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    public class AzureAppender : RollingFileAppender
    {
        public override string File
        {
            set
            {
                base.File = RoleEnvironment.GetLocalResource("Log4Net").RootPath + @"\" 
                    + new FileInfo(value).Name + "_"
                    + Process.GetCurrentProcess().ProcessName;
            }
        }
    }
}

What happens here is that when the logging framework initializes and reads the configuration, it calls our property setter to set the log file name. This corresponds to the File parameter in the XML configuration for the appender:

<log4net>
  <appender>
    <param name="File" value="app.log" />
    ...
  </appender>
  ...
</log4net>

In the setter, the application asks the role environment for the location of the local resource called “Log4Net”. This resource is a directory designated for containing our logs, and it needs to be declared in the ServiceDefinition.csdef file:

<ServiceDefinition>
  <WebRole name="WebRole1">
    <LocalResources>
        <LocalStorage name="Log4Net" sizeInMB="2048" cleanOnRoleRecycle="true"/> 
    </LocalResources>
  </WebRole>
</ServiceDefinition>

When we have the path of the local resource, it is used to construct an absolute path for the log file. Also note that the current process name is appended to the filename. This is done because if you run the application as a WebRole in “Full IIS” mode in Azure, the web application and the RoleEntryPoint code run in different processes. (If you look at blog entries on the Internet for Azure information, keep in mind that the “Full IIS” mode was introduced with the Azure SDK version 1.3 in late 2010, and information that predates this might not be valid for the current Azure version.) This means that if there are log entries in the RoleEntryPoint as well as in the rest of your application, two processes would potentially try to keep a write lock on the same file at the same time. Therefore, we use one log file per process. Note that this is not relevant for Worker roles. For more on the execution model, take a look here.

So, now that the new custom appender is ready, we need to change the Log4Net configuration to use it. Basically, we change the appender type in the configuration section so that it looks like this:

<log4net>
  <appender name="AzureRollingLogFileAppender" type="Demo.Log4Net.Azure.AzureAppender, Demo.Log4Net.Azure">
    <param name="File" value="app.log" />
    <param name="AppendToFile" value="true" />
    <param name="RollingStyle" value="Date" />
    <param name="StaticLogFileName" value="false" />
    <param name="DatePattern" value=".yyyy-MM-dd.lo\g" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger [%appdomain] - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="AzureRollingLogFileAppender" />
  </root>
</log4net>

Now it’s time to run the application to see if the logging works. First, deploy to DevFabric, and then open the Windows Azure Compute Emulator. Right-click on the running instance, and click on Open local store….

Open the local store for a role instance running in DevFabric

Then navigate to the Log4Net subdirectory of the local store to find the log files:

Log files in local store in DevFabric

Persisting logs to Azure blob storage

The next issue we need to handle is the fact that the local file system in an Azure role instance is not persistent. Local data will be lost when the application is redeployed (and, since we set cleanOnRoleRecycle to true, also when the role recycles). Furthermore, the only way to access the local file system is using a Remote Desktop Connection. In theory, you could probably also make the directory a shared drive accessible over the Internet, but you probably would not want to do that. Besides, it would be a headache if you have a lot of instances.

So, the solution that Azure offers is a scheduled synchronization of selected local resources (directories) to Azure blob storage. What we need to do is add the following code to our RoleEntryPoint descendant:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var diagnosticsConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
        diagnosticsConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        diagnosticsConfig.Directories.DataSources.Add(
                new DirectoryConfiguration
                {
                    Path = RoleEnvironment.GetLocalResource("Log4Net").RootPath,
                    DirectoryQuotaInMB = 2048,
                    Container = "wad-log4net"
                }
        );
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticsConfig);
        return base.OnStart();
    }
}

…and that’s it. Now you can run the application and observe that a container called ‘wad-log4net’ is created in your blob service account, containing the logs:

Logs in Azure blob store

(I use the AzureXplorer extension for Visual Studio)

The solution shown here targeted an ASP.NET application running as a WebRole, but the setup works equally well for Worker roles.
