
Running Azure emulators in on-site test environment

The Azure compute and storage emulators enable testing and debugging of Azure cloud services on the developer’s local computer. On my current project, I ran into the need to install and run the emulators in the test environment. I will not go into exactly why this was needed, but it could be a possible interim solution for trying out the technology before your customer makes the decision to establish an environment in Azure. There were quite a few hurdles along the way, and I will try to summarize them all in this post.

The basic setup that I will explain in this blog post is to build a package on the TeamCity build server for deployment to the compute emulator, and to do the deployment using Octopus Deploy:

+-------------+   +--------------+   +---------------+   +--------------------+
|             |   |              |   |               |   |  Test environment  |
| Dev machine +---> Build server +---> Deploy server +---> (Compute emulator) |
|             |   | (Team City)  |   | (Octopus)     |   | (Storage emulator) |
|             |   |              |   |               |   |                    |
+-------------+   +--------------+   +---------------+   +--------------------+

Preparing the test environment

This step consists of preparing the test environment for running the emulators.

Installing the SDK

First of all, let’s install the necessary software. I used the latest version of the Microsoft Azure SDK at the time of writing (September 2014), version 2.4. This is available in the Microsoft Web Platform Installer (http://www.microsoft.com/web/downloads/platform.aspx):
Azure SDK 2.4 in Web Platform installer
Take note of the installation paths for the emulators. You’ll need them later on:

Compute emulator: C:\Program Files\Microsoft SDKs\Azure\Emulator
Storage emulator: C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator

Create a user for the emulator

The first problem I ran into concerned which user should run the deployment scripts. In a development setting, this is the currently logged-in user on the computer, but that is not the case in a test environment.

I decided to create a domain-managed service account named “octopussy” for this. (You know, that James Bond movie.) Then, I made sure that this user was a local admin on the test machine by running

net localgroup administrators domain1\octopussy /add

In order for the Octopus Deploy tentacle to be able to run the deployment, the tentacle service must run as the aforementioned user account:
Setting user for Octopus Tentacle service

Creating Windows services for the storage and compute emulators

In a normal development situation, the emulators run as the logged-in user. If you remotely log in to a computer and start the emulators, they will shut down when you log off. In a test environment, we need the emulators to keep running. Therefore, you should set them up to run as services. There are several ways to do this, and I chose to use the Non-Sucking Service Manager (NSSM).

First, create a command file for each emulator that starts it and never quits:

storage_service.cmd:

@echo off
"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe" start
pause

devfabric_service.cmd:

@echo off
"C:\Program Files\Microsoft SDKs\Azure\Emulator\csrun.exe" /devfabric
pause

Once the command files are in place, define and start the services:

.\nssm.exe install az_storage C:\app\storage_service.cmd
.\nssm.exe set az_storage ObjectName 'domain1\octopussy' 'PWD'
.\nssm.exe set az_storage Start service_auto_start
start-service az_storage
.\nssm.exe install az_fabric C:\app\devfabric_service.cmd
.\nssm.exe set az_fabric ObjectName 'domain1\octopussy' 'PWD'
.\nssm.exe set az_fabric Start service_auto_start
start-service az_fabric

Change the storage endpoints

In a test environment, you would often like to access the storage emulator from remote machines, for instance for running integration tests. Again, being geared towards local development, the storage emulator is by default only accessible on localhost. To fix this, you need to edit the file

C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe.config

By default, the settings for the endpoints look like this:

<services>
    <service name="Blob"  url="http://127.0.0.1:10000/" />
    <service name="Queue" url="http://127.0.0.1:10001/" />
    <service name="Table" url="http://127.0.0.1:10002/" />
</services>

The 127.0.0.1 host references should be changed to the host’s IP address or NetBIOS name. The IP address can easily be found using ipconfig. For example:

<services>
    <service name="Blob"  url="http://192.168.0.2:10000/" />
    <service name="Queue" url="http://192.168.0.2:10001/" />
    <service name="Table" url="http://192.168.0.2:10002/" />
</services>

I found this trick here.

We can now reach the storage emulator endpoints from a remote client. To do this, we need to set the DevelopmentStorageProxyUri parameter in the connection string, like so:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://192.168.0.2

In some circumstances, for instance when accessing the storage services from the Visual Studio Server Explorer, you cannot use the UseDevelopmentStorage parameter in the connection string. In that case, you need to format the connection string like this:

BlobEndpoint=http://192.168.0.2:10000/;QueueEndpoint=http://192.168.0.2:10001/;TableEndpoint=http://192.168.0.2:10002/;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
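To verify the setup from a remote client, a minimal C# sketch along these lines can be used, assuming the Azure Storage client library is referenced (the class name and the 192.168.0.2 address are just the example values used above):

using System;
using Microsoft.WindowsAzure.Storage;

class StorageEmulatorSmokeTest
{
    static void Main()
    {
        // Connection string pointing at the storage emulator on the test server.
        var connectionString =
            "UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://192.168.0.2";

        var account = CloudStorageAccount.Parse(connectionString);
        var blobClient = account.CreateCloudBlobClient();

        // Listing the containers is a simple way to confirm that the blob
        // endpoint is reachable from the remote client.
        foreach (var container in blobClient.ListContainers())
        {
            Console.WriteLine(container.Name);
        }
    }
}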

Open endpoints in firewall

If you run your test environment on a server flavor of Windows, you might have to open up the storage emulator TCP ports in the firewall:

netsh advfirewall firewall add rule name=storage_blob  dir=in action=allow protocol=tcp localport=10000
netsh advfirewall firewall add rule name=storage_queue dir=in action=allow protocol=tcp localport=10001
netsh advfirewall firewall add rule name=storage_table dir=in action=allow protocol=tcp localport=10002
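If you want a quick way to confirm from a remote client that the ports are actually open, a small sketch like this can help (the host address is an assumption based on the example above):

using System;
using System.Net.Sockets;

class EmulatorPortCheck
{
    static void Main()
    {
        // Example host from the endpoint configuration above; adjust to your test server.
        const string host = "192.168.0.2";

        foreach (var port in new[] { 10000, 10001, 10002 })
        {
            try
            {
                // TcpClient connects in the constructor; success means the port is open.
                using (new TcpClient(host, port))
                {
                    Console.WriteLine("Port {0} is reachable", port);
                }
            }
            catch (SocketException)
            {
                Console.WriteLine("Port {0} is blocked or not listening", port);
            }
        }
    }
}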

Create a package for emulator deployment

So, now that we have the test environment set up with the Azure emulators, let’s prepare our application for build and deployment. My example application consists of one Worker role:

Solution with Worker Role

One challenge here is that if you build the cloud project for the emulator, no real “package” is created. The files are laid out as a directory structure to be picked up locally by the emulator. The package built for the real McCoy is not compatible with the emulator. Also, in order to deploy the code using Octopus Deploy, we need to wrap it as a NuGet package. The OctoPack tool is the natural choice for such a task, but it does not support the cloud project type.

Console project as ‘wrapper’

To fix the problem of creating a NuGet package for deployment to the emulator, we create a console project to act as a “wrapper.” We have no interest in a console application per se, but only in the project as a vehicle to create a NuGet package. So, we add a console project to our solution:

Added wrapper project to solution

We have to ensure that the WorkerRole project is built before our wrapper project, so we check that the build order in the solution is correct:

Build order

Customizing the build

What we want is for the build to perform the following steps:

  1. Build WorkerRole dll
  2. Build Worker (cloud project) – prepare files for the compute emulator
  3. Build Worker.Wrapper – package compute emulator files and deployment script into a Nuget package

The first two steps are already covered by the existing setup. So what we need to do in the third step is copy the prepared files to the build output directory of the Wrapper project, and then have OctoPack pick them up from there.

To copy the files, we set up a custom build step in the Wrapper project:

<PropertyGroup>
  <BuildDependsOn>
    CopyCsxFiles;
    $(BuildDependsOn);
  </BuildDependsOn>
</PropertyGroup>
<PropertyGroup>
  <CsxDirectory>$(MSBuildProjectDirectory)\..\Worker\csx\$(Configuration)</CsxDirectory>
</PropertyGroup>
<Target Name="CopyCsxFiles">
  <CreateItem Include="$(CsxDirectory)\**\*.*">
    <Output TaskParameter="Include" ItemName="CsxFilesToCopy" />
  </CreateItem>
  <ItemGroup>
    <CsConfigFile Include="$(MSBuildProjectDirectory)\..\Worker\ServiceConfiguration.Cloud.cscfg" />
  </ItemGroup>
  <Copy SourceFiles="@(CsxFilesToCopy)" DestinationFiles="@(CsxFilesToCopy->'$(OutDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
  <Copy SourceFiles="@(CsConfigFile)" DestinationFolder="$(OutDir)" />
</Target>

We copy all the files in the csx directory in addition to the cloud project configuration file.

The next step is then to install OctoPack in the Wrapper project. This is done using the Package Manager Console:

Install-Package OctoPack -ProjectName Worker.Wrapper

We’re almost set. As I said earlier, the Wrapper project is a console project, but we are not really interested in a console application. So, in order to remove all unnecessary gunk from our deployment package, we specify a .nuspec file where we explicitly list the files we need in the package. The .nuspec file name is prefixed with the project name; in this case it is Worker.Wrapper.nuspec, and it contains:

<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <files>
    <file src="bin\release\roles\**\*.*" target="roles" />
    <file src="bin\release\Service*" target="." />
    <file src="PostDeploy.ps1" target="." />
    <file src="PreDeploy.ps1" target="." />
  </files>
</package>

We can now create a deployment package from MSBuild, and set up TeamCity to build this artifact:

msbuild WorkerExample.sln /p:RunOctoPack=true /p:Configuration=Release /p:PackageForComputeEmulator=true
dir Worker.Wrapper\bin\release\*.nupkg

Notice that we set the property PackageForComputeEmulator to true. If not, MSBuild will package for the real Azure compute service when building the Release configuration.

Deployment scripts

The final step is to deploy the application. Using Octopus Deploy, this is quite simple. Octopus has a convention where you can add PowerShell scripts to be executed before and after the deployment. Deploying a console app with Octopus Deploy consists of unpacking the package on the target server; in our situation, we then need to tell the emulator to pick up and deploy the application files.

In order to finish up the deployment step, we create one file PreDeploy.ps1 that is executed before the package is unzipped, and one file PostDeploy.ps1 to be run afterwards.

PreDeploy.ps1

In this step, we make sure that the emulators are running, and remove any existing deployment in the emulator:

$computeEmulator = "${env:ProgramFiles}\Microsoft SDKs\Azure\Emulator\csrun.exe"
$storageEmulator = "${env:ProgramFiles(x86)}\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe"

$ErrorActionPreference = 'continue'
Write-host "Starting the storage emulator, $storageEmulator start"
& $storageEmulator start 2>&1 | out-null

$ErrorActionPreference = 'stop'
Write-host "Checking if compute emulator is running"
& $computeEmulator /status 2>&1 | out-null
if (!$?) {
    Write-host "Compute emulator is not running. Starting..."
    & $computeEmulator /devfabric:start
} 

Write-host "Removing existing deployments, running $computeEmulator /removeall"
& $computeEmulator /removeall

PostDeploy.ps1

In this step, we do the deployment of the new application files to the emulator:

$here = split-path $script:MyInvocation.MyCommand.Path
$computeEmulator = "${env:ProgramFiles}\Microsoft SDKs\Azure\Emulator\csrun.exe"

$ErrorActionPreference = 'stop'
$configFile = join-path $here 'ServiceConfiguration.Cloud.cscfg'

Write-host "Deploying to the compute emulator $computeEmulator $here $configFile"
& $computeEmulator $here $configFile

And with that, we are done.

NLog: writing log entries to Azure Table Storage

In August last year, I blogged about how to get Log4Net log entries written to Azure Table Storage. In this article, I will show how the same thing can be easily achieved using NLog.

The concepts in NLog are very similar to those in Log4Net. More or less, replace the word “appender” in Log4Net lingo with “target”, and you’re game.

First, let’s create a class for log entries:

public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
    #region Table columns
    public string Message { get; set; }
    public string Level { get; set; }
    public string LoggerName { get; set; }
    public string RoleInstance { get; set; }
    public string DeploymentId { get; set; }
    public string StackTrace { get; set; }
    #endregion
}

Next, we need to create a class that represents the table storage service. It needs to inherit from TableServiceContext:

public class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials) : base(baseAddress, credentials) { }
    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }
    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}

Finally, as far as code is concerned, we create a custom NLog target that gets called when the NLog framework needs to log something:

[Target("AzureStorage")]
public class AzureStorageTarget : Target
{
    private LogServiceContext _ctx;
    private string _tableEndpoint;
    [Required]
    public string TableStorageConnectionStringName { get; set; }
    protected override void InitializeTarget()
    {
        base.InitializeTarget();
        var cloudStorageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(TableStorageConnectionStringName));
        _tableEndpoint = cloudStorageAccount.TableEndpoint.AbsoluteUri;
        CloudTableClient.CreateTablesFromModel(typeof(LogServiceContext), _tableEndpoint, cloudStorageAccount.Credentials);
        _ctx = new LogServiceContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
    }
    protected override void Write(LogEventInfo loggingEvent)
    {
        Action doWriteToLog = () =>
        {
            try
            {
                _ctx.Log(new LogEntry
                {
                    RoleInstance = RoleEnvironment.CurrentRoleInstance.Id,
                    DeploymentId = RoleEnvironment.DeploymentId,
                    Timestamp = loggingEvent.TimeStamp,
                    Message = loggingEvent.FormattedMessage,
                    Level = loggingEvent.Level.Name,
                    LoggerName = loggingEvent.LoggerName,
                    StackTrace = loggingEvent.StackTrace != null ? loggingEvent.StackTrace.ToString() : null
                });
            }
            catch (DataServiceRequestException e)
            {
                InternalLogger.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
                    GetType().AssemblyQualifiedName, _tableEndpoint, e.Message), e);
            }
        };
        doWriteToLog.BeginInvoke(null, null);
    }
}

So, to make it work, we need to register the target with the NLog framework. This is done in the NLog.config file:

<?xml version="1.0"?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="Demo.NLog.Azure" />
  </extensions>
  <targets>
    <target name="azure" type="AzureStorage" tableStorageConnectionStringName="Log4Net.ConnectionString" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="azure" />
  </rules>
</nlog>
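Logging from application code is then plain NLog usage; nothing changes because of the custom target. A minimal sketch (the Worker class is just an illustration):

using System;
using NLog;

public class Worker
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void DoWork()
    {
        // With the rule above, Info and higher is routed to the AzureStorage target,
        // i.e. ends up as rows in the LogEntries table.
        Logger.Info("Worker starting");
        try
        {
            // ... actual work ...
        }
        catch (Exception e)
        {
            Logger.ErrorException("Work failed", e);
        }
    }
}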

For information about how to set up your ServiceDefinition.csdef and ServiceConfiguration.cscfg files, see my previous post. You can find the code for this example on GitHub. Suggestions for improvement are very welcome.

Log4Net: writing log entries to Azure Table Storage

Earlier, I blogged about how one can leverage Azure Diagnostics to easily set up Log4Net for your application running in Azure, and how to customize the log entries for the Azure environment.

An alternative to this two-step process of first writing to the local disk and then transferring the logs to Azure blob storage is to write the log entries directly to Azure table storage (or, in principle, to Azure blob storage for that matter). This is what I will do here.

Each log entry that the application writes will be a single row in a table in Azure Table Storage. The log message itself and various metadata about it will be inserted into separate columns in the table. In order to achieve this, we first create a class that represents each entry in the table:

public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
    #region Table columns
    public string Message { get; set; }
    public string Level { get; set; }
    public string LoggerName { get; set; }
    public string Domain { get; set; }
    public string ThreadName { get; set; }
    public string Identity { get; set; }
    public string RoleInstance { get; set; }
    public string DeploymentId { get; set; }
    #endregion
}

Note that the PartitionKey is the current year and month, and the RowKey is a combination of the date, time and a GUID. This is done to make the querying of the log entries efficient for our purpose. So, the next thing we need to do is to create a class that represents the table storage service. It needs to inherit from TableServiceContext:

internal class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials) : base(baseAddress, credentials) {}
    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }
    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}
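Thanks to this key design, reading back the entries for a given month is a cheap, single-partition query. Here is a minimal sketch of such a query, assuming it runs in a role instance in the same project as LogServiceContext (which is internal) and that the 'Log4Net.ConnectionString' setting described further down is configured:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

class LogReader
{
    // month is on the form "yyyy-MM", matching the PartitionKey, e.g. "2012-08".
    internal static void ListEntriesForMonth(string month)
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("Log4Net.ConnectionString"));
        var ctx = new LogServiceContext(account.TableEndpoint.AbsoluteUri, account.Credentials);

        // The month-based PartitionKey keeps the query within a single partition,
        // and the time-based RowKey returns the entries in chronological order.
        var entries = ctx.LogEntries
            .Where(e => e.PartitionKey == month)
            .ToList();

        foreach (var entry in entries)
        {
            Console.WriteLine("{0} {1} {2}", entry.RowKey, entry.Level, entry.Message);
        }
    }
}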

The method that we will actually use in our code is the Log method, which takes a LogEntry instance and persists it to table storage. What we need next is to create a new appender for Log4Net which interacts with the table store to store the log entries:

public class AzureTableStorageAppender : AppenderSkeleton
{
    public string TableStorageConnectionStringName { get; set; }
    private LogServiceContext _ctx;
    private string _tableEndpoint;
    public override void ActivateOptions()
    {
        base.ActivateOptions();
        var cloudStorageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(TableStorageConnectionStringName));
        _tableEndpoint = cloudStorageAccount.TableEndpoint.AbsoluteUri;
        CloudTableClient.CreateTablesFromModel(typeof(LogServiceContext), _tableEndpoint, cloudStorageAccount.Credentials);
        _ctx = new LogServiceContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        Action doWriteToLog = () => {
            try
            {
                _ctx.Log(new LogEntry
                {
                    RoleInstance = RoleEnvironment.CurrentRoleInstance.Id,
                    DeploymentId = RoleEnvironment.DeploymentId,
                    Timestamp = loggingEvent.TimeStamp,
                    Message = loggingEvent.RenderedMessage,
                    Level = loggingEvent.Level.Name,
                    LoggerName = loggingEvent.LoggerName,
                    Domain = loggingEvent.Domain,
                    ThreadName = loggingEvent.ThreadName,
                    Identity = loggingEvent.Identity
                });
            }
            catch (DataServiceRequestException e)
            {
                ErrorHandler.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
                    GetType().AssemblyQualifiedName, _tableEndpoint, e.Message));
            }
        };
        doWriteToLog.BeginInvoke(null, null);
    }
}

In the code above, the actual writing to the log is done asynchronously in order to prevent the log write from slowing down the request handling. We are now done with all the coding. What is left is to use our new AzureTableStorageAppender. Here is the log4net.config:

<log4net>
  <appender name="AzureTableStoreAppender" type="Demo.Log4Net.Azure.AzureTableStorageAppender, Demo.Log4Net.Azure">
    <param name="TableStorageConnectionStringName" value="Log4Net.ConnectionString" />
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="AzureTableStoreAppender" />
  </root>
</log4net>
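Writing log entries from the application is then ordinary Log4Net usage; no code needs to change because of the new appender. A minimal sketch (the HomeController class is just an illustration):

using log4net;

public class HomeController
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(HomeController));

    public void Index()
    {
        // With the configuration above, this call ends up as a row
        // in the LogEntries table in Azure table storage.
        Log.Info("Index page requested");
    }
}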

Notice the TableStorageConnectionStringName attribute of the param element in the configuration. It corresponds to the property of the same name in the AzureTableStorageAppender. Furthermore, take notice that its value is 'Log4Net.ConnectionString', which corresponds to a custom configuration setting that we will add to the ServiceDefinition.csdef file:

<ServiceDefinition ...>
  <WebRole ...>
    <ConfigurationSettings>
      <Setting name="Log4Net.ConnectionString"/>
    </ConfigurationSettings>
    ...
  </WebRole>
</ServiceDefinition>

We also need to give the Log4Net.ConnectionString setting a value in the ServiceConfiguration.cscfg file. It should be a connection string that points to the storage account to use for storing the log entries. In this example, let’s use the development storage:

<ServiceConfiguration ...>
  <Role ...>
    <ConfigurationSettings>
      <Setting name="Log4Net.ConnectionString" value="UseDevelopmentStorage=true"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

…and that’s it. You should now find the log entries in the table storage:

You can find the code for this example on GitHub. Suggestions for improvement are very welcome.

Customizing Log4net log entries on Azure

I have earlier blogged about how to use Log4Net on Azure compute. With this solution, the log files from the various running instances get transferred to a common container on the Azure blob store. When I mine the log data, I usually merge all the log files together and run text utilities like grep or sed on them.

One challenge when merging the log files together is that we then lose the information about which instance the different log entries came from. In order to fix this, we can customize the log entries so that we keep this information.

The first thing we need to do is to create a new layout class that inherits from PatternLayout:

using log4net.Layout;
namespace Demo.Log4Net.Azure
{
    public class AzurePatternLayout : PatternLayout
    {
        public AzurePatternLayout()
        {
             // TODO: add converters
        }
    }
}

Next, we need to register this class in the log4net configuration:

<log4net>
  <appender ...>
    <layout type="Demo.Log4Net.Azure.AzurePatternLayout, Demo.Log4Net.Azure">
      <conversionPattern ... />
    </layout>
  </appender>
  ...
</log4net>

So now that we have a new PatternLayout class, we can add some logic to it for including Azure-specific information in the log entries. To do so, we first need a new converter class:

using System.IO;
using log4net.Util;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    internal class AzureInstanceIdPatternConverter : PatternConverter
    {
        protected override void Convert(TextWriter writer, object state)
        {
            writer.Write(RoleEnvironment.CurrentRoleInstance.Id);
        }
    }
}

Now, we register the new AzureInstanceIdPatternConverter in the constructor of AzurePatternLayout:

public class AzurePatternLayout : PatternLayout
{
    public AzurePatternLayout()
    {
        AddConverter("roleinstance", typeof(AzureInstanceIdPatternConverter));
    }
}

Then we can change the conversionPattern element of the Log4Net configuration to use the new Azure environment information:

<layout type="Demo.Log4Net.Azure.AzurePatternLayout, Demo.Log4Net.Azure">
  <conversionPattern value="%date [%roleinstance] [%thread] %-5level %logger [%appdomain] - %message%newline" />
</layout>

…which will make the log entries look something like this:

Custom log entries

(The screenshot shows instance ids generated by DevFabric, not an instance in the cloud)
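The same technique can be used for other Azure environment information. As an illustration (not part of the original example), a converter that exposes the deployment id could look like the sketch below; it uses the same using directives as AzureInstanceIdPatternConverter and is registered in the AzurePatternLayout constructor in the same way:

internal class AzureDeploymentIdPatternConverter : PatternConverter
{
    protected override void Convert(TextWriter writer, object state)
    {
        // Writes the deployment id of the current role environment.
        writer.Write(RoleEnvironment.DeploymentId);
    }
}

After registering it with AddConverter("deploymentid", typeof(AzureDeploymentIdPatternConverter)), a %deploymentid token can be used in the conversionPattern.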

Using Log4Net in Azure Compute

Log4Net is a popular logging framework, and if you have an existing application that you wish to move to Azure compute, you probably want to avoid rewriting your application to use another logging framework. Luckily, keeping Log4Net as your logging tool in Azure is certainly possible, but there are a few hoops you have to jump through to get there.

There are several ways to achieve this goal. I decided to rely as much as possible on a feature provided in Azure Compute that allows for automatically synchronizing certain directories on the instance’s local file system to Azure blob storage. Using this approach, only a few changes need to be made to the application, and indeed none of the existing code needs to be altered.

Baseline: an existing application which uses Log4Net

In order for this example to work, we need an application that we want to move to the cloud:

  1. Start off with File->New->Project... in Visual Studio and use the “ASP.NET Web Application” template
  2. Add Log4Net capabilities to the application. This can be done by adding a reference to Log4Net using NuGet, and then configuring it like Phil Haack has described here.

Then, run the web application locally in Visual Studio to assert that the logging works.

Enabling the application for Azure

Starting off with the simple ASP.NET web application we created in the previous section, do the following:

  1. Right-click on the solution in Visual Studio to select Add->New Project. Use the Windows Azure Project template, and do not add any roles in the dialog box initially.
  2. Set the newly created cloud project as the startup project in the solution.
  3. Right-click on the Roles-folder of the newly created Azure project and select Add->Web role project in solution... to add the web application project as an Azure Web role.
Adding an Azure Compute web role to an existing ASP.NET solution

Now, press F5 to run the application in the local Azure development environment (DevFabric) to see that it works (functionally, not logging-wise).

So, we are done with the prerequisites. Now to the interesting parts!

Setting log directory for Azure

The first issue we will grapple with is the fact that in Azure Compute, the application effectively runs in a sandbox with limited access to the file system, which means that the “standard” approach of logging to a file does not work. Basically, the Azure compute role only has access to a certain subdirectory of the file system, and the exact location needs to be retrieved by the application at runtime.

In order to retain the existing logging in the application, the problem of locating the path to the role’s designated area on the disk can be solved by subclassing one of the appenders that Log4Net provides out of the box. I chose the RollingFileAppender because it provides the ability to split the log into several files. This is beneficial from an operations perspective. Here’s what the custom appender looks like:

using System.Diagnostics;
using System.IO;
using log4net.Appender;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    public class AzureAppender : RollingFileAppender
    {
        public override string File
        {
            set
            {
                base.File = RoleEnvironment.GetLocalResource("Log4Net").RootPath + @"\"
                    + new FileInfo(value).Name + "_"
                    + Process.GetCurrentProcess().ProcessName;
            }
        }
    }
}

What happens here is that when the logging framework initializes and reads the configuration, it calls this property setter to set the log file name. This corresponds to the file element in the XML configuration for the appender:

<log4net>
  <appender>
    <param name="File" value="app.log" />
    ...
  </appender>
  ...
</log4net>

The application asks the role environment for the whereabouts of the local resource called “Log4Net”. This resource is a directory designated for containing our logs, and it needs to be declared in the ServiceDefinition.csdef file:

<ServiceDefinition>
  <WebRole name="WebRole1">
    <LocalResources>
        <LocalStorage name="Log4Net" sizeInMB="2048" cleanOnRoleRecycle="true"/> 
    </LocalResources>
  </WebRole>
</ServiceDefinition>

When we have the path of the local resource, it is used to construct an absolute path for the log file. Also note that the current process name is appended to the filename. This is done because if you run the application as a WebRole in “Full IIS” mode in Azure, the web application and the RoleEntryPoint code run in different processes. (If you look at blog entries on the Internet for Azure information, you should keep in mind that the “Full IIS” mode was introduced with the Azure SDK version 1.3 in late 2010, and information that predates this might not be valid for the current Azure version.) This means that if there are log entries in the RoleEntryPoint as well as in the rest of your application, two processes would potentially try to keep a write lock on the file at the same time. Therefore, we use one log file for each process. Note that this is not a relevant topic for Worker roles. For more on the execution model, take a look here.

So, now that the new custom appender is ready, we need to change the Log4Net configuration to use it. Basically, we change the appender type in the configuration section so that it looks like this:

<log4net>
  <appender name="AzureRollingLogFileAppender" type="Demo.Log4Net.Azure.AzureAppender, Demo.Log4Net.Azure">
    <param name="File" value="app.log" />
    <param name="AppendToFile" value="true" />
    <param name="RollingStyle" value="Date" />
    <param name="StaticLogFileName" value="false" />
    <param name="DatePattern" value=".yyyy-MM-dd.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger [%appdomain] - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="AzureRollingLogFileAppender" />
  </root>
</log4net>

Now it’s time to run the application to see if the logging works. First, deploy to DevFabric, and then open the Windows Azure Compute Emulator. Right-click on the running instance, and click on Open local store....

Open the local store for a role instance running in DevFabric

Then navigate to the ‘Log4Net’ directory to find the log files:

Log files in local store in DevFabric

Persisting logs to Azure blob storage

The next issue we need to handle is the fact that the local file system in an Azure role instance is not persistent. Local data will be lost when the application is redeployed (and also when the role recycles, if you have configured it to do so). Furthermore, the only way to access the local file system is using a Remote Desktop Connection. In theory, you could probably also make the directory a shared drive accessible over the Internet, but you probably would not want to do that. Besides, it would be a headache if you have a lot of instances.

So, the solution that Azure offers for this is a scheduled synchronization of selected local resources (directories) to the Azure blob store. What we need to do is add the following code to the descendant of RoleEntryPoint:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var diagnosticsConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
        diagnosticsConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        diagnosticsConfig.Directories.DataSources.Add(
                new DirectoryConfiguration
                {
                    Path = RoleEnvironment.GetLocalResource("Log4Net").RootPath,
                    DirectoryQuotaInMB = 2048,
                    Container = "wad-log4net"
                }
        );
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticsConfig);
        return base.OnStart();
    }
}

…and that’s it. Now you can run the application and observe that a container called ‘wad-log4net’ is created in your blob storage account, containing the logs:

Logs in Azure blob store

(I use the AzureXplorer extension for Visual Studio)

The solution shown here targeted an ASP.NET application running as a WebRole, but the setup works equally well for Worker roles.

Azure DevFabric: clean out old deployments

During development of an Azure application, I noticed that my disk kept running full. The first time it happened, I thought nothing of it and just ran Disk Cleanup to clear out some obsolete files. Problem solved. At least, so I thought. Some hours later, I got a disk full warning once again. It turned out that it was caused by old local DevFabric deployments that used up disk space. Even though the deployments had been shut down, the deployment files lingered on. This is what the %USERPROFILE%\AppData directory looked like:

What I had to do to fix this, was to run the following command:

csrun /devfabric:clean

Then, the old deployment files were deleted: