Category Archives: Cloud

My PowerShell video course is available!

For the last few months, I have been working on my PowerShell 5 recipes video course. It is now finally published!

Why?

Why make a video course? – you might ask. I come from a system development background, having worked with many development platforms in my time (Java, .NET, Lotus Notes, JavaScript, etc.). Back in 2011, after resisting for a while, I decided to learn PowerShell. So I dived in, head first. Why? Because I realized that in order to get a better delivery process for software, we need a broader scope than just the development per se. We need a holistic view of the environment the applications will run in, and we need to automate the deployment. This has become more and more evident with the coming of DevOps and is key to continuous delivery.

I am first and foremost a PowerShell user, and I do not consider myself an expert. Maybe a little more advanced than the average user, though. When the opportunity came to create a video course, the temptation to do something new to me was too hard to resist. I hope I have been able to keep the view of the user, resulting in a pragmatic, hands-on course.

For whom?

The course is meant to be a smorgasbord of PowerShell topics relevant to the daily work of developers as well as IT professionals. For people new to PowerShell, it gets you started by installing and/or upgrading to the latest versions (even on Linux), customizing, and setting up the environment to your preferences. For developers, it gives hands-on guidance for setting up automated build and deployment. For DevOps (and IT), it guides you through provisioning Azure resources. For more advanced users, it offers recipes for developing reusable scripts, handling modules, and publishing your scripts to repositories.

Enjoy!

Running Azure emulators in on-site test environment

The Azure compute and storage emulators enable testing and debugging of Azure cloud services on the developer’s local computer. On my current project, I ran into the need to install and run the emulators in the test environment. I will not go into exactly why this was needed, but it could be a possible interim solution for trying out the technology before your customer makes the decision to establish an environment in Azure. There were quite a few hurdles along the way, and I will try to summarize them all in this post.

The basic setup that I will explain in this blog post is to build a package on the Team City build server for deployment to the compute emulator, and to do the deployment using Octopus Deploy:

+-------------+   +--------------+   +---------------+   +---------+----------+
|             |   |              |   |               |   |  Test environment  |
| Dev machine +---> Build server +---> Deploy server +---> (Compute emulator) |
|             |   | (Team City)  |   | (Octopus)     |   | (Storage emulator) |
|             |   |              |   |               |   |                    |
+-------------+   +--------------+   +---------------+   +--------------------+

Preparing the test environment

This step consists of preparing the test environment for running the emulators.

Installing the SDK

First of all, let’s install the necessary software. I used what was, at the time of writing (September 2014), the latest version of the Microsoft Azure SDK, version 2.4. It is available in the Microsoft Web Platform Installer (http://www.microsoft.com/web/downloads/platform.aspx):
Azure SDK 2.4 in Web Platform installer
Take note of the installation paths for the emulators. You’ll need them later on:

Compute emulator: C:\Program Files\Microsoft SDKs\Azure\Emulator
Storage emulator: C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator

Create a user for the emulator

The first problem I ran into was concerning which user should run the deployment scripts. In a development setting, this is the currently logged-in user on the computer, but in a test environment this is not the case.

I decided to create a domain-managed service account named “octopussy” for this. (You know, that James Bond movie.) Then, I made sure that the user was a local admin on the test machine by running

net localgroup administrators domain1\octopussy /add

In order for the Octopus Deploy tentacle to be able to run the deployment, the tentacle service must run as the aforementioned user account:
Setting user for Octopus Tentacle service

Creating windows services for storage and compute emulators

In a normal development situation, the emulators run as the logged-in user. If you remotely log in to a computer and start the emulators, they will shut down when you log off. In a test environment, we need the emulators to keep running. Therefore, you should set them up to run as services. There are several ways to do this, and I chose to use the Non-Sucking Service Manager (NSSM).

First, create command files that start the emulators and never quit:

storage_service.cmd:

@echo off
"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe" start
pause

devfabric_service.cmd:

@echo off
"C:\Program Files\Microsoft SDKs\Azure\Emulator\csrun.exe" /devfabric
pause

Once the command files are in place, define and start the services:

.\nssm.exe install az_storage C:\app\storage_service.cmd
.\nssm.exe set az_storage ObjectName 'domain1\octopussy' 'PWD'
.\nssm.exe set az_storage Start service_auto_start
start-service az_storage
.\nssm.exe install az_fabric C:\app\devfabric_service.cmd
.\nssm.exe set az_fabric ObjectName 'domain1\octopussy' 'PWD'
.\nssm.exe set az_fabric Start service_auto_start
start-service az_fabric

Change the storage endpoints

In a test environment, you would often like to access the storage emulator from remote machines, for instance for running integration tests. Again, since the emulator is geared towards local development, it is by default only accessible on localhost. To fix this, you need to edit the file

C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe.config

By default, the settings for the endpoints are like so:

<services>
    <service name="Blob"  url="http://127.0.0.1:10000/" />
    <service name="Queue" url="http://127.0.0.1:10001/" />
    <service name="Table" url="http://127.0.0.1:10002/" />
</services>

The 127.0.0.1 host references should be changed to the host’s IP address or NetBIOS name. The IP address can easily be found using ipconfig. For example:

<services>
    <service name="Blob"  url="http://192.168.0.2:10000/" />
    <service name="Queue" url="http://192.168.0.2:10001/" />
    <service name="Table" url="http://192.168.0.2:10002/" />
</services>

I found this trick here.

We can now reach the storage emulator endpoints from a remote client. To do this, we need to set the DevelopmentStorageProxyUri parameter in the connection string, like so:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://192.168.0.2

In some circumstances, for instance when accessing the storage services using the Visual Studio Server Explorer, you cannot use the UseDevelopmentStorage parameter in the connection string. Then you need to format the connection string like this:

BlobEndpoint=http://192.168.0.2:10000/;QueueEndpoint=http://192.168.0.2:10001/;TableEndpoint=http://192.168.0.2:10002/;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
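
To sanity-check that the endpoints are reachable from another machine, a small client program can help. The following is a minimal sketch, assuming the WindowsAzure.Storage client library of that SDK generation; it uses the DevelopmentStorageProxyUri form shown above, and the container name is made up:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class EmulatorSmokeTest
{
    static void Main()
    {
        // Development storage connection string, proxied to the test server's address
        var account = CloudStorageAccount.Parse(
            "UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://192.168.0.2");

        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("smoketest");

        // If this succeeds, the blob endpoint of the emulator is reachable remotely
        container.CreateIfNotExists();
        Console.WriteLine("Blob endpoint in use: " + account.BlobEndpoint);
    }
}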

Open endpoints in firewall

If you run your test environment on a server flavor of Windows, you might have to open up the storage emulator TCP ports in the firewall:

netsh advfirewall firewall add rule name=storage_blob  dir=in action=allow protocol=tcp localport=10000
netsh advfirewall firewall add rule name=storage_queue dir=in action=allow protocol=tcp localport=10001
netsh advfirewall firewall add rule name=storage_table dir=in action=allow protocol=tcp localport=10002

Create a package for emulator deployment

So, now that we have the test environment set up with the Azure emulators, let’s prepare our application for build and deployment. My example application consists of one Worker role:

Solution with Worker Role

One challenge here is that if you build the cloud project for the emulator, no real “package” is created. The files are laid out as a directory structure to be picked up locally by the emulator. The package built for the real McCoy is not compatible with the emulator. Also, in order to deploy the code using Octopus Deploy, we need to wrap it as a NuGet package. The OctoPack tool is the natural choice for such a task, but it does not support the cloud project type.

Console project as ‘wrapper’

To fix the problem of creating a NuGet package for deployment to the emulator, we create a console project to act as a “wrapper.” We have no interest in a console application per se, but only in the project as a vehicle for creating a NuGet package. So, we add a console project to our solution:

Added wrapper project to solution

We have to make sure that the WorkerRole project is built before our wrapper project, so we check that the build order in the solution is correct:

Build order

Customizing the build

What we want is for the build to perform the following steps:

  1. Build WorkerRole dll
  2. Build Worker (cloud project) – prepare files for the compute emulator
  3. Build Worker.Wrapper – package compute emulator files and deployment script into a Nuget package

The first two steps are already covered by the existing setup. What we need to do in the third step is to copy the prepared files to the build output directory of the Wrapper project, and then have OctoPack pick them up from there.

To copy the files, we set up a custom build step in the Wrapper project:

  <PropertyGroup>
    <BuildDependsOn>
      CopyCsxFiles;
      $(BuildDependsOn);
    </BuildDependsOn>
  </PropertyGroup>
  <PropertyGroup>
    <CsxDirectory>$(MSBuildProjectDirectory)\..\Worker\csx\$(Configuration)</CsxDirectory>
  </PropertyGroup>
  <Target Name="CopyCsxFiles">
    <CreateItem Include="$(CsxDirectory)\**\*.*">
      <Output TaskParameter="Include" ItemName="CsxFilesToCopy" />
    </CreateItem>
    <ItemGroup>
      <CsConfigFile Include="$(MSBuildProjectDirectory)\..\Worker\ServiceConfiguration.Cloud.cscfg" />
    </ItemGroup>
    <Copy SourceFiles="@(CsxFilesToCopy)" DestinationFiles="@(CsxFilesToCopy->'$(OutDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
    <Copy SourceFiles="@(CsConfigFile)" DestinationFolder="$(OutDir)" />
  </Target>

We copy all the files in the csx directory in addition to the cloud project configuration file.

The next step is then to install OctoPack in the Wrapper project. This is done using the package manager console:

Install-Package OctoPack -ProjectName Worker.Wrapper

We’re almost set. Like I said earlier, the Wrapper project is a console project, but we are not really interested in a console application. So, in order to remove all unnecessary gunk from our deployment package, we specify a .nuspec file where we explicitly list the files we need in the package. The .nuspec file is named after the project, in this case Worker.Wrapper.nuspec, and it contains:

<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <files>
    <file src="bin\release\roles\**\*.*" target="roles" />
    <file src="bin\release\Service*" target="." />
    <file src="PostDeploy.ps1" target="." />
    <file src="PreDeploy.ps1" target="." />
  </files>
</package>

We can now create a deployment package from MSBuild, and set up Team City to build this artifact:

msbuild WorkerExample.sln /p:RunOctoPack=true /p:Configuration=Release /p:PackageForComputeEmulator=true
dir Worker.Wrapper\bin\release\*.nupkg

Notice that we set the property PackageForComputeEmulator to true. If we don’t, MSBuild will package for the real Azure compute service in the Release configuration.

Deployment scripts

The final step is to deploy the application. Using Octopus Deploy, this is quite simple. Octopus has a convention where you can add PowerShell scripts to be executed before and after the deployment. The deployment of a console app using Octopus Deploy consists of unpacking the package on the target server. In our situation we need to tell the emulator to pick up and deploy the application files afterwards.

In order to finish up the deployment step, we create one file PreDeploy.ps1 that is executed before the package is unzipped, and one file PostDeploy.ps1 to be run afterwards.

PreDeploy.ps1

In this step, we make sure that the emulators are running, and remove any existing deployment in the emulator:

$computeEmulator = "${env:ProgramFiles}\Microsoft SDKs\Azure\Emulator\csrun.exe"
$storageEmulator = "${env:ProgramFiles(x86)}\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe"

$ErrorActionPreference = 'continue'
Write-host "Starting the storage emulator, $storageEmulator start"
& $storageEmulator start 2>&1 | out-null

$ErrorActionPreference = 'stop'
Write-host "Checking if compute emulator is running"
& $computeEmulator /status 2>&1 | out-null
if (!$?) {
    Write-host "Compute emulator is not running. Starting..."
    & $computeEmulator /devfabric:start
} 

Write-host "Removing existing deployments, running $computeEmulator /removeall"
& $computeEmulator /removeall

PostDeploy.ps1

In this step, we do the deployment of the new application files to the emulator:

$here = split-path $script:MyInvocation.MyCommand.Path
$computeEmulator = "${env:ProgramFiles}\Microsoft SDKs\Azure\Emulator\csrun.exe"

$ErrorActionPreference = 'stop'
$configFile = join-path $here 'ServiceConfiguration.Cloud.cscfg'

Write-host "Deploying to the compute emulator $computeEmulator $here $configFile"
& $computeEmulator $here $configFile

And with that, we are done.

Moving my blog to Azure Web Sites

This blog used to run on a dedicated WordPress installation hosted by domeneshop.no. A couple of weeks ago, I decided to move it to Azure Web Sites. There were many reasons for this, all of them revolving around my wanting to investigate the technology and the offerings in Microsoft Azure in general, and in Web Sites in particular. Here’s how it went.

Creating the new site

The first step was obviously to create the new Web site in Azure. This turned out to be very simple. I used the first part of Dave Bost’s blog series on Moving a WordPress Blog to Windows Azure for guidance. It all went easy peasy.

Moving the content

When WordPress had been installed, the next step was to move the content. This step was a bit troublesome, and I had to try a few times before making it work. I tried Dave’s approach in Moving a WordPress Blog to Windows Azure – Part 2, but it did not work out quite as well for me. First of all, the “Portable phpMyAdmin” plugin had evolved into “Adminer”, and a few of the features seem to have changed along the way.

I ended up first copying the content of the wp-content/uploads directory using FTP. Using FileZilla as a client, all I needed to do was to reset my deployment credentials for the web site in Azure. I was then able to log in using FTP. (I was not able to make SFTP work, though.)

I then reinstalled the themes and plugins on the new site manually. After all, this was a good opportunity to clean up anyway, leaving dated, unused plugins and themes behind.

Finally, I moved the blog posts using the built-in export and import functionality in WordPress.

Changing URLs

My intention was to host the www.kongsli.net address using my newly installed site at http://musings.azurewebsites.net. One tiny detail regarding URLs was that on my old site, the WordPress installation was in the subdirectory /nblog, while on my new site, the WordPress installation was in the root directory. So I needed to forward requests for /nblog/* to /*. My first idea was to use IIS rewrites for this, but according to this Stack Overflow question, the module is not installed in Azure web sites. Instead, I went on to create an extremely simple ASP.NET MVC app to do the redirection. (Yes, I could probably have pulled this off using Web API as well, but MVC is more familiar to me.)

Here is the essential code:

public class HomeController : Controller
{
    public ActionResult Index(string id)
    {
        var queryString = Request.QueryString.ToString();
        var location = "/" + id + (string.IsNullOrEmpty(queryString) ? string.Empty : "?" + queryString);
        return new RedirectResult(location, true);
    }
}
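
The post does not show the MVC routing, but for the id parameter to capture the whole remaining path (for example /nblog/2012/05/some-post), a catch-all route is needed. Here is a minimal sketch, assuming a standard RouteConfig class; the route name is arbitrary:

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // {*id} is a catch-all parameter: since the app is mounted under /nblog,
        // a request to /nblog/2012/05/some-post yields id = "2012/05/some-post",
        // which HomeController.Index turns into a redirect to /2012/05/some-post.
        routes.MapRoute(
            name: "CatchAll",
            url: "{*id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}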

The trick, then, is to install this application in IIS under /nblog so that it handles all requests to /nblog/*. To do this, I needed to use the FTP method to publish the app to Azure:

Publish using FTP

Notice that the site path is set to site/nblog-redirector, which places it “beside” the WordPress installation at site/wwwroot on the server. Then, the application can be set up in the Azure Management portal:

Applications and virtual directories

As you can see from the picture above, I also had to take care of some other content besides my blog, which I could FTP to the new site and register as virtual directories in IIS. Pretty nifty.

Using a custom domain

I wanted to host www.kongsli.net on my new web site in Azure. There were essentially two steps needed for this, only one of which was apparent to me at the time. The apparent one was that I needed a DNS record that pointed www.kongsli.net to the web site. The existing record was an A record that pointed www.kongsli.net to my current hosting provider’s infrastructure. Because of the scalable, high-availability nature of Azure Web Sites, this needed to be replaced by a CNAME record pointing www.kongsli.net to musings.azurewebsites.net. This was easy to set up at my current DNS provider:

dns_records_domeneshop

Once set up, all there was to do was to wait for the DNS change to propagate. At least, so I thought. The final piece of the puzzle was that the custom domain name needed to be registered on the Azure web site. There might be more to it, but I guess that Azure Web Sites uses host headers to distinguish requests in shared hosting scenarios. I also found that in order to add custom domain names, I needed to change my hosting plan from “Free” to at least “Shared”. When I did, I could register my domain:

Setting up custom domain names in web sites

And voilà.

Hello, Azure Scheduler

The Scheduler is one of the new kids on the block in Azure Land. With Scheduler, you can set up triggers for some sort of event in your system. It is currently in preview. I took some time to get to know the basics of it, and I wrote up a three-part series of articles. You can find the articles on my company’s blog.

NLog: writing log entries to Azure Table Storage

In August last year, I blogged about how to get Log4Net log entries written to Azure Table Storage. In this article, I will show how the same thing can be easily achieved using NLog.

The concepts in NLog are very similar to those in Log4Net. More or less, replace the word “appender” in Log4Net lingo with “target”, and you’re game.
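
The calling code looks almost the same, too. Here is a minimal usage sketch of the NLog side (the OrderProcessor class and its method are made up):

using NLog;

public class OrderProcessor
{
    // NLog's counterpart to log4net's LogManager.GetLogger(...)
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void Process(int orderId)
    {
        Logger.Info("Processing order {0}", orderId);
        try
        {
            // ... actual work goes here ...
        }
        catch (System.Exception ex)
        {
            Logger.Error("Order {0} failed: {1}", orderId, ex.Message);
            throw;
        }
    }
}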

First, let’s create a class for log entries:

public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
    #region Table columns
    public string Message { get; set; }
    public string Level { get; set; }
    public string LoggerName { get; set; }
    public string RoleInstance { get; set; }
    public string DeploymentId { get; set; }
    public string StackTrace { get; set; }
    #endregion
}

Next, we need to create a class that represents the table storage service. It needs to inherit from TableServiceContext:

public class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials) : base(baseAddress, credentials) { }
    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }
    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}

Finally, as far as code is concerned, we need a custom NLog target that gets called when the NLog framework needs to log something:

[Target("AzureStorage")]
public class AzureStorageTarget : Target
{
    private LogServiceContext _ctx;
    private string _tableEndpoint;
    [Required]
    public string TableStorageConnectionStringName { get; set; }
    protected override void InitializeTarget()
    {
        base.InitializeTarget();
        var cloudStorageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(TableStorageConnectionStringName));
        _tableEndpoint = cloudStorageAccount.TableEndpoint.AbsoluteUri;
        CloudTableClient.CreateTablesFromModel(typeof(LogServiceContext), _tableEndpoint, cloudStorageAccount.Credentials);
        _ctx = new LogServiceContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
    }
    protected override void Write(LogEventInfo loggingEvent)
    {
        Action doWriteToLog = () =>
        {
            try
            {
                _ctx.Log(new LogEntry
                {
                    RoleInstance = RoleEnvironment.CurrentRoleInstance.Id,
                    DeploymentId = RoleEnvironment.DeploymentId,
                    Timestamp = loggingEvent.TimeStamp,
                    Message = loggingEvent.FormattedMessage,
                    Level = loggingEvent.Level.Name,
                    LoggerName = loggingEvent.LoggerName,
                    StackTrace = loggingEvent.StackTrace != null ? loggingEvent.StackTrace.ToString() : null
                });
            }
            catch (DataServiceRequestException e)
            {
                InternalLogger.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
                    GetType().AssemblyQualifiedName, _tableEndpoint, e.Message), e);
            }
        };
        doWriteToLog.BeginInvoke(null, null);
    }
}

So, to make it work, we need to register the target with the NLog framework. This is done in the NLog.config file:

<?xml version="1.0"?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="Demo.NLog.Azure" />
  </extensions>
  <targets>
    <target name="azure" type="AzureStorage" tableStorageConnectionStringName="Log4Net.ConenctionString" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="azure" />
  </rules>
</nlog>

For information about how to set up your ServiceDefinition.csdef and ServiceConfiguration.cscfg files, see my previous post. You can find the code for this example on GitHub. Suggestions for improvement are very welcome.

Log4Net: writing log entries to Azure Table Storage

Earlier, I blogged about how one can leverage Azure Diagnostics to easily set up Log4Net for your application running in Azure, and how to customize the log entries for the Azure environment.

An alternative to this two-step process of first writing to the local disk and then transferring the logs to Azure blob storage is to write the log entries directly to Azure table storage (or, in principle, to Azure blob storage for that matter). This is what I will do here.

Each log entry that the application writes will be a single row in a table in Azure Table Storage. The log message itself and various metadata about it will be inserted into separate columns in the table. In order to achieve this, we first create a class that represents each entry in the table:

public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
    #region Table columns
    public string Message { get; set; }
    public string Level { get; set; }
    public string LoggerName { get; set; }
    public string Domain { get; set; }
    public string ThreadName { get; set; }
    public string Identity { get; set; }
    public string RoleInstance { get; set; }
    public string DeploymentId { get; set; }
    #endregion
}

Note that the PartitionKey is the current year and month, and the RowKey is a combination of the date, time and a GUID. This is done to make the querying of the log entries efficient for our purpose. So, the next thing we need to do is to create a class that represents the table storage service. It needs to inherit from TableServiceContext:

internal class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials) : base(baseAddress, credentials) {}
    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }
    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}
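
The month-based PartitionKey pays off when reading the logs back: all entries for a given month live in a single partition, so a query filtered on PartitionKey is cheap. Here is a minimal sketch, assuming the classes above and the LINQ support of the StorageClient-era table service context (the LogReader class is made up):

using System.Collections.Generic;
using System.Linq;

internal static class LogReader
{
    internal static List<LogEntry> GetEntriesForMonth(LogServiceContext ctx, int year, int month)
    {
        // Matches the "yyyy-MM" format used for PartitionKey in the LogEntry constructor
        string partitionKey = string.Format("{0:D4}-{1:D2}", year, month);

        // Note: a real implementation would also handle continuation tokens for large result sets
        return ctx.LogEntries
                  .Where(e => e.PartitionKey == partitionKey)
                  .ToList();
    }
}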

The method that we will actually use in our code is the Log method, which takes a LogEntry instance and persists it to table storage. What we need next is to create a new appender for Log4Net that interacts with the table store to store the log entries:

public class AzureTableStorageAppender : AppenderSkeleton
{
    public string TableStorageConnectionStringName { get; set; }
    private LogServiceContext _ctx;
    private string _tableEndpoint;
    public override void ActivateOptions()
    {
        base.ActivateOptions();
        var cloudStorageAccount =
            CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(TableStorageConnectionStringName));
        _tableEndpoint = cloudStorageAccount.TableEndpoint.AbsoluteUri;
        CloudTableClient.CreateTablesFromModel(typeof(LogServiceContext), _tableEndpoint, cloudStorageAccount.Credentials);
        _ctx = new LogServiceContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        Action doWriteToLog = () => {
            try
            {
                _ctx.Log(new LogEntry
                {
                    RoleInstance = RoleEnvironment.CurrentRoleInstance.Id,
                    DeploymentId = RoleEnvironment.DeploymentId,
                    Timestamp = loggingEvent.TimeStamp,
                    Message = loggingEvent.RenderedMessage,
                    Level = loggingEvent.Level.Name,
                    LoggerName = loggingEvent.LoggerName,
                    Domain = loggingEvent.Domain,
                    ThreadName = loggingEvent.ThreadName,
                    Identity = loggingEvent.Identity
                });
            }
            catch (DataServiceRequestException e)
            {
                ErrorHandler.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
                    GetType().AssemblyQualifiedName, _tableEndpoint, e.Message));
            }
        };
        doWriteToLog.BeginInvoke(null, null);
    }
}

In the code above, the actual writing to the log is done asynchronously in order to prevent the log write from slowing down the request handling. We are now done with all the coding. What is left is to use our new AzureTableStorageAppender. Here is the log4net.config:

<log4net>
  <appender name="AzureTableStoreAppender" type="Demo.Log4Net.Azure.AzureTableStorageAppender, Demo.Log4Net.Azure">
    <param name="TableStorageConnectionStringName" value="Log4Net.ConenctionString" />
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="AzureTableStoreAppender" />
  </root>
</log4net>

Notice the TableStorageConnectionStringName attribute of the param element in the configuration. It corresponds to the property of the same name in the AzureTableStorageAppender. Furthermore, take notice that its value is 'Log4Net.ConnectionString', which corresponds to a custom configuration setting that we will add to the ServiceDefinition.csdef file:

<ServiceDefinition ...>
  <WebRole ...>
    <ConfigurationSettings>
      <Setting name="Log4Net.ConenctionString"/>
    </ConfigurationSettings>
    ...
  </WebRole>
</ServiceDefinition>

We also need to give the Log4Net.ConnectionString setting a value in the ServiceConfiguration.cscfg file. It should be a connection string that points to the storage account to use for storing the log entries. In this example, let’s use the development storage:

<ServiceConfiguration ...>
  <Role ...>
    <ConfigurationSettings>
      <Setting name="Log4Net.ConenctionString" value="UseDevelopmentStorage=true"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

…and that’s it. You should now find the log entries in the table storage:

You can find the code for this example on GitHub. Suggestions for improvement are very welcome.

Customizing Log4net log entries on Azure

I have earlier blogged about how to use Log4Net in Azure compute. With this solution, the log files from the various running instances get transferred to a common container in Azure blob storage. When I mine the log data, I usually merge all the log files together and run text utilities like grep or sed on them.

One challenge when merging the log files together is that we then lose the information about which instance the different log entries came from. In order to fix this, we can customize the log entries so that we keep this information.

The first thing we need to do is to create a new layout class that inherits from PatternLayout:

using log4net.Layout;
namespace Demo.Log4Net.Azure
{
    public class AzurePatternLayout : PatternLayout
    {
        public AzurePatternLayout()
        {
             // TODO: add converters
        }
    }
}

Next, we need to register this class in the log4net configuration:

<log4net>
  <appender ...>
    <layout type="Demo.Log4Net.Azure.AzurePatternLayout, Demo.Log4Net.Azure">
      <conversionPattern ... />
    </layout>
  </appender>
  ...
</log4net>

Now that we have a new PatternLayout class, we can add some logic to it for including Azure-specific information in the log entries. To do so, we first need a new converter class:

using System.IO;
using log4net.Util;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    internal class AzureInstanceIdPatternConverter : PatternConverter
    {
        protected override void Convert(TextWriter writer, object state)
        {
            writer.Write(RoleEnvironment.CurrentRoleInstance.Id);
        }
    }
}

Now, we register the new AzureInstanceIdPatternConverter in the constructor of AzurePatternLayout:

public class AzurePatternLayout : PatternLayout
{
    public AzurePatternLayout()
    {
        AddConverter("roleinstance", typeof(AzureInstanceIdPatternConverter));
    }
}

Then we can change the conversionPattern element of the Log4Net configuration to use the new Azure environment information:

<layout type="Demo.Log4Net.Azure.AzurePatternLayout, Demo.Log4Net.Azure">
  <conversionPattern value="%date [%roleinstance] [%thread] %-5level %logger [%appdomain] - %message%newline" />
</layout>

…which will make the log entries look something like this:

Custom log entries

(The screenshot shows instance ids generated by DevFabric, not an instance in the cloud)

Using Log4Net in Azure Compute

Log4Net is a popular logging framework, and if you have an existing application that you wish to move to Azure compute, you probably want to avoid rewriting your application to use another logging framework. Luckily, keeping Log4Net as your logging tool in Azure is certainly possible, but there are a few hoops you have to jump through to get there.

There are several ways to achieve this goal. I decided to rely as much as possible on a feature provided in Azure compute that allows for automatically synchronizing certain directories on the instance’s local file system to Azure blob storage. Using this approach, only a very few changes need to be made in the application, and indeed none of the existing code needs to be altered.

Baseline: an existing application which uses Log4Net

In order for this example to work, we need an application that we want to move to the cloud:

  1. Start off with File->New->Project... in Visual Studio and use the “ASP.NET Web Application” template
  2. Add Log4Net capabilities to the application. This can be done by adding a reference to Log4Net using NuGet, and then configuring it like Phil Haack has described here (see the sketch right after this list).
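
For step 2, the wiring typically boils down to an assembly attribute plus a logger field wherever you want to log. Here is a minimal sketch, assuming a separate log4net.config file; the GreetingService class is made up, and the linked post may wire things up slightly differently:

using log4net;

// Tell log4net to read (and watch) a separate log4net.config file
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]

namespace Demo.Log4Net.Azure
{
    public class GreetingService
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(GreetingService));

        public string Greet(string name)
        {
            Log.InfoFormat("Greeting requested for {0}", name);
            return "Hello, " + name;
        }
    }
}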

Then, run the web application locally in Visual Studio to assert that the logging works.

Enabling the application for Azure

Starting off with the simple ASP.NET web application we created in the previous section, do the following:

  1. Right-click on the solution in Visual Studio to select Add->New Project. Use the Windows Azure Project template, and do not add any roles in the dialog box initially.
  2. Set the newly created cloud project as the startup project in the solution.
  3. Right-click on the Roles-folder of the newly created Azure project and select Add->Web role project in solution... to add the web application project as an Azure Web role.
Adding an Azure Compute web role to an existing ASP.NET solution

Now, press F5 to run the application in the local Azure development environment (DevFabric) to see that it works (functionally, not logging-wise)

So, we are done with the prerequisites. Now to the interesting parts!

Setting log directory for Azure

The first issue we will grapple with is the fact that in Azure compute, the application effectively runs in a sandbox with limited access to the file system, which means that the “standard” approach of logging to a file does not work. Basically, the Azure compute role only has access to a certain subdirectory of the file system, and the exact location needs to be retrieved by the application at runtime.

In order to retain the existing logging in the application, locating the path to the role’s designated area on disk can be handled by subclassing one of the appenders that Log4Net provides out of the box. I chose the RollingFileAppender because it provides the ability to split the log into several files, which is beneficial from an operations perspective. Here’s what the custom appender looks like:

using System.Diagnostics;
using System.IO;
using log4net.Appender;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace Demo.Log4Net.Azure
{
    public class AzureAppender : RollingFileAppender
    {
        public override string File
        {
            set
            {
                base.File = RoleEnvironment.GetLocalResource("Log4Net").RootPath + @"\"
                    + new FileInfo(value).Name + "_"
                    + Process.GetCurrentProcess().ProcessName;
            }
        }
    }
}

What happens here is that when the logging framework initializes and reads the configuration, it calls our property setter to set the log file name. This corresponds to the File parameter in the XML configuration for the appender:

<log4net>
  <appender>
    <param name="File" value="app.log" />
    ...
  </appender>
  ...
</log4net>

In the setter, the application asks the role environment for the whereabouts of the local resource called “Log4Net”. This resource is a directory designated for containing our logs, and it needs to be declared in the ServiceDefinition.csdef file:

<ServiceDefinition>
  <WebRole name="WebRole1">
    <LocalResources>
        <LocalStorage name="Log4Net" sizeInMB="2048" cleanOnRoleRecycle="true"/> 
    </LocalResources>
  </WebRole>
</ServiceDefinition>

When we have the path of the local resource, it is used to construct an absolute path for the log file. Also note that the current process name is appended to the filename. This is done because if you run the application as a WebRole in “Full IIS” mode in Azure, the web application and the RoleEntryPoint code run in different processes. (If you look at blog entries on the Internet for Azure information, keep in mind that the “Full IIS” mode was introduced with the Azure SDK version 1.3 in late 2010, and information that predates this might not be valid for the current Azure version.) This means that if there are log entries in the RoleEntryPoint as well as in the rest of your application, two processes would potentially try to keep a write lock on the file at the same time. Therefore, we use one log file per process. Note that this is not relevant for Worker roles. For more on the execution model, take a look here.

So, now that the new custom appender is ready, we need to change the Log4Net configuration to use it. Basically, we change the assembly type in the appender configuration section so that the configuration looks like this:

<log4net>
  <appender name="AzureRollingLogFileAppender" type="Demo.Log4Net.Azure.AzureAppender, Demo.Log4Net.Azure">
    <param name="File" value="app.log" />
    <param name="AppendToFile" value="true" />
    <param name="RollingStyle" value="Date" />
    <param name="StaticLogFileName" value="false" />
    <param name="DatePattern" value=".yyyy-MM-dd.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger [%appdomain] - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="AzureRollingLogFileAppender" />
  </root>
</log4net>

Now it’s time to run the application to see if the logging works. First, deploy to DevFabric, and then open the Windows Azure Compute Emulator. Right-click on the running instance, and click on Open local store....

Open the local store for a role instance running in DevFabric

Then navigate to the ‘Log4Net’ directory to find the log files:

Log files in local store in DevFabric

Persisting logs to Azure blob storage

The next issue we need to handle is the fact that the local file system in an Azure role instance is not persistent. Local data will be lost when the application is redeployed (and also when the role recycles, if you have configured it to clean on recycle). Furthermore, the only way to access the local file system is by using a Remote Desktop connection. In theory, you could probably also make the directory a shared drive accessible over the Internet, but you probably would not want to do that. Besides, it would be a headache if you have a lot of instances.

So, the solution that Azure offers for this is a scheduled synchronization of certain local resources (directories) to Azure blob storage. What we need to do is add the following code to our RoleEntryPoint descendant:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var diagnosticsConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
        diagnosticsConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        diagnosticsConfig.Directories.DataSources.Add(
                new DirectoryConfiguration
                {
                    Path = RoleEnvironment.GetLocalResource("Log4Net").RootPath,
                    DirectoryQuotaInMB = 2048,
                    Container = "wad-log4net"
                }
        );
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticsConfig);
        return base.OnStart();
    }
}

…and that’s it. Now you can run the application and observe that a container called ‘wad-log4net’ is created in your blob service account, containing the logs:

Logs in Azure blob store

(I use the AzureXplorer extension for Visual Studio)

The solution shown here targeted an ASP.NET application running as a WebRole, but the setup works equally well for Worker roles.

Azure DevFabric: clean out old deployments

During development of an Azure application, I noticed that my disk kept running full. The first time it happened, I thought nothing of it and just ran Disk Cleanup to clear out some obsolete files. Problem solved. At least, so I thought. A few hours later, I got a disk-full warning once again. It turned out to be caused by old local DevFabric deployments that used up disk space. Even though the deployments had been shut down, the deployment files lingered on. This is what the %USERPROFILE%\AppData directory looked like:

What I had to do to fix this, was to run the following command:

csrun /devfabric:clean

Then, the old deployment files were deleted:

Creating Azure blob storage shared access signatures using JavaScript

There are quite a lot of examples on the Internet on how to create a shared access signature for Azure storage. However, the examples are often in C# or pseudo-code.

A Shared Access Signature is basically what is called a message authentication code – MAC – that is used to grant a user access to a restricted resource, often for a certain period of time. (In other words, a MAC typically has a validity period.) In the case of Azure, a so-called hashed MAC – HMAC – is used. This is achieved using standard cryptography functionality, namely the SHA-256 hash algorithm. Because only standard cryptography functionality is involved, such MACs can also be created in other programming languages and on other platforms.

In this blog entry I will show how to create MACs for Azure blob storage using JavaScript.

The first thing we need, is a library with cryptography functions. I chose Crypto-JS:

<script type="text/javascript" src="http://crypto-js.googlecode.com/files/2.0.0-crypto-sha256.js"></script>
<script type="text/javascript" src="http://crypto-js.googlecode.com/files/2.0.0-hmac.min.js"></script>

Next, we create a function to generate the signature we need:

Date.prototype.toIso8061 = function() {
   var d = this;
   function p(i) { return ("0"  + i).slice(-2); }
   return "yyyy-MM-ddThh:mm:ssZ"
      .replace(/yyyy/, d.getUTCFullYear())
      .replace(/MM/, p(d.getUTCMonth() + 1)) // getUTCMonth() is zero-based
      .replace(/dd/, p(d.getUTCDate()))      // getUTCDay() would give the weekday, not the day of the month
      .replace(/hh/, p(d.getUTCHours()))
      .replace(/mm/, p(d.getUTCMinutes()))
      .replace(/ss/, p(d.getUTCSeconds()));
};
function generateSignature(base64EncodedSharedKey, startTime, endTime, account, container, blobName) {
   var stringToSign = "r\n{0}\n{1}\n/{2}/{3}/{4}\n"
      .replace(/{0}/, startTime.toIso8061())
      .replace(/{1}/, endTime.toIso8061())
      .replace(/{2}/, account)
      .replace(/{3}/, container)
      .replace(/{4}/, blobName);
   var accessKeyBytes = Crypto.util.base64ToBytes(base64EncodedSharedKey);
   return Crypto.util.bytesToBase64(Crypto.HMAC(Crypto.SHA256, stringToSign, accessKeyBytes, { asBytes: true }));
}

Then, we can construct a URL that we will use to request a resource from the blob store:

var sharedKey = "<base64-encoded storage account key>"; // placeholder; see the security warning below
var startTime = new Date(); // Start of the validity period of the MAC
var endTime = new Date(startTime.getTime() + (1000 * 60 * 30)); // End of the validity period, half an hour from now
var signature = generateSignature(sharedKey, startTime, endTime, "myaccount", "foryoureyesonly", "liveorletdie.avi");
var queryString = "?st={0}&se={1}&sr=b&sp=r&sig={2}"
   .replace(/{0}/, encodeURIComponent(startTime.toIso8061()))
   .replace(/{1}/, encodeURIComponent(endTime.toIso8061()))
   .replace(/{2}/, encodeURIComponent(signature));
var url = "http://myaccount.blob.core.windows.net/foryoureyesonly/liveorletdie.avi" + queryString;

Security warning

Although possible to do, it is not always advisable to generate the signature in a browser, because doing so requires access to the shared key. The shared key is highly sensitive data, and it is often unwise to trust the browser client with this information if it should not be disclosed to the browser user. There are, however, some use cases where this is OK, and I am planning to blog about one of them later on. Also, the code above can easily be used in a server-side scenario, such as a solution based on node.js.
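
For completeness, here is a rough server-side counterpart in C# using the standard HMACSHA256 class. It is only a sketch: the SasSigner class is made up, and it mirrors the simple, old-style string-to-sign used in the JavaScript above rather than the full current SAS format:

using System;
using System.Security.Cryptography;
using System.Text;

public static class SasSigner
{
    public static string GenerateSignature(string base64AccountKey, DateTime startUtc, DateTime endUtc,
                                            string account, string container, string blobName)
    {
        // Same fields, in the same order, as the JavaScript stringToSign above:
        // permissions, start, expiry, canonicalized resource, empty identifier
        string start = startUtc.ToString("s") + "Z"; // "s" gives the sortable yyyy-MM-ddTHH:mm:ss pattern
        string end = endUtc.ToString("s") + "Z";
        string stringToSign = string.Format("r\n{0}\n{1}\n/{2}/{3}/{4}\n",
            start, end, account, container, blobName);

        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64AccountKey)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(hash);
        }
    }
}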