Manage your DNS records with PowerShell

In my video course PowerShell 5 Recipes, I used the DNSimple API as an example of writing PowerShell modules. For the course, I published the resulting module to the PowerShell Gallery. Even though it is not feature-complete with respect to the DNSimple API, it is still usable for the core parts of the API.

Getting started

In order to access the DNSimple API, you need an access token. To create one, log in to the web page and generate it under <account id>/account/access_tokens

For simplicity going forward, create a PowerShell object to hold the account id and the access token:

$account = [pscustomobject]@{
    Account     = 27928
    AccessToken = (ConvertTo-SecureString 'G2uiIZLhsHNWt4BFNq6Y8VnF1iRz2sB1' -AsPlainText -Force)
}
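The reason an object like this can be piped into the module's cmdlets is, presumably, that their -Account and -AccessToken parameters bind from the pipeline by property name. A minimal sketch of how a cmdlet can declare such parameters (the function name and body here are hypothetical, not part of DnsimplePS):

```powershell
function Get-ZoneRecordSketch {
    [CmdletBinding()]
    param(
        # These bind from the piped object's Account and AccessToken properties
        [Parameter(ValueFromPipelineByPropertyName)] [int] $Account,
        [Parameter(ValueFromPipelineByPropertyName)] [securestring] $AccessToken,
        [Parameter(Mandatory)] [string] $Zone
    )
    process {
        "Would call the DNSimple API for account $Account, zone $Zone"
    }
}
```

With a declaration like this, `$account | Get-ZoneRecordSketch -Zone example.com` picks the account id and token straight off the piped object.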

The next step is to install the module from the PowerShell Gallery and load it into the current PowerShell session:

Find-Module DnsimplePS | Install-Module -Scope CurrentUser
Import-Module DnsimplePS

In order to check the available cmdlets, list them:

Get-Command -Module DnsimplePS

Listing, adding and removing Zone records

The simplest thing you can do is to list the records for your domain (example.com is a placeholder for your zone):

$account | Get-ZoneRecord -Zone example.com

You can also search for records of a certain type (case-sensitive):

$account | Get-ZoneRecord -Zone example.com -RecordType MX

Or, you can get a certain record by name:

$account | Get-ZoneRecord -Zone example.com -Name www

Or, you can match records by prefix using -NameLike:

$account | Get-ZoneRecord -Zone example.com -NameLike w

You can also add a record:

$account | Add-ZoneRecord -Zone example.com -RecordType URL `
  -Name dnsimple -Content https://dnsimple.com/

And you can delete a record:

$account | Remove-ZoneRecord -Zone example.com -Id 11601106
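Because the cmdlets compose in the pipeline, small maintenance tasks fall out naturally. For example, here is a sketch of removing every record whose name starts with a given prefix (example.com is a placeholder zone, and I assume the objects returned by Get-ZoneRecord expose an Id property, as the -Id parameter above suggests):

```powershell
# Remove all records whose name starts with 'tmp-' (sketch)
$account | Get-ZoneRecord -Zone example.com -NameLike tmp- |
    ForEach-Object { $account | Remove-ZoneRecord -Zone example.com -Id $_.Id }
```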

Storing your access token securely

In the getting started section above, I directed you to go to the DNSimple web page to get an access token. This is often not practical in automated scripts, so I provided a couple of cmdlets to store and retrieve an access token from a file, encrypted under the current user’s Windows credentials.

So, when you have retrieved the token from the web page, you can store it securely like so ($t holding the token as a SecureString):

Write-AccessToken -AccessToken $t -Account 27928

This will create a file named .dnsimple.tokens next to your $PROFILE, with the token stored in encrypted form.
If you want to place the file somewhere else, point to it using the -Path parameter.
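Under the hood, this kind of per-user encryption is what PowerShell's standard SecureString serialization gives you on Windows. A small sketch of the round trip, using only standard cmdlets (nothing specific to DnsimplePS):

```powershell
# DPAPI round trip: only the same Windows user on the same machine can decrypt
$secure    = ConvertTo-SecureString 'not-a-real-token' -AsPlainText -Force
$encrypted = $secure | ConvertFrom-SecureString   # encrypted, printable string
$restored  = $encrypted | ConvertTo-SecureString  # back to a SecureString
```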

Extra: URL-shortener

In order to show off PowerShell modules being dependent on other modules, I created a second module that was dependent on the DnsimplePS module. This module has one cmdlet which is a short-hand for the special case I showed above for adding a record.

It creates a URL-type record, which is basically a redirect record that allows the creation of short URLs.


$account | Add-ShortUrl -Zone example.com -Name dnsimple -Url https://dnsimple.com/

In the same vein, you can delete the short URL:

$account | Remove-ShortUrl -Zone example.com -Name dnsimple

My PowerShell video course is available!

For the last few months, I have been working on my PowerShell 5 Recipes video course. It is now finally published!


Why make a video course? you might ask. I come from a system development background, having worked with many development platforms in my time (Java, .NET, Lotus Notes, JavaScript, etc.). Back in 2011, after resisting for a while, I decided to learn PowerShell. So I dived in, head first. Why? Because I realized that in order to get a better delivery process for software, we need a broader scope than just the development per se. We need a holistic view of the environment the applications will run in, and we need to automate the deployment. This has become more and more evident with the coming of DevOps and is key to continuous delivery.

I am first and foremost a PowerShell user, and I do not consider myself an expert. Maybe a little more advanced than the average user, though. When the opportunity came to create a video course, the temptation to do something new was too hard to resist. I hope I have been able to keep the user's point of view, resulting in a pragmatic, hands-on course.

For whom?

The course is intended as a smorgasbord of PowerShell topics relevant to the daily work of developers as well as IT pros. For people new to PowerShell, it will get you started by installing and/or upgrading to the latest version (even on Linux), customizing, and setting up the environment to your preferences. For developers, it gives you hands-on guidance for setting up automated build and deployment. For DevOps (and IT), it guides you in provisioning Azure resources. For more advanced users, it gives you recipes on how to develop reusable scripts, handle modules, and publish your scripts to repositories.


HTTPS is here

During the last few months, I have written several posts on my company's blog about how to secure a site with HTTPS. I started off talking about how to encrypt an Azure web site with Let's Encrypt, and then continued on to discuss how to prevent the browser from being tricked into making non-HTTPS requests to the server. Finally, I talked about how to narrow the range of certificate issuers we want the browser to trust for our site, using so-called certificate pinning, in order to prevent ill-behaving issuers from making our site insecure.

Quite recently, there have been discussions about how HTTPS is gaining traction and becoming the norm. Your web site should use it, too.

Look up the Norwegian dictionaries from PowerShell

Using the command line is fast. If you want to look up words in Bokmålsordboka or Nynorskordboka online, you can now do that from PowerShell. Here is how to set it up:

If you don't already have it, install the Scoop package manager:

iex (new-object net.webclient).downloadstring('https://get.scoop.sh')

Next, install Git (which Scoop needs for adding buckets) and add this software source:

scoop install git-with-openssh
scoop bucket add vidars

Then install the `norskord` package:

scoop install norskord

Once that is done, you are ready! Here is how you look up a word in Bokmålsordboka:

bokm omkostning

Similarly, you can search Nynorskordboka:

nyn bevegelse

Language nerd? Yes.

(Edit: fixed the install script after feedback from @yngvar on Disqus.)

Installing Logstash on Windows using Scoop

I just made Logstash available for installation on Windows through the brilliant command-line installer Scoop. This makes it very easy to install, upgrade or uninstall Logstash on your system. I earlier posted similar instructions on how to install Logstash's companion products Elasticsearch and Kibana on Windows on my employer's blog.

Here’s how you do it.

If you haven’t already, install the command-line installer by running the following in PowerShell:

iex (new-object net.webclient).downloadstring('https://get.scoop.sh')

Then, add the extra program definitions (definition lists are known as ‘buckets’ in Scoop):

scoop bucket add extras

That’s it for the installer part. Finally, install Logstash:

scoop install logstash --global

That’s it! To prove that the installation went well, run Logstash with a simple configuration:

logstash -e 'input { stdin { } } output { stdout {} }'

You can then go on to install Logstash as a Windows service, and start it:

logstash_service install -configPath path\to\logstash.conf
logstash_service start
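The configuration file passed via -configPath can start out as a file-based version of the same trivial pipeline used above (the path and file name are up to you):

```
# path\to\logstash.conf (sketch): read events from stdin, print them to stdout
input {
  stdin { }
}
output {
  stdout { }
}
```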

Installing RabbitMQ on Windows using Scoop

I just made RabbitMQ available for installation on Windows through the brilliant command-line installer Scoop. This makes it very easy to install, upgrade or uninstall RabbitMQ from your system.

Here’s how you do it.

If you haven’t already, install the command-line installer by running the following in PowerShell:

iex (new-object net.webclient).downloadstring('https://get.scoop.sh')

Then, add my custom program definitions (definition lists are known as ‘buckets’ in Scoop):

scoop bucket add vidars

That’s it for the installer part. Then, install Erlang on your system, if you don’t have it already:

scoop install erlang18 --global

Finally, install RabbitMQ:

scoop install rabbitmq --global

That’s it!
To prove that the installation went well, run RabbitMQ and query for status:

rabbitmq-server -detached
rabbitmqctl status

Running Azure emulators in an on-site test environment

The Azure compute and storage emulators enable testing and debugging of Azure cloud services on the developer’s local computer. On my current project, I ran into the need to install and run the emulators in the test environment. I will not go into exactly why this was needed, but it could be an interim solution for trying out the technology before your customer decides to establish an environment in Azure. There were quite a few hurdles along the way, and I will try to summarize them all in this post.

So the basic setup that I will explain in this blog post is to build a package on the Team City build server for deployment to the compute emulator, and then do the deployment using Octopus Deploy:

+-------------+   +--------------+   +---------------+   +--------------------+
|             |   |              |   |               |   |  Test environment  |
| Dev machine +---> Build server +---> Deploy server +---> (Compute emulator) |
|             |   | (Team City)  |   | (Octopus)     |   | (Storage emulator) |
|             |   |              |   |               |   |                    |
+-------------+   +--------------+   +---------------+   +--------------------+

Preparing the test environment

This step consists of preparing the test environment for running the emulators.

Installing the SDK

First of all, let’s install the necessary software. I used the latest version (as of September 2014) of the Microsoft Azure SDK, version 2.4. It is available in the Microsoft Web Platform Installer:
Azure SDK 2.4 in Web Platform installer
Take note of the installation paths for the emulators. You’ll need them later on:

Compute emulator: C:\Program Files\Microsoft SDKs\Azure\Emulator
Storage emulator: C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator

Create a user for the emulator

The first problem I ran into concerned which user should run the deployment scripts. In a development setting, this is the currently logged-in user on the computer, but in a test environment, this is not so.

I decided to create a domain-managed service account named “octopussy” for this. (You know, that James Bond movie.) Then, I made sure that the user was a local admin on the test machine by running:

net localgroup administrators domain1\octopussy /add

In order for the Octopus Deploy tentacle to be able to run the deployment, the tentacle service must run as the aforementioned user account:
Setting user for Octopus Tentacle service

Creating windows services for storage and compute emulators

In a normal development situation, the emulators run as the logged-in user. If you remotely log in to a computer and start the emulators, they will shut down when you log off. In a test environment, we need the emulators to keep running, so you should set them up to run as services. There are several ways to do this; I chose to use the Non-Sucking Service Manager (NSSM).

First, create command files that start the emulators:

storage_service.cmd:

@echo off
"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe" start

devfabric_service.cmd:

@echo off
"C:\Program Files\Microsoft SDKs\Azure\Emulator\csrun.exe" /devfabric

Once the command files are in place, define and start the services:

.\nssm.exe install az_storage C:\app\storage_service.cmd
.\nssm.exe set az_storage ObjectName 'domain1\octopussy' 'PWD'
.\nssm.exe set az_storage Start service_auto_start
start-service az_storage
.\nssm.exe install az_fabric C:\app\devfabric_service.cmd
.\nssm.exe set az_fabric ObjectName 'domain1\octopussy' 'PWD'
.\nssm.exe set az_fabric Start service_auto_start
start-service az_fabric
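To convince yourself that the services actually came up, a quick check from PowerShell (a sketch, using the service names defined above):

```powershell
# Both emulator services should report Status 'Running'
Get-Service az_storage, az_fabric | Format-Table Name, Status
```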

Change the storage endpoints

In a test environment, you would often like to access the storage emulator from remote machines, for instance for running integration tests. The storage emulator, again being focused on local development, only listens on localhost. To fix this, you need to edit the file

C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe.config

By default, the settings for the endpoints are like so:

    <service name="Blob"  url="http://127.0.0.1:10000/" />
    <service name="Queue" url="http://127.0.0.1:10001/" />
    <service name="Table" url="http://127.0.0.1:10002/" />

The host references should be changed to the host’s IP address or NetBIOS name, which can easily be found using ipconfig. For example (with 192.168.0.10 as a placeholder address):

    <service name="Blob"  url="http://192.168.0.10:10000/" />
    <service name="Queue" url="http://192.168.0.10:10001/" />
    <service name="Table" url="http://192.168.0.10:10002/" />

I found this trick here.

What we can do now is reach the storage emulator endpoints from a remote client. To do this, we need to set the DevelopmentStorageProxyUri parameter in the connection string, substituting the host's address, like so:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://192.168.0.10
In some circumstances, for instance when accessing the storage services using the Visual Studio server explorer, you cannot use the UseDevelopmentStorage parameter in the connection string. Then you need to spell out the endpoints, using the well-known development storage account name and key:

BlobEndpoint=http://192.168.0.10:10000/devstoreaccount1;QueueEndpoint=http://192.168.0.10:10001/devstoreaccount1;TableEndpoint=http://192.168.0.10:10002/devstoreaccount1;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
Open endpoints in firewall

If you run your test environment on a server flavor of Windows, you might have to open up the storage emulator TCP ports in the firewall:

netsh advfirewall firewall add rule name=storage_blob  dir=in action=allow protocol=tcp localport=10000
netsh advfirewall firewall add rule name=storage_queue dir=in action=allow protocol=tcp localport=10001
netsh advfirewall firewall add rule name=storage_table dir=in action=allow protocol=tcp localport=10002
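From a remote machine, you can then verify that the ports are reachable, for instance with Test-NetConnection (myhost is a placeholder for the test server's name):

```powershell
# Check the three storage emulator ports from a remote client (sketch)
10000..10002 | ForEach-Object {
    Test-NetConnection -ComputerName myhost -Port $_ |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```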

Create a package for emulator deployment

So, now that we have the test environment set with the Azure emulators, let’s prepare our application for build and deployment. My example application consists of one Worker role:

Solution with Worker Role

One challenge here is that if you build the cloud project for the emulator, no real “package” is created. The files are laid out as a directory structure to be picked up locally by the emulator. The package built for the real McCoy is not compatible with the emulator. Also, in order to deploy the code using Octopus Deploy, we need to wrap it as a Nuget package. The OctoPack tool is the natural choice for such a task, but it does not support the cloud project type.

Console project as ‘wrapper’

To fix the problem with creating a Nuget package for deployment to the emulator, we create a console project to act as a “wrapper.” We have no interest in a console application per se, but only in the project as a vehicle to create a Nuget package. So, we add a console project to our solution:

Added wrapper project to solution

We have to make sure that the WorkerRole project is built before our wrapper project, so we check that the build order in the solution is correct:

Build order

Customizing the build

What we want is for the build to perform the following steps:

  1. Build WorkerRole dll
  2. Build Worker (cloud project) – prepare files for the compute emulator
  3. Build Worker.Wrapper – package compute emulator files and deployment script into a Nuget package

The first two steps are already covered by the existing setup. So what we need to do in the third step is to copy the prepared files to the build output directory of the Wrapper project, and then have OctoPack pick them up from there.

To copy the files, we set up a custom build step in the Wrapper project:

  <Target Name="CopyCsxFiles" AfterTargets="Build">
    <CreateItem Include="$(CsxDirectory)\**\*.*">
      <Output TaskParameter="Include" ItemName="CsxFilesToCopy" />
    </CreateItem>
    <ItemGroup>
      <CsConfigFile Include="$(MSBuildProjectDirectory)\..\Worker\ServiceConfiguration.Cloud.cscfg" />
    </ItemGroup>
    <Copy SourceFiles="@(CsxFilesToCopy)" DestinationFiles="@(CsxFilesToCopy->'$(OutDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
    <Copy SourceFiles="@(CsConfigFile)" DestinationFolder="$(OutDir)" />
  </Target>

We copy all the files in the csx directory in addition to the cloud project configuration file.

The next step is then to install OctoPack in the Wrapper project. This is done using the package manager console:

Install-Package OctoPack -ProjectName Worker.Wrapper

We’re almost set. As I said earlier, the Wrapper project is a console project, but we are not really interested in a console application. So in order to remove all unnecessary gunk from our deployment package, we specify a .nuspec file where we explicitly list the files we need in the package. The .nuspec file name is prefixed with the project name (in this case, Worker.Wrapper.nuspec), and it contains:

<package xmlns="">
  <files>
    <file src="bin\release\roles\**\*.*" target="roles" />
    <file src="bin\release\Service*" target="." />
    <file src="PostDeploy.ps1" target="." />
    <file src="PreDeploy.ps1" target="." />
  </files>
</package>

We can now create a deployment package from MSBuild, and configure Team City to build this artifact:

msbuild WorkerExample.sln /p:RunOctoPack=true /p:Configuration=Release /p:PackageForComputeEmulator=true
dir Worker.Wrapper\bin\release\*.nupkg

Notice that we set the property PackageForComputeEmulator to true. If not, MSBuild will package for the real Azure compute service in the Release configuration.

Deployment scripts

The final step is to deploy the application. Using Octopus Deploy, this is quite simple. Octopus has a convention where you can add PowerShell scripts to be executed before and after the deployment. The deployment of a console app using Octopus Deploy consists of unpacking the package on the target server. In our situation we need to tell the emulator to pick up and deploy the application files afterwards.

In order to finish up the deployment step, we create one file PreDeploy.ps1 that is executed before the package is unzipped, and one file PostDeploy.ps1 to be run afterwards.


PreDeploy.ps1

In this step, we make sure that the emulators are running, and remove any existing deployment in the emulator:

$computeEmulator = "${env:ProgramFiles}\Microsoft SDKs\Azure\Emulator\csrun.exe"
$storageEmulator = "${env:ProgramFiles(x86)}\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe"

$ErrorActionPreference = 'continue'
Write-Host "Starting the storage emulator, $storageEmulator start"
& $storageEmulator start 2>&1 | Out-Null

$ErrorActionPreference = 'stop'
Write-Host "Checking if compute emulator is running"
& $computeEmulator /status 2>&1 | Out-Null
if (!$?) {
    Write-Host "Compute emulator is not running. Starting..."
    & $computeEmulator /devfabric:start
}

Write-Host "Removing existing deployments, running $computeEmulator /removeall"
& $computeEmulator /removeall


PostDeploy.ps1

In this step, we deploy the new application files to the emulator:

$here = split-path $script:MyInvocation.MyCommand.Path
$computeEmulator = "${env:ProgramFiles}\Microsoft SDKs\Azure\Emulator\csrun.exe"

$ErrorActionPreference = 'stop'
$configFile = join-path $here 'ServiceConfiguration.Cloud.cscfg'

Write-host "Deploying to the compute emulator $computeEmulator $here $configFile"
& $computeEmulator $here $configFile

And with that, we are done.

Moving my blog to Azure Web Sites

This blog used to run on a dedicated WordPress installation at a traditional hosting provider. A couple of weeks ago, I decided to move it to Azure Web Sites. There were many reasons for this, all of them revolving around me wanting to investigate the technology and the offerings in Microsoft Azure in general, and in Web Sites in particular. Here’s how it went.

Creating the new site

The first step was obviously to create the new Web site in Azure. This turned out being very simple. I used the first part of Dave Bost’s blog series on Moving a WordPress Blog to Windows Azure for guidance. It all went easy peasy.

Moving the content

When WordPress had been installed, the next step was to move the content. This step was a bit troublesome, and I had to try a few times before making it work. I tried Dave’s approach in Moving a WordPress Blog to Windows Azure – Part 2, but it did not work out quite well for me. First of all, the “Portable phpMyAdmin” plugin had evolved into “Adminer”, and a few of the features seem to have changed on the way.

I ended up first copying the content of the wp-content/uploads directory using FTP. Using FileZilla as a client, all I needed to do was to reset my deployment credentials for the web site in Azure. I was then able to log in using FTP. (I was not able to make SFTP work, though.)

I then reinstalled the themes and plugins on the new site manually. After all, this was a good opportunity to clean up anyway, leaving the dated, unused plugins and themes behind.

Finally, I moved the blog posts using the built-in export and import functionality in WordPress.

Changing URLs

My intention was to host my existing blog address on the newly installed site. One tiny detail regarding URLs was that on my old site, the WordPress installation was in the subdirectory /nblog, while on my new site it was in the root directory. So I needed to forward requests from /nblog/* to /*. My first idea was to use IIS URL Rewrite for this, but according to this Stack Overflow question, that module is not installed in Azure Web Sites. Instead, I went on to create an extremely simple ASP.NET MVC app to do the redirection. (Yes, I could probably have pulled this off using Web API as well, but MVC is more familiar to me.)

Here is the essential code:

public class HomeController : Controller
{
    public ActionResult Index(string id)
    {
        var queryString = Request.QueryString.ToString();
        var location = "/" + id + (string.IsNullOrEmpty(queryString) ? string.Empty : "?" + queryString);
        return new RedirectResult(location, true);
    }
}
The trick, then, is to install this application in IIS under /nblog so that it will handle all requests to /nblog/*. To do this, I needed to use the FTP method for publishing the app to Azure:

Publish using FTP

Notice that the site path is set to site/nblog-redirector, which will locate it “beside” the WordPress installation at site/wwwroot on the server. Then, the application can be set up in the Azure Management portal:

Applications and virtual directories

As you can see from the picture above, I also had to take care of some other content besides my blog, which I could FTP to the new site and register as virtual directories in IIS. Pretty nifty.

Using a custom domain

I wanted to host my custom domain on the new web site in Azure. There were essentially two steps needed for this, only one of which was apparent to me at the time. The apparent one was that I needed a DNS record pointing to the web site. The existing record was an A record that pointed to my previous hosting provider’s infrastructure. Because of the scalable, high-availability nature of Azure Web Sites, this needed to be replaced by a CNAME record pointing to the site’s azurewebsites.net host name. This was easy to set up at my current DNS provider:


Once set up, all there was to do was to wait for the DNS change to propagate. At least, so I thought. The final piece of the puzzle was that the custom domain name needed to be registered with the Azure web site. There might be more to it, but I guess that Azure Web Sites uses host headers to distinguish requests in shared hosting scenarios. I also found that in order to add custom domain names, I needed to change my hosting plan from “Free” to at least “Shared”. When I did, I could register my domain:
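Incidentally, this is the kind of DNS change that the DnsimplePS module from the first post could make scriptable. A sketch, with placeholder zone, record name and target, and assuming -RecordType also accepts CNAME (the post only demonstrates the URL type):

```powershell
# Hypothetical: add the CNAME for the Azure web site from PowerShell
$account | Add-ZoneRecord -Zone example.com -RecordType CNAME `
    -Name www -Content mysite.azurewebsites.net
```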

Setting up custom domain names in web sites

And voilà.

Deep thoughts on shallow topics