Category Archives: Windows Azure

Getting started with Windows Containers

Soon Windows Server 2016 will be released and so will the Docker Engine compatible Windows Containers feature. Here is a tutorial for you to get started with Windows Containers.

One thing to be aware of when working with containers is that the underlying host must run the same operating system family as the containers on it: Linux containers on Linux hosts and Windows containers on Windows hosts.

First a container host is needed – you can use Windows 10 Anniversary Update or Windows Server 2016.

The easiest way of getting started is to spin up a Windows Server 2016 on Azure (get a free trial) with the Container feature enabled.

  1. Select “New”
  2. Type “container” to filter for container images
  3. Select the “Windows Server 2016 with Containers” image (Technical Preview 5 as of writing)
  4. Follow the wizard to specify VM size etc.

Alternatively, you can follow Windows Containers on Windows Server guide to install the Container feature on an existing Windows Server 2016.

Once you have created the host you can connect to the host via RDP (in the Azure portal use the ”Connect” button in the top menu).

Start a command prompt or PowerShell. You can use PowerShell for Docker or the Docker CLI to execute Docker commands. The commands are the same across platforms – no matter if you are using Linux- or Windows-based containers. I’ll be using the Docker CLI commands. The common commands are:

  • docker info – shows version information etc.
  • docker images – shows all the images in the local repository
  • docker ps – shows all the running containers
  • docker ps -a – the -a (or --all) option also shows containers that are no longer running
  • docker run – runs an instance of an image

Let’s get started.

Run the following command to see which images are available in the local repository.

PS C:\Users\aly\Desktop> docker images
REPOSITORY                  TAG             IMAGE ID     CREATED      SIZE
microsoft/windowsservercore 10.0.14300.1030 02cb7f65d61b 10 weeks ago 7.764 GB
microsoft/windowsservercore latest          02cb7f65d61b 10 weeks ago 7.764 GB
PS C:\Users\aly\Desktop>

On my Windows Server 2016 Tech Preview 5 there are two images, both named microsoft/windowsservercore but with different tags. Both have the same image ID, so they are the same image with two different tags.

To start a container of the image tagged with ‘latest’ run the following:

PS C:\Users\aly\Desktop> docker run microsoft/windowsservercore:latest
Microsoft Windows [Version 10.0.14300]
(c) 2016 Microsoft Corporation. All rights reserved.

PS C:\Users\aly\Desktop>

The tag is optional, but the default value is ‘latest’.
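The tag-defaulting rule can be sketched in a few lines (Python purely for illustration; this is a simplification of how Docker resolves image references, not Docker's actual code):

```python
def parse_image(ref, default_tag="latest"):
    """Split an image reference into (repository, tag).

    Simplified illustration: a ':' after the last '/' separates the tag;
    if no tag is given, Docker assumes 'latest'.
    """
    name, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:      # no tag present (or the ':' belonged to a registry host)
        return ref, default_tag
    return name, tag

print(parse_image("microsoft/windowsservercore"))
# ('microsoft/windowsservercore', 'latest')
print(parse_image("microsoft/windowsservercore:10.0.14300.1030"))
# ('microsoft/windowsservercore', '10.0.14300.1030')
```

So `docker run microsoft/windowsservercore` and `docker run microsoft/windowsservercore:latest` start the same image.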

The container started and a command prompt banner appeared, but with no interactive terminal attached, cmd exited immediately and control returned to my PowerShell prompt.

If you want to interact with the container, add the -it options (-i keeps input open, -t allocates a terminal). You can also specify which process should be run in the container (cmd is the default for this image):

docker run -it microsoft/windowsservercore:latest cmd

Now a command prompt appears and you are in the context of the container. If you modify the file system, e.g. add or delete a file, the changes apply only to the container and not to the host.

Create a simple file like this:

echo "Hello Windows Containers" > hello.txt

You can exit the container by typing exit, and the container will terminate. Alternatively, press CTRL+P followed by CTRL+Q to detach and leave the container running.
If you left the container running, you can see the container by listing the Docker processes:

PS C:\Users\aly\Desktop> docker ps
CONTAINER ID IMAGE                              COMMAND CREATED       STATUS       PORTS NAMES
23ca16bb6fdb microsoft/windowsservercore:latest "cmd"   4 minutes ago Up 4 minutes       pedantic_lamport

If the container was terminated, the -a option needs to be appended.
You can reattach to a running container by specifying its container ID or name, in my case 23ca16bb6fdb or pedantic_lamport, like so:

docker attach 23ca16bb6fdb

You only have the Windows Server Core image in the local repository, but you can download others by pulling from Docker Hub.

docker pull microsoft/nanoserver

Remember that only Windows-based images will run on a Windows host, so if you try the Linux-based hello-world image, it will fail with a not-so-elaborate error message.

PS C:\Users\aly\Desktop> docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
c04b14da8d14: Extracting [==================================================>] 974 B/974 B
failed to register layer: re-exec error: exit status 1: output: ProcessBaseLayer C:\ProgramData\docker\winc266a137b0b1fffedf91d8cd6fcb6560f12afe5277e44bca8cb34ec530286: The system cannot find the path specified.

For now it is not easy to differentiate between Linux- and Windows-based images on Docker Hub. I would have wished for a filter, making it easier to find relevant images.

Microsoft has a public repository of all the official released Microsoft container images.

How-to start and stop Azure VMs at a schedule

I use Azure VMs for dev/test and I do not want them to run all night, as I have to pay for it. Therefore, I stop the VMs at night with a scheduler, as I do not always remember to stop the VMs after use.

Azure Automation is the right tool for the job. Azure Automation automates Azure management tasks and orchestrates actions across external systems from within Azure. You need an Azure Automation Account, which is a container for all your runbooks, runbook executions (jobs), and the assets that your runbooks depend on.

To execute runbooks, a set of user credentials needs to be stored as an asset. Create a new user as described in Azure Automation: Authenticating to Azure using Azure Active Directory.

The guide below shows how to create the Azure Automation account and the runbook.


The new Azure Automation account lybAutomation and the runbook Stop Windows Azure Virtual Machines on a Schedule are created from the gallery. The content in the gallery comes from the Azure Script Center. The Azure Script Center has many PowerShell scripts covering many scenarios, but not all can be used with Azure Automation, as some scripts use features not available in Azure Automation. You do get a warning if you select one that is not supported, but in my mind, it should not be available in the gallery at all.

This burned me the first time I tried Azure Automation: I used the Stop Windows Azure Virtual Machines on a Schedule runbook from the gallery, but it uses an on-premises scheduler.

You need to store the credentials of the user created earlier as an asset for the runbook. See below.
Then you need to configure the runbook script with the credentials and the Azure subscription where the virtual machines reside. See below.
You find your subscription name under “Subscriptions” in the top bar of the Azure portal.
Now you can test your runbook; all that remains is to set up the schedule so it runs every evening. See the guide below.


Be aware that the scheduled time is in UTC, so you have to correct the time according to your time zone. I expect the scheduler to get an overhaul, as it is too simple right now.
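The UTC correction is easy to compute; here is a small sketch (Python for illustration; Europe/Copenhagen is an assumed example time zone):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# I want the VMs stopped at 18:00 local time; the scheduler wants UTC.
# Europe/Copenhagen is an assumed example time zone.
local_stop = datetime(2016, 1, 15, 18, 0, tzinfo=ZoneInfo("Europe/Copenhagen"))
utc_stop = local_stop.astimezone(ZoneInfo("UTC"))
print(utc_stop.strftime("%H:%M"))  # 17:00 in winter (CET is UTC+1)
```

Note that the offset changes with daylight saving time, and the simple scheduler does not adjust for that.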

Minimizing the cost of dev/test environments in Azure

I use Windows Azure as my dev/test environment because it is fast and convenient to create new virtual machines or services. I use the MSDN Subscription Azure Benefits, which include free Azure credits. The credits cover my dev/test needs even though I use more than a handful of VMs and services. I make smart use of the free credits by turning off VMs at night and during weekends when I am not using them, which means I can run 3-4 times more VMs on Azure than if I just let them run all the time. VMs are costly compared to PaaS services such as Azure Websites, SQL Azure and Cloud Services, so the PaaS services are not a cost issue.
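The back-of-the-envelope math behind that 3-4x figure looks like this (a sketch; 10 working hours on weekdays is an assumed usage pattern):

```python
# Assumed usage pattern: VMs only run 10 hours per weekday.
hours_used = 5 * 10           # 50 hours of actual use per week
hours_always_on = 7 * 24      # 168 hours if left running around the clock
ratio = hours_always_on / hours_used
print(round(ratio, 1))        # 3.4 -> roughly 3-4x more VMs for the same credits
```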

I manage the Azure VMs and almost everything with the Server Explorer in Visual Studio. It is a quick way to start VMs in the morning.


If I have a list of VMs that I need to manage, then I use the Azure PowerShell cmdlets – see my How-to start and stop Azure VMs via PowerShell.

Finally, I use Azure Automation to ensure that I never have an Azure VM running all night just because I forgot to shut it down – see How-to start and stop Azure VMs at a schedule. It automatically shuts down any VM running in my MSDN Subscription at 6 p.m. If I work later, I can just start the required VMs again – it only takes a couple of minutes.

How-to start and stop Azure VMs via PowerShell

With PowerShell it is fast and convenient to manage my development and test servers running on Windows Azure. It is easier to use command-line tools than to log into the Azure management portal and shut down each VM. To set up PowerShell:

  1. Install the Azure PowerShell cmdlets
  2. Start the Azure PowerShell (do not start the regular PowerShell as it is not preconfigured with the Azure PowerShell cmdlets)
  3. Authorize Azure PowerShell to access your Azure subscriptions by typing in the Azure PowerShell shell:

    Add-AzureAccount

    In the sign-in window, provide your Microsoft credentials for the Azure account.

If you, like me, have multiple Azure subscriptions, change the default subscription with:

Select-AzureSubscription [-SubscriptionName]

To start an Azure VM the syntax is:

Start-AzureVM [-Name] [-ServiceName]

To start a VM named vs2015 in the cloud service lybCloudService requires as little as:

Start-AzureVM vs2015 lybCloudService

Stopping the VM is just as easy:

Stop-AzureVM [-Name] [-ServiceName]

If it is the last running VM in the cloud service, you will be asked whether to deallocate the cloud service or not, as deallocation releases the public IP address. That is not a problem if you access your VM via its DNS name – which most people do.
You can suppress the question by appending -Force like this:

Stop-AzureVM vs2015 lybCloudService -Force

There are many useful Azure PowerShell cmdlets to use. To list all Azure PowerShell cmdlets:

Help Azure

Get details on Azure PowerShell cmdlet:

Man <cmdlet name>

List all VMs:

Get-AzureVM
Get details of a specific VM:

Get-AzureVM [-Name] [-ServiceName]

The PowerShell prompt is just like a normal command prompt, so you can use tab completion and F7 to show all executed commands.

Using Lucene.Net with Microsoft Azure

Lucene indexes are usually stored on the file system and preferably on the local file system. In Azure there are additional types of storage with different capabilities, each with distinct benefits and drawbacks. The options for storing Lucene indexes in Azure are:

  • Azure CloudDrive
  • Azure Blob Storage

Azure CloudDrive

CloudDrive is the obvious solution, as it is comparable to an on-premises file system with mountable virtual hard drives (VHDs). CloudDrive is, however, not the optimal choice, as it imposes notable limitations. The most significant limitation is that only one web role, worker role or VM role can mount the CloudDrive at a time with read/write access. It is possible to mount multiple read-only snapshots of a CloudDrive, but you have to manage the creation of new snapshots yourself, depending on the acceptable staleness of the Lucene indexes.

Azure Blob Storage

The alternative Lucene index storage solution is Blob Storage. Luckily, a Lucene directory (Lucene index storage) implementation for Azure Blob Storage exists in the Azure Library for Lucene.Net. It is called AzureDirectory and allows any role to modify the index, though only one role at a time. Furthermore, each Lucene segment (see Lucene Index Segments) is stored in a separate blob, therefore utilizing many blobs at the same time. This allows the implementation to cache each segment locally and retrieve blobs from Blob Storage only when new segments are created. Consequently, the compound file format should not be used, and optimization of the Lucene index is discouraged.
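Because segments are immutable, a segment blob fetched once never needs to be fetched again. A toy sketch of that caching idea (Python for illustration only, not the actual AzureDirectory code):

```python
# Toy model: immutable segment blobs and a local cache (not real AzureDirectory code).
blob_store = {"segments_1": b"index data", "segments_2": b"more index data"}
local_cache = {}
downloads = 0

def read_segment(name):
    global downloads
    if name not in local_cache:            # only hit Blob Storage on a cache miss
        local_cache[name] = blob_store[name]
        downloads += 1
    return local_cache[name]

read_segment("segments_1")
read_segment("segments_1")                 # served from the local cache
read_segment("segments_2")
print(downloads)  # 2 blob downloads for 3 reads
```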

Code sample

Getting Lucene.Net up and running is simple, and using it with the Azure Library for Lucene.Net requires only the Lucene directory to be changed, as shown below in the Lucene index and search example. Most of the rest is Azure-specific configuration plumbing.

Lucene.Net.Util.Version version = Lucene.Net.Util.Version.LUCENE_29;

// Resolve storage configuration settings (in a console application, read
// them from app.config; "DataConnectionString" is an example setting name)
CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]));

var cloudAccount = CloudStorageAccount
    .FromConfigurationSetting("DataConnectionString");

// Local cache for the Lucene segments
var cacheDirectory = new RAMDirectory();

var indexName = "MyLuceneIndex";
var azureDirectory =
    new AzureDirectory(cloudAccount, indexName, cacheDirectory);

var analyzer = new StandardAnalyzer(version);

// Add content to the index
var indexWriter = new IndexWriter(azureDirectory, analyzer,
    true, IndexWriter.MaxFieldLength.UNLIMITED);

foreach (var document in CreateDocuments())
{
    indexWriter.AddDocument(document);
}

indexWriter.Commit();
indexWriter.Close();

// Search for the content
var parser = new QueryParser(version, "text", analyzer);
Query q = parser.Parse("azure");

var searcher = new IndexSearcher(azureDirectory, true);

TopDocs hits = searcher.Search(q, null, 5, Sort.RELEVANCE);

foreach (ScoreDoc match in hits.scoreDocs)
{
    Document doc = searcher.Doc(match.doc);

    var id = doc.Get("id");
    var text = doc.Get("text");
}

searcher.Close();
Download the reference example which uses Azure SDK 1.3 and Lucene.Net 2.9 in a console application connecting either to Development Fabric or your Blob Storage account.

Lucene Index Segments (simplified)

Segments are the essential building block in Lucene. A Lucene index consists of one or more segments, each a standalone index. Segments are immutable and are created when an IndexWriter flushes. Deletes or updates to documents in an existing segment therefore do not remove them from that segment; the documents are only marked as deleted, and new or updated documents are stored in a new segment.

Optimizing an index reduces the number of segments, by creating a new segment with all the content and deleting the old ones.
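The segment life cycle above can be sketched as a toy model (Python for illustration; real Lucene segments are on-disk files, not dictionaries):

```python
# Toy model of Lucene segments: immutable, deletes are only markers,
# and optimize merges everything into a single new segment.
segments = [{"docs": {"doc1", "doc2"}, "deleted": set()}]

def add_document(doc):
    segments.append({"docs": {doc}, "deleted": set()})   # new docs go to a new segment

def delete_document(doc):
    for seg in segments:
        if doc in seg["docs"]:
            seg["deleted"].add(doc)                      # mark as deleted, keep the bytes

def optimize():
    live = set()
    for seg in segments:
        live |= seg["docs"] - seg["deleted"]
    segments[:] = [{"docs": live, "deleted": set()}]     # one merged segment

add_document("doc3")
delete_document("doc2")
optimize()
print(len(segments), sorted(segments[0]["docs"]))  # 1 ['doc1', 'doc3']
```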

Azure library for Lucene.Net facts

  • It is licensed under Ms-PL, so you do pretty much whatever you want to do with the code.
  • Based on Block Blobs (optimized for streaming), which is in tune with Lucene’s incremental indexing architecture (immutable segments); the caching features of AzureDirectory void the need for random read/write access to Blob Storage.
  • Caches index segments locally in any Lucene directory (e.g. RAMDirectory) and by default in the volatile Local Storage.
  • Calling Optimize recreates the entire blob, because all Lucene segments are combined into one segment. Consider not optimizing.
  • Do not use Lucene compound files, as index changes will recreate the entire blob; compound files also store the entire index in one blob (plus metadata blobs).
  • Do use a VM role size (Small, Medium, Large or ExtraLarge) where the Local Resource size is larger than the Lucene index, as the Lucene segments are cached by default in Local Resource storage.

Azure CloudDrive facts

  • Only Fixed Size VHDs are supported.
  • Volatile Local Resources can be used to cache VHD content
  • Based on Page Blobs (optimized for random read/write).
  • Stores the entire VHD in one Page Blob and is therefore restricted to the Page Blob maximum limit of 1 TByte.
  • A role can mount up to 16 drives.
  • A CloudDrive can only be mounted to a single VM instance at a time for read/write access.
  • Snapshot CloudDrives are read-only and can be mounted as read-only drives by multiple different roles at the same time.

Additional Azure references