Tag Archives: Windows Azure

How-to start and stop Azure VMs at a schedule

I use Azure VMs for dev/test and I do not want them to run all night, as I have to pay for it. Therefore, I stop the VMs at night with a scheduler, as I do not always remember to stop the VMs after use.

Azure Automation is the right tool for the job. Azure Automation automates Azure management tasks and orchestrates actions across external systems from within Azure. You need an Azure Automation Account, which is a container for all your runbooks, runbook executions (jobs), and the assets that your runbooks depend on.

To execute runbooks, a set of user credentials needs to be stored as an asset. Create a new user as described in Azure Automation: Authenticating to Azure using Azure Active Directory.

Below is a guide on how to create the Azure Automation account and the runbook.

CreateRunbookStopVM

The new Azure Automation account lybAutomation and the runbook Stop Windows Azure Virtual Machines on a Schedule are created from the gallery. The content in the gallery comes from the Azure Script Center, which has PowerShell scripts covering many scenarios, but not all of them can be used with Azure Automation, as some scripts use features it does not support. You do get a warning if you select one that is not supported, but in my mind such scripts should not be available in the gallery at all.

This burned me the first time I tried Azure Automation: I used the Stop Windows Azure Virtual Machines on a Schedule script from the gallery, but it uses an on-premises scheduler.

You need to store the credentials of the user created earlier as an asset for the runbook. See below.

SetupRunBookCredentials

Then you need to configure the runbook script with the credentials and the Azure subscription where the virtual machines reside. See below.

ConfigureRunbookVmStop

You find your subscription name under “Subscriptions” in the top bar of the Azure portal.
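As a rough sketch, the configured runbook ends up looking something like the following. The runbook name, the credential asset name AzureCredential, and the subscription name are placeholders for your own values; the actual gallery runbook is more elaborate:

workflow Stop-AzureVMsOnSchedule
{
    # Get the credential asset stored earlier and log in to Azure
    # ("AzureCredential" is the name I gave the asset - use your own)
    $cred = Get-AutomationPSCredential -Name "AzureCredential"
    Add-AzureAccount -Credential $cred

    # Select the subscription that holds the virtual machines
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Stop every VM that is still running
    $vms = Get-AzureVM
    foreach ($vm in $vms)
    {
        if ($vm.PowerState -eq "Started")
        {
            Stop-AzureVM -Name $vm.Name -ServiceName $vm.ServiceName -Force
        }
    }
}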
Now you can test your runbook; all that remains is to set up the schedule so it runs every evening. See the guide below.

ConfigureRunbookSchedule

Be aware that the schedule time is in UTC, so you have to adjust it to your time zone. I expect the scheduler to get an overhaul, as it is too simple right now.

Minimizing the cost of dev/test environments in Azure

I use Windows Azure as my dev/test environment because it is fast and convenient to create new virtual machines or services. I use the MSDN Subscription Azure Benefits, which include free Azure Credits. The Azure Credits cover my dev/test needs even though I use more than a handful of VMs and services. I make smart use of the free Azure Credits by turning off VMs at night and during weekends, when I am not using them, which means that I can use 3-4 times more VMs on Azure than if I just let them run all the time. VMs are costly compared to PaaS services such as Azure WebSites, SQL Azure and Cloud Services, so the PaaS services are not a cost issue.

I manage the Azure VMs, and almost everything else, with the Server Explorer in Visual Studio. It is a quick way to start VMs in the morning.

StartStopAzureVmFromVs2013

If I have a list of VMs that I need to manage, then I use the Azure PowerShell cmdlets – see my How-to start and stop Azure VMs via PowerShell.
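As a quick sketch, starting a short list of VMs looks like this (vs2015 and lybCloudService are the names used in that post; the other VM names are made up):

# Example list of dev/test VMs to start in one go
$vmNames = "vs2015", "tfs2013", "sql2014"
foreach ($name in $vmNames)
{
    Start-AzureVM -Name $name -ServiceName "lybCloudService"
}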

Finally, I use Azure Automation to ensure that I never have an Azure VM running all night just because I forgot to shut it down – see How-to start and stop Azure VMs at a schedule. It automatically shuts down any VM running in my MSDN Subscription at 6 p.m. If I work later, I can just start the required VMs again – it only takes a couple of minutes.

How-to start and stop Azure VMs via PowerShell

With PowerShell it is fast and convenient to manage my development and test servers running on Windows Azure. It is just easier to use command-line tools than to log into the Azure management portal and shut down each VM. To set up PowerShell:

  1. Install the Azure PowerShell cmdlets
  2. Start the Azure PowerShell (do not start the regular PowerShell as it is not preconfigured with the Azure PowerShell cmdlets)
  3. Authorize Azure PowerShell to access your Azure subscriptions by typing in the Azure PowerShell shell:
    Add-AzureAccount
    

    In the sign-in window, provide your Microsoft credentials for the Azure account.

If you, like me, have multiple Azure subscriptions, change the default subscription with:

Select-AzureSubscription [-SubscriptionName]
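For example (the subscription name below is just a placeholder – use the exact name shown in the portal):

Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN"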

To start an Azure VM the syntax is:

Start-AzureVM [-Name] [-ServiceName]

Starting a VM named vs2015 in the cloud service lybCloudService requires as little as:

Start-AzureVM vs2015 lybCloudService

Stopping the VM is just as easy:

Stop-AzureVM [-Name] [-ServiceName]

If it is the last running VM in the cloud service, you will be asked whether you want to deallocate the cloud service or not, as stopping the last VM releases its public IP address. That is not a problem if you access your VM via DNS name – which most people do.
You can suppress the question by appending -Force like this:

Stop-AzureVM vs2015 lybCloudService -Force

There are many useful Azure PowerShell cmdlets. To list them all:

Help Azure

To get details on a specific Azure PowerShell cmdlet:

Man <cmdlet name>

List all VMs:

Get-AzureVM

Get details of a specific VM:

Get-AzureVM [-Name] [-ServiceName]
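As a small sketch combining the cmdlets above, the following stops every VM that is still running in the current subscription (the VM objects returned by Get-AzureVM expose Name, ServiceName and PowerState properties; use it with care, as -Force skips the deallocation prompt):

Get-AzureVM |
    Where-Object { $_.PowerState -eq "Started" } |
    ForEach-Object { Stop-AzureVM -Name $_.Name -ServiceName $_.ServiceName -Force }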

The PowerShell prompt is just like a normal command prompt, so you can use tab completion and F7 to show all executed commands.

Using Lucene.Net with Microsoft Azure

Lucene indexes are usually stored on the file system and preferably on the local file system. In Azure there are additional types of storage with different capabilities, each with distinct benefits and drawbacks. The options for storing Lucene indexes in Azure are:

  • Azure CloudDrive
  • Azure Blob Storage

Azure CloudDrive

CloudDrive is the obvious solution, as it is comparable to on-premises file systems with mountable virtual hard drives (VHDs). CloudDrive is, however, not the optimal choice, as it imposes notable limitations. The most significant limitation is that only one web role, worker role or VM role can mount the CloudDrive at a time with read/write access. It is possible to mount multiple read-only snapshots of a CloudDrive, but you have to manage the creation of new snapshots yourself, depending on the acceptable staleness of the Lucene indexes.

Azure Blob Storage

The alternative Lucene index storage solution is Blob Storage. Luckily, a Lucene directory (Lucene index storage) implementation for Azure Blob Storage exists in the Azure library for Lucene.Net. It is called AzureDirectory and allows any role to modify the index, but only one role at a time. Furthermore, each Lucene segment (see Lucene Index Segments below) is stored in a separate blob, so many blobs are used at the same time. This allows the implementation to cache each segment locally and retrieve blobs from Blob Storage only when new segments are created. Consequently, the compound file format should not be used, and optimizing the Lucene index is discouraged.

Code sample

Getting Lucene.Net up and running is simple, and using it with the Azure library for Lucene.Net requires only the Lucene directory to be changed, as highlighted below in the Lucene index and search example. Most of it is Azure-specific configuration plumbing.

// Lucene version used by the analyzer and query parser
Lucene.Net.Util.Version version = Lucene.Net.Util.Version.LUCENE_29;

// Let CloudStorageAccount read settings from the role configuration
CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(RoleEnvironment
        .GetConfigurationSettingValue(configName)));

// Storage account that holds the Lucene index blobs
var cloudAccount = CloudStorageAccount
    .FromConfigurationSetting("LuceneBlobStorage");

// Local cache for the Lucene segments, here kept in memory
var cacheDirectory = new RAMDirectory();

// Name of the Lucene index in Blob Storage
var indexName = "MyLuceneIndex";
var azureDirectory =
    new AzureDirectory(cloudAccount, indexName, cacheDirectory);

var analyzer = new StandardAnalyzer(version);

// Add content to the index
var indexWriter = new IndexWriter(azureDirectory, analyzer,
    IndexWriter.MaxFieldLength.UNLIMITED);
// Compound files are discouraged with AzureDirectory (see facts below)
indexWriter.SetUseCompoundFile(false);

foreach (var document in CreateDocuments())
{
    indexWriter.AddDocument(document);
}

indexWriter.Commit();
indexWriter.Close();

// Search for the content
var parser = new QueryParser(version, "text", analyzer);
Query q = parser.Parse("azure");

var searcher = new IndexSearcher(azureDirectory, true);

TopDocs hits = searcher.Search(q, null, 5, Sort.RELEVANCE);

foreach (ScoreDoc match in hits.scoreDocs)
{
    Document doc = searcher.Doc(match.doc);

    var id = doc.Get("id");
    var text = doc.Get("text");
}
searcher.Close();

Download the reference example, which uses Azure SDK 1.3 and Lucene.Net 2.9 in a console application that connects either to the Development Fabric or to your Blob Storage account.

Lucene Index Segments (simplified)

Segments are the essential building block in Lucene. A Lucene index consists of one or more segments, each a standalone index. Segments are immutable and created when an IndexWriter flushes. Deleted or updated documents are therefore not removed from the original segment; they are only marked as deleted there, and new and updated documents are stored in a new segment.

Optimizing an index reduces the number of segments, by creating a new segment with all the content and deleting the old ones.

Azure library for Lucene.Net facts

  • It is licensed under Ms-PL, so you can do pretty much whatever you want with the code.
  • Based on Block Blobs (optimized for streaming), which is in tune with Lucene’s incremental indexing architecture (immutable segments); the caching features of AzureDirectory void the need for random read/write access to Blob Storage.
  • Caches index segments locally in any Lucene directory (e.g. RAMDirectory) and by default in the volatile Local Storage.
  • Calling Optimize recreates the entire blob, because all Lucene segments are combined into one segment. Consider not optimizing.
  • Do not use the Lucene compound file format, as index changes will recreate the entire blob; it also stores the entire index in one blob (plus metadata blobs).
  • Do use a VM role size (Small, Medium, Large or ExtraLarge) where the Local Resource size is larger than the Lucene index, as the Lucene segments are cached by default in Local Resource storage.

Azure CloudDrive facts

  • Only Fixed Size VHDs are supported.
  • Volatile Local Resources can be used to cache VHD content.
  • Based on Page Blobs (optimized for random read/write).
  • Stores the entire VHD in one Page Blob and is therefore restricted to the Page Blob maximum size of 1 TByte.
  • A role can mount up to 16 drives.
  • A CloudDrive can only be mounted to a single VM instance at a time for read/write access.
  • Snapshot CloudDrives are read-only and can be mounted as read-only drives by multiple different roles at the same time.

Additional Azure references