Lucene indexes are usually stored on the file system, preferably the local file system. In Azure there are additional types of storage, each with distinct benefits and drawbacks. The options for storing Lucene indexes in Azure are:
- Azure CloudDrive
- Azure Blob Storage
CloudDrive is the obvious solution, as it is comparable to on-premises file systems with mountable virtual hard drives (VHDs). CloudDrive is, however, not the optimal choice, as it imposes notable limitations. The most significant limitation is that only one web role, worker role, or VM role can mount the CloudDrive with read/write access at a time. It is possible to mount multiple read-only snapshots of a CloudDrive, but you have to manage the creation of new snapshots yourself, depending on the acceptable staleness of the Lucene indexes.
Azure Blob Storage
The alternative Lucene index storage solution is Blob Storage. Luckily, a Lucene directory (Lucene index storage) implementation for Azure Blob Storage exists in the Azure library for Lucene.Net. It is called AzureDirectory and allows any role to modify the index, but only one role at a time. Furthermore, each Lucene segment (see Lucene Index Segments below) is stored in a separate blob, so the index is spread across many blobs. This allows the implementation to cache each segment locally and retrieve a blob from Blob Storage only when a new segment is created. Consequently, the compound file format should not be used, and optimizing the Lucene index is discouraged.
Getting Lucene.Net up and running is simple, and using it with the Azure library for Lucene.Net requires only the Lucene directory to be changed, as highlighted below in the Lucene index and search example. Most of the example is Azure-specific configuration plumbing.
// Azure-specific configuration plumbing; the "DataConnectionString"
// setting name is illustrative.
CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]));
var cloudAccount =
    CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

Lucene.Net.Util.Version version = Lucene.Net.Util.Version.LUCENE_29;
var cacheDirectory = new RAMDirectory();
var indexName = "MyLuceneIndex";
var azureDirectory =
    new AzureDirectory(cloudAccount, indexName, cacheDirectory);
var analyzer = new StandardAnalyzer(version);

// Add content to the index
var indexWriter = new IndexWriter(azureDirectory, analyzer, true,
    IndexWriter.MaxFieldLength.UNLIMITED);
foreach (var document in CreateDocuments())
{
    indexWriter.AddDocument(document);
}
indexWriter.Close();

// Search for the content
var parser = new QueryParser(version, "text", analyzer);
Query q = parser.Parse("azure");
var searcher = new IndexSearcher(azureDirectory, true);
TopDocs hits = searcher.Search(q, null, 5, Sort.RELEVANCE);
foreach (ScoreDoc match in hits.scoreDocs)
{
    Document doc = searcher.Doc(match.doc);
    var id = doc.Get("id");
    var text = doc.Get("text");
    Console.WriteLine("Id: {0}, Text: {1}", id, text);
}
searcher.Close();
Download the reference example, which uses Azure SDK 1.3 and Lucene.Net 2.9 in a console application connecting either to the Development Fabric or to your Blob Storage account.
Lucene Index Segments (simplified)
Segments are the essential building block in Lucene. A Lucene index consists of one or more segments, each a standalone index. Segments are immutable and are created when an IndexWriter flushes. Documents deleted or updated in an existing segment are therefore not removed from the original segment, but merely marked as deleted, and new documents are stored in a new segment.
Optimizing an index reduces the number of segments, by creating a new segment with all the content and deleting the old ones.
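The flush-creates-a-segment behavior described above can be sketched with a plain RAMDirectory (the field names and content below are illustrative, not part of the reference example):

```csharp
// Each Commit() flushes the pending documents into a new, immutable
// segment; earlier segments are never modified.
var dir = new RAMDirectory();
var writer = new IndexWriter(dir,
    new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29),
    true, IndexWriter.MaxFieldLength.UNLIMITED);

var doc = new Document();
doc.Add(new Field("text", "first batch", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);
writer.Commit();   // flush -> first segment created

doc = new Document();
doc.Add(new Field("text", "second batch", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);
writer.Commit();   // flush -> second segment; the first is untouched
writer.Close();
```

With AzureDirectory, each of these segments maps to its own set of blobs, which is why incremental indexing uploads only the new segment.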
Azure library for Lucene.Net facts
- It is licensed under Ms-PL, so you can do pretty much whatever you want with the code.
- It is based on Block Blobs (optimized for streaming), which fits Lucene’s incremental indexing architecture (immutable segments), and the caching features of AzureDirectory remove the need for random read/write access to Blob Storage.
- Caches index segments locally in any Lucene directory (e.g. RAMDirectory) and by default in the volatile Local Storage.
- Calling Optimize recreates the entire blob, because all Lucene segments are combined into one segment. Consider not optimizing.
- Do not use Lucene compound files, as index changes will recreate the entire blob. Compound files also store the entire index in one blob (plus metadata blobs).
- Do use a VM role size (Small, Medium, Large, or ExtraLarge) where the Local Resource size is larger than the Lucene index, as Lucene segments are cached by default in Local Resource storage.
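The two writer-side recommendations above can be sketched as follows; AzureDirectory and analyzer construction are assumed to be as in the earlier example:

```csharp
// Sketch: configure the IndexWriter for Blob Storage-friendly indexing.
var indexWriter = new IndexWriter(azureDirectory, analyzer, true,
    IndexWriter.MaxFieldLength.UNLIMITED);

// Disable the compound file format so each segment file maps to its own
// blob, and small index changes do not rewrite one large blob.
indexWriter.SetUseCompoundFile(false);

// ... add documents ...

// Note: no call to indexWriter.Optimize() - optimizing merges all
// segments into one, forcing the entire index to be re-uploaded.
indexWriter.Close();
```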
Azure CloudDrive facts
- Only Fixed Size VHDs are supported.
- Volatile Local Resources can be used to cache VHD content.
- Based on Page Blobs (optimized for random read/write).
- Stores the entire VHD in one Page Blob and is therefore restricted to the Page Blob maximum size of 1 TByte.
- A role can mount up to 16 drives.
- A CloudDrive can only be mounted to a single VM instance at a time for read/write access.
- Snapshot CloudDrives are read-only and can be mounted as read-only drives by multiple different roles at the same time.
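For comparison, mounting a CloudDrive looks roughly like the sketch below, assuming the CloudDrive API from the Azure SDK; the blob path, drive size, and cache size are illustrative:

```csharp
// Sketch: create and mount a CloudDrive backed by a Page Blob.
var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
CloudDrive drive = account.CreateCloudDrive("drives/lucene.vhd");
drive.Create(1024);                                    // fixed-size 1 GB VHD

// Read/write mount - only one VM instance at a time can do this.
string drivePath = drive.Mount(25, DriveMountOptions.None);

// Read-only snapshots can be created and mounted by multiple instances.
Uri snapshotUri = drive.Snapshot();
```

This single-writer restriction is exactly what makes AzureDirectory the more flexible choice for Lucene indexes.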
Additional Azure references