How to Build A Search Page With ElasticSearch and .Net

Part I

Elasticsearch achieves fast search responses because, rather than searching the text directly, it searches an index.

It is very much like finding the pages in a book that mention a word by scanning the index at the back of the book, instead of reading every word on every page.

This type of index is known as the inverted index because it inverts a page-centric data structure (page->words) to a keyword-centric data structure (word->pages).
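The inversion itself is straightforward. Here is a minimal C# sketch of the idea (purely illustrative; this is not how Lucene actually stores its index) that turns a page-centric structure into a word-centric one:

```csharp
using System;
using System.Collections.Generic;

public class InvertedIndexDemo
{
    // Invert a page-centric structure (page -> words)
    // into a keyword-centric one (word -> pages).
    public static Dictionary<string, List<int>> BuildIndex(
        Dictionary<int, string[]> pages)
    {
        var index = new Dictionary<string, List<int>>();
        foreach (var page in pages)
            foreach (var word in page.Value)
            {
                List<int> pageList;
                if (!index.TryGetValue(word, out pageList))
                {
                    pageList = new List<int>();
                    index[word] = pageList;
                }
                pageList.Add(page.Key);
            }
        return index;
    }

    public static void Main()
    {
        var pages = new Dictionary<int, string[]>
        {
            [1] = new[] { "search", "engine" },
            [2] = new[] { "inverted", "index" },
            [3] = new[] { "search", "index" }
        };
        var index = BuildIndex(pages);
        // Looking up a word is now a single dictionary access
        // instead of a scan over every page.
        var result = index["search"];
        result.Sort();
        Console.WriteLine(string.Join(",", result)); // 1,3
    }
}
```

With the index built, finding every page that mentions "search" costs one lookup, which is the essence of why index-based search is fast.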

Elasticsearch uses Apache Lucene to create and manage this inverted index.
SQL Server’s Full-Text Search, on the other hand, is good for searching text that lives inside a database, especially if the content is less well structured or comes from a wide variety of sources or formats.

Many .NET developers might ask why they would need another search engine when they are happy with SQL Server’s Full-Text Search feature. The answer is that Full-Text Search may be enough for very simple searches, but a better choice is needed when we have to index and search unstructured data from different sources.

Elasticsearch, currently the most popular search engine, is an open source search engine created in Java and based on Lucene. It offers greater scalability than SQL Server’s full-text search.


To interact with Elasticsearch, we use NEST 2.3.0, one of the two official .NET clients for Elasticsearch. NEST is a high-level client that maps closely to the Elasticsearch API: all the request and response objects are mapped. For creating queries, NEST offers two alternatives: a fluent lambda syntax, which resembles the structure of the raw JSON requests to the API, and an object initializer syntax.
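To make the difference concrete, here is a sketch of the same match query written in both styles. It assumes NEST 2.x, an existing `client`, and the `Post` class defined later in this article; the query value is illustrative:

```csharp
// Fluent lambda syntax - resembles the structure of the raw JSON request.
var fluentResponse = client.Search<Post>(s => s
    .Query(q => q
        .Match(m => m
            .Field(p => p.Title)
            .Query("elasticsearch"))));

// Object initializer syntax - the same request built from plain objects.
var request = new SearchRequest<Post>
{
    Query = new MatchQuery
    {
        Field = Infer.Field<Post>(p => p.Title),
        Query = "elasticsearch"
    }
};
var initializerResponse = client.Search<Post>(request);
```

Both calls send the same JSON to Elasticsearch; which style to use is largely a matter of taste, though the object initializer syntax can be easier to build up conditionally.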

To build the web page, we use a Single Page Application (SPA) approach with AngularJS as the MVVM framework. The client side sends AJAX requests to ASP.NET Web API 2, and the Web API 2 controller uses NEST to communicate with Elasticsearch.

Code snippets in this article will show the service implementation only.

Installation of Elasticsearch

Installation takes three simple steps: visit the Elasticsearch web page, download the archive, and unzip and run it.
Elasticsearch exposes an HTTP API, so it is easy to use cURL to make requests, but it is recommended to use Sense, a Chrome extension. The Elasticsearch reference contains samples in cURL format; e.g., the request to get high-level statistics for all the indices looks like this:

curl localhost:9200/_stats
but Sense provides a handy copy-and-paste feature that converts cURL requests to the proper Sense syntax:
GET /_stats
Search index population
Elasticsearch is document-oriented: it stores entire documents in its index. First of all, we need to create a client to communicate with Elasticsearch.
var node = new Uri("http://localhost:9200");

var settings = new ConnectionSettings(node);
var client = new ElasticClient(settings);

Next, let’s create a class representing our document.
public class Post
{
    public string Id { get; set; }

    public DateTime? CreationDate { get; set; }

    public int? Score { get; set; }

    public int? AnswerCount { get; set; }

    public string Body { get; set; }

    public string Title { get; set; }

    [String(Index = FieldIndexOption.NotAnalyzed)]
    public IEnumerable<string> Tags { get; set; }

    public IEnumerable<string> Suggest { get; set; }
}

Although Elasticsearch is able to infer the document type and its fields at index time, the default field mappings can be overridden with attributes on the fields for more advanced usage.
In this example, the POCO class is decorated with attributes, so we need to create the mappings with AutoMap.

var indexDescriptor = new CreateIndexDescriptor("stackoverflow")
    .Mappings(ms => ms
        .Map<Post>(m => m.AutoMap()));

Then, create the index called "stackoverflow" and put the mappings in place.
client.CreateIndex("stackoverflow", i => indexDescriptor);

After defining our mappings and creating the index, we can seed it with documents. Elasticsearch does not provide an importer for specific file formats such as XML or CSV, but because it has client libraries for different languages, it is easy to build your own importer. Since the Stack Overflow dump is in XML format, we can use the .NET XmlReader class to read the question rows, map each one to an instance of Post, and add the objects to a collection.
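A minimal importer along those lines might look like this. It is a sketch that assumes the Post class above and the dump's layout of `row` elements with `Id`, `PostTypeId`, `CreationDate`, `Title`, and `Body` attributes:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;

public static class PostImporter
{
    // Reads question rows (PostTypeId == 1) from a Posts.xml dump
    // and maps each one to a Post instance.
    public static List<Post> ReadPosts(TextReader input)
    {
        var posts = new List<Post>();
        using (var reader = XmlReader.Create(input))
        {
            while (reader.ReadToFollowing("row"))
            {
                if (reader.GetAttribute("PostTypeId") != "1")
                    continue; // skip answers and other post types

                posts.Add(new Post
                {
                    Id = reader.GetAttribute("Id"),
                    Title = reader.GetAttribute("Title"),
                    Body = reader.GetAttribute("Body"),
                    CreationDate = DateTime.Parse(reader.GetAttribute("CreationDate"))
                });
            }
        }
        return posts;
    }
}
```

The resulting collection can then be pushed to the index in bulk, for example with NEST's `client.IndexMany(posts)`.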

With this we conclude for today; stay connected with us for further discussion on this topic in our next article.

If you want to learn ASP.NET and perfect your .NET skills, then CRB Tech Solutions would be of great help and support for you. Join us with our updated program in our ASP.Net course.

Stay tuned to CRB Tech reviews for more technical optimization and other resources.

Related Articles:

.Net programming concepts explained in detail