Elasticsearch Architecture: 8 Key Components and Putting Them to Work

What Is Elasticsearch? 

Elasticsearch is a distributed, RESTful search and analytics engine designed for scalability and fast, flexible search. Built on top of Apache Lucene, it allows users to store, search, and analyze large volumes of data quickly and in near real time. Given its speed and ability to perform complex search queries, it is often used for data analysis and full-text search applications.

By using distributed computing, Elasticsearch ensures that data is spread across multiple nodes, providing fault tolerance and high availability. It also supports various data types and offers full-text search through an easy-to-use JSON-based REST API. These features make Elasticsearch suitable for use cases like log and event data analytics, search functionalities in applications, and business intelligence.

This is part of a series of articles about the ELK stack.

Elasticsearch Architecture and Components 

The Elasticsearch architecture includes the following components:

  1. Clusters: A cluster is a collection of one or more nodes that together hold all the data and provide federated indexing and search capabilities across nodes. Clusters replicate data and distribute load across nodes, providing high availability and fault tolerance. Each cluster is identified by a unique name, which nodes use to join it and coordinate work. Clusters can be scaled horizontally by adding more nodes. 
  2. Nodes: These are single instances of Elasticsearch that store data and participate in the cluster’s indexing and search capabilities. Each node is identified by a unique name and belongs to a cluster. Nodes can assume different roles such as master nodes, data nodes, or coordinating nodes.
  3. Ports: Each Elasticsearch node communicates over specific ports, the most common being TCP port 9200 for the REST API and 9300 for node-to-node communication. Proper configuration of these ports is critical for secure and efficient data transfer. Port 9200 serves the HTTP REST API, while port 9300 handles cluster coordination and data replication among nodes.
  4. Index/shards: An index in Elasticsearch is similar to a database in a traditional RDBMS; it stores documents and enables efficient search and retrieval. Each index can be subdivided into shards, each of which is a self-contained Lucene index. Sharding helps distribute data and query load, improving performance and ensuring scalability.
  5. Replicas: Replica shards are copies of primary shards that Elasticsearch uses to provide fault tolerance and high availability. By default, each primary shard has one replica, and a replica is never allocated to the same node as its primary. If a node fails, replica shards ensure that no data is lost and the cluster remains operational.
  6. Analyzers: These help process text data by breaking it down into tokens or terms suitable for indexing and searching. An analyzer consists of a tokenizer, optionally preceded by character filters and followed by token filters. The tokenizer divides text into terms, while filters can modify those terms, for example converting them to lowercase or removing stop words. Elasticsearch provides a variety of built-in analyzers, and users can create custom analyzers. 
  7. Documents: These are the basic units of information that can be indexed. They are JSON objects and consist of fields, where each field has a specific data type. Documents are indexed within an index and can be queried using the Elasticsearch query DSL. They are schema-less, meaning the structure can vary from document to document, providing flexibility.
  8. REST API: This feature of Elasticsearch allows users to interact with the cluster through HTTP requests. Operations such as indexing, searching, updating, and deleting documents can be performed easily via the REST API. It supports CRUD operations and offers extensive querying capabilities using JSON; a minimal example follows this list.
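
To make these components concrete, here is a minimal sketch using Python's requests library. It assumes a local, unsecured node at http://localhost:9200 (recent Elasticsearch versions enable TLS and authentication by default, so you may need to add credentials), and the index and field names are made up for illustration.

```python
import requests

ES = "http://localhost:9200"  # assumed local node; adjust host and credentials as needed

# Index a JSON document; the "app-logs" index is created on the fly with dynamic mappings
doc = {
    "service": "checkout",
    "level": "error",
    "message": "payment timeout",
    "@timestamp": "2024-01-01T12:00:00Z",
}
print(requests.post(f"{ES}/app-logs/_doc?refresh=true", json=doc).json())

# Full-text search over the new index using the Query DSL
query = {"query": {"match": {"message": "timeout"}}}
print(requests.post(f"{ES}/app-logs/_search", json=query).json()["hits"]["total"])

# See how the index was split into primary and replica shards across nodes
print(requests.get(f"{ES}/_cat/shards/app-logs?v").text)
```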

Elasticsearch Architecture in Action: How Elasticsearch Works

Indices, Documents, and Fields

In Elasticsearch, an index is a collection of documents that are logically related to each other. An index is akin to a database in a traditional RDBMS, and it is created for storing and managing documents. Each document within an index is a JSON object, consisting of fields that hold data. A field is the smallest data unit in Elasticsearch, with each field being associated with a specific data type such as text, number, date, or geo-point.

Elasticsearch allows schema flexibility, meaning documents in the same index can have different structures. However, for optimal performance and querying, it is often useful to define mappings, which specify the data types of fields and how they should be indexed. Indices are subdivided into shards for better distribution of data across nodes, ensuring scalability and fault tolerance.
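
For instance, an explicit mapping can be declared when an index is created, instead of relying on dynamic mapping. The sketch below again uses Python requests against an assumed local, unsecured cluster; the index name, shard counts, and fields are placeholders.

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

index_body = {
    "settings": {"number_of_shards": 3, "number_of_replicas": 1},
    "mappings": {
        "properties": {
            "message":    {"type": "text"},     # analyzed for full-text search
            "service":    {"type": "keyword"},  # exact matching, sorting, aggregations
            "@timestamp": {"type": "date"},
            "latency_ms": {"type": "integer"},
        }
    },
}
print(requests.put(f"{ES}/orders-logs", json=index_body).json())

# Confirm the mapping that was stored for the index
print(requests.get(f"{ES}/orders-logs/_mapping").json())
```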

Search and Analysis

Elasticsearch’s search functionality is powered by its ability to perform full-text searches, where queries are executed over indexed documents. The primary mechanism for executing searches is via the Query DSL (Domain Specific Language), a powerful, JSON-based language that enables complex queries combining full-text searches, filtering, and aggregation.

In addition to search, Elasticsearch is useful for analyzing data. It allows users to break down text into tokens using analyzers, enabling efficient indexing and retrieval of data. Elasticsearch supports various query types like match, term, and range queries, and allows result ranking based on relevance. Aggregations enable the system to perform statistical and analytical operations on large datasets.
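
As an illustration, the Query DSL request below combines a full-text match query with a terms aggregation in a single call. It is only a sketch: the local endpoint, the app-logs index, and its fields are the same assumptions used in the earlier examples.

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

body = {
    "size": 3,                                   # return the top 3 matching documents
    "query": {"match": {"message": "timeout"}},  # relevance-ranked full-text query
    "aggs": {                                    # statistics computed over all matches
        "hits_by_service": {"terms": {"field": "service.keyword", "size": 10}}
    },
}
resp = requests.post(f"{ES}/app-logs/_search", json=body).json()
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["message"])
print(resp["aggregations"]["hits_by_service"]["buckets"])
```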

Scalability and Resilience

Elasticsearch’s distributed architecture allows horizontal scaling by adding more nodes to the cluster. Data is distributed across primary and replica shards, enabling high availability. As new nodes are added to the cluster, the system rebalances shards. In case of node failure, the system automatically reallocates shards to ensure continued data access and prevent downtime.

To ensure resilience, Elasticsearch supports replica shards, which are copies of primary shards. If a node containing a primary shard fails, a replica shard can quickly take over to prevent data loss and downtime. When a node joins or leaves the cluster, the system automatically redistributes shards to maintain a balanced and healthy state.
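
A quick way to observe this behavior is to change an index's replica count and check cluster health, as in the sketch below (same assumed local cluster and example index; on a single-node cluster the extra replicas simply remain unassigned and health stays yellow).

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

# Raise the replica count; new copies are allocated to other nodes when they are available
requests.put(f"{ES}/app-logs/_settings", json={"index": {"number_of_replicas": 2}})

# Cluster health: green = all shards allocated, yellow = replicas unassigned, red = a primary is missing
health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"], health["unassigned_shards"])
```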

Chen Harel
VP Product, Coralogix

Tips from the expert

In my experience, here are tips that can help you better optimize your Elasticsearch architecture and operations:

  1. Use index templates for consistency: Create index templates to enforce consistent settings and mappings across indices, ensuring optimal performance and avoiding potential issues with dynamic mapping (see the sketch after these tips).
  2. Implement cross-cluster search for scalability: Use cross-cluster search to allow querying across multiple Elasticsearch clusters. This enhances scalability and can help manage large datasets spread across different clusters.
  3. Optimize garbage collection settings: Fine-tune JVM garbage collection settings to minimize latency and pauses. G1GC (Garbage-First Garbage Collector) is recommended for Elasticsearch to handle large heap sizes efficiently.
  4. Ensure proper field data types: Carefully choose the appropriate data types for fields to optimize storage and search performance. Avoid using text fields for aggregations and sorting; prefer keyword fields instead.
  5. Implement security best practices: Apply security configurations such as enabling HTTPS, configuring role-based access control (RBAC), and integrating with existing authentication systems (e.g., LDAP, OAuth) to protect data and maintain compliance.
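
For the first tip, a composable index template (available since Elasticsearch 7.8) can pin down settings and mappings for every index whose name matches a pattern. The sketch below is illustrative only, using the same assumed local cluster and made-up template, pattern, and field names.

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

template = {
    "index_patterns": ["app-logs-*"],  # applied to every new index matching this pattern
    "template": {
        "settings": {"number_of_shards": 1, "number_of_replicas": 1},
        "mappings": {
            "properties": {
                "message":    {"type": "text"},
                "service":    {"type": "keyword"},  # keyword for aggregations and sorting (tip 4)
                "@timestamp": {"type": "date"},
            }
        },
    },
}
print(requests.put(f"{ES}/_index_template/app-logs-template", json=template).json())
```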

Best Practices for Managing Elasticsearch Clusters 

Here are some of the ways that organizations can manage their clusters and ensure the best use of Elasticsearch.

Cluster Planning and Sizing

Cluster planning starts with estimating data volume, query load, and growth trends, which determine the number of nodes, shards, and replicas needed. Capacity planning ensures resources are used efficiently and that the cluster can handle peak loads without performance degradation.

Sizing the cluster involves balancing the trade-offs between performance, redundancy, and cost. Over-provisioning may lead to unnecessary expenses, while under-provisioning can cause system failures. Regular monitoring and scaling based on workload trends are crucial for maintaining an optimal Elasticsearch environment.
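
The _cat APIs give a quick, human-readable view of how full the cluster actually is, which feeds directly into these sizing decisions. A small sketch against the same assumed local, unsecured cluster:

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

# Disk usage and shard counts per node, for spotting over- or under-provisioning
print(requests.get(f"{ES}/_cat/allocation?v").text)

# Per-index document counts and on-disk size, for tracking growth trends
print(requests.get(f"{ES}/_cat/indices?v").text)
```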

Index Lifecycle Management

Index lifecycle management (ILM) automates the management of indices according to their lifecycle stages, such as hot, warm, and cold. Implementing ILM policies helps manage storage costs and improve performance by optimizing data retention and movement. Hot indices contain frequently accessed data, while warm and cold indices hold less frequently accessed data.

By defining ILM policies, users can automate index transitions and ensure performant queries. ILM assists in archiving old data, thus freeing up resources for active indices. Regular review of ILM policies ensures they align with evolving data and query patterns, improving resource utilization.
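
As a sketch, an ILM policy with hot, warm, and delete phases might look like the following (assumed local cluster; the rollover age and retention periods are placeholders, and rollover additionally requires the index to be written through an alias or data stream).

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {"rollover": {"max_age": "7d"}}  # start a fresh index every week
            },
            "warm": {
                "min_age": "30d",
                "actions": {"set_priority": {"priority": 50}}  # lower recovery priority for older data
            },
            "delete": {
                "min_age": "90d",
                "actions": {"delete": {}}  # drop indices once they pass the retention window
            },
        }
    }
}
print(requests.put(f"{ES}/_ilm/policy/app-logs-policy", json=policy).json())
```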

Bulk Indexing

Bulk indexing is a technique for indexing large volumes of data efficiently. Elasticsearch provides a bulk API that allows multiple documents to be indexed, updated, or deleted in a single request. This reduces overhead and improves throughput compared to processing documents individually.

Effective bulk indexing involves optimizing batch sizes and carefully handling errors to prevent data loss. Tuning bulk request parameters and monitoring ingestion performance help achieve high indexing throughput, which is especially useful when managing large data sets.
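
A minimal sketch of the bulk API: the body is newline-delimited JSON that alternates an action line with a document line and must be sent with the application/x-ndjson content type (same assumed local cluster, and the documents are made up).

```python
import json
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

docs = [
    {"service": "checkout", "level": "error", "message": "payment timeout"},
    {"service": "search",   "level": "info",  "message": "reindex finished"},
]

# Build the NDJSON body: one action line, then one source line, per document
lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "app-logs"}}))
    lines.append(json.dumps(doc))
body = "\n".join(lines) + "\n"  # the bulk body must end with a newline

resp = requests.post(
    f"{ES}/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
).json()
# "errors" is True if any individual operation failed; inspect resp["items"] for details
print(resp["errors"], len(resp["items"]))
```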

Query Optimization

Query optimization is essential for maximizing Elasticsearch performance. Crafting efficient queries reduces response times and resource utilization. Techniques include using filters, aggregations, and scoring functions appropriately while avoiding non-selective queries.

Caching is useful to store the results of frequent queries, reducing computation time. Analyzing query performance metrics and adjusting indices and mappings can further optimize search performance. Continuous monitoring and refinement of queries ensure they remain efficient as data and usage patterns change.
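
One common optimization is moving exact-match and range clauses out of the scoring must context and into the filter context, where they do not affect relevance scores and their results can be cached. A sketch against the same assumed index:

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured node

body = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "timeout"}}],  # only this clause is scored
            "filter": [                                   # cacheable, non-scoring clauses
                {"term": {"service.keyword": "checkout"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ],
        }
    },
    "_source": ["message", "service"],  # return only the fields that are needed
    "size": 10,
}
print(requests.post(f"{ES}/app-logs/_search", json=body).json()["took"], "ms")
```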

Virtual Memory Adjustment

Adjusting virtual memory settings, such as the vm.max_map_count parameter, helps ensure optimal Elasticsearch performance on Linux systems. This parameter controls the maximum number of memory map areas a process can use, and Elasticsearch expects it to be at least 262144 in production. If the value is too low, nodes can fail their bootstrap checks or crash, degrading performance and availability.

Regularly reviewing and fine-tuning virtual memory settings helps align system resources with Elasticsearch’s requirements. Properly configured virtual memory reduces latency and enhances stability, ensuring Elasticsearch can handle large-scale operations.
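
The value is typically raised with sysctl or in /etc/sysctl.conf. As a small illustrative sketch, it can also be read from a monitoring or health-check script on Linux:

```python
from pathlib import Path

REQUIRED = 262144  # minimum value Elasticsearch's bootstrap checks expect in production

current = int(Path("/proc/sys/vm/max_map_count").read_text())
if current < REQUIRED:
    print(f"vm.max_map_count is {current}; raise it, e.g. sysctl -w vm.max_map_count={REQUIRED}")
else:
    print(f"vm.max_map_count is {current}; OK")
```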

Full-stack, affordable Observability with Coralogix

Coralogix sets itself apart in observability with its modern architecture, enabling real-time insights into logs, metrics, and traces with built-in cost optimization. Coralogix's straightforward pricing covers all its platform offerings, including APM, RUM, SIEM, Infrastructure Monitoring, and much more. With unparalleled support featuring less than 1-minute response times and 1-hour resolution times, Coralogix is a leading choice for thousands of organizations across the globe.

Learn more about how companies like Claroty and Bharatpe simplified their Observability practice by migrating from Elastic to Coralogix.
