# OpenSearch Multi-Node Cluster Setup Guide

Last updated: July 3, 2025


This guide outlines the step-by-step process for safely adding and configuring additional nodes to an existing OpenSearch cluster.

---

## 1. Take the Latest OpenSearch Snapshot

- Log in to the OpenSearch Dashboard.

- Verify that snapshot repositories are configured.

- Check the timestamp of the latest snapshot to confirm recent backup.

- Manually take a snapshot before initiating the multi-node setup activity to ensure the latest data is available.
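
A manual snapshot can be triggered from Dev Tools. The repository name `my-backup-repo` and the snapshot name below are placeholders; use the repository identified in the earlier steps:

```http

PUT _snapshot/my-backup-repo/pre-multinode-setup?wait_for_completion=true

```

> Using `wait_for_completion=true` blocks the request until the snapshot finishes, so you get immediate confirmation before proceeding.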

---

## 2. Backup OpenSearch Templates

- Go to Dev Tools and run:

```http

GET _cat/templates

```

- Check for any customer-created templates.

- Then run:

```http

GET _template

```

- Copy the entire output and save it locally (Ctrl + A → Ctrl + C).

> You can use this backup with a PUT request if templates need to be restored.
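
For example, to restore a single template, take its body from the saved `GET _template` output and PUT it back under its original name. The template name and body below are illustrative placeholders only:

```http

PUT _template/my-custom-template
{
  "index_patterns": ["my-index-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}

```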

---

## 3. Launch New Nodes

- Create a new EC2 instance of the same size as the existing diagnostic node:

- Type: t3a.large

> Note: Depending on the customer's index size and retention period, you can update the instance type later in another maintenance window.

- AZ: Same as the existing diagnostic node.

- Repeat to launch a second instance.

> Note: Use a consistent naming convention for the new nodes, such as `<tenant-name>-oc-diagnostics-1` and `<tenant-name>-oc-diagnostics-2`.

---

## 4. Pre-Configuration Check

- Ensure both new nodes show connected status before proceeding.

---

## 5. Prepare Directory Structure

Run on both new nodes:

```bash

mkdir -p /data/es

chmod 777 /data/es

```

Also set the kernel parameters on all nodes:

```bash

echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf

sudo sysctl -p

```

> Note: The vm.max_map_count setting controls how many memory-mapped areas a process can use, which OpenSearch needs for managing large indices. Setting it to 262144 ensures stability, and applying it via sysctl makes the change immediate and persistent.
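
To confirm the setting took effect, you can compare the live kernel value against the required minimum (a small sketch; the paths are standard on Linux):

```shell
# Verify the live value matches what /etc/sysctl.conf now persists.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
    echo "vm.max_map_count OK ($current)"
else
    echo "vm.max_map_count too low ($current < $required)" >&2
fi
```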

---

## 6. Validate Hosts and Directories

- Check /etc/hosts:

```bash

cat /etc/hosts

```

- Verify the presence of a loopback entry (`127.0.0.1` or `::1`).

- Confirm /data/es exists.
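
Both checks can be combined into a short pre-flight snippet (a sketch; adjust the paths if your layout differs):

```shell
# A loopback entry must be present for local binding to work.
if grep -Eq '^[[:space:]]*(127\.0\.0\.1|::1)[[:space:]]' /etc/hosts; then
    echo "loopback entry present"
else
    echo "loopback entry MISSING" >&2
fi

# Data directory created in the previous step.
[ -d /data/es ] && echo "/data/es exists" || echo "/data/es missing" >&2
```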

---

## 7. Verify Cluster Name

- Run:

```http

GET /

```

- Confirm that `cluster.name` is correct.
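
The `GET /` response includes the cluster name at the top level; a trimmed example of the fields to check (the values shown are illustrative):

```json

{
  "name": "node-1",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "abc123"
}

```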

---

## 8. Update Elasticsearch Service Configuration

Modify the Elasticsearch service environment variables:

```json

{

"plugins.security.disabled": "true",

"compatibility.override_main_response_version": "true",

"OPENSEARCH_JAVA_OPTS": "-Xmx4g -Xms4g",

"http.max_content_length": "200mb",

"cluster.name": "docker-cluster",

"discovery.seed_hosts": "<IP_1>,<IP_2>,<IP_3>",

"cluster.initial_cluster_manager_nodes": "<IP_1>,<IP_2>,<IP_3>"

}

```

> Replace `<IP_1>`, `<IP_2>`, `<IP_3>` with actual node IPs.

> discovery.seed_hosts can include all worker nodes; initial_cluster_manager_nodes should include only the 3 master-eligible nodes.

Also, update the replica count of the Elasticsearch service to 3.

> Note: Validate the ElasticHost URLs configured in Filebeat and Kibana to ensure connectivity is not broken.
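
As a quick connectivity sketch, each endpoint can be probed over TCP before restarting anything. The IPs below are placeholders for your three node IPs; port 9200 is the standard OpenSearch HTTP port:

```shell
# Probe each node's OpenSearch HTTP port using bash's /dev/tcp.
for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do    # hypothetical IPs -- replace with yours
    if timeout 2 bash -c "cat < /dev/null > /dev/tcp/${ip}/9200" 2>/dev/null; then
        echo "${ip}:9200 reachable"
    else
        echo "${ip}:9200 NOT reachable" >&2
    fi
done
```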

---

## 9. Confirm Node Connectivity

In Dev Tools, run:

```http

GET /

GET _cat/nodes?v

```

- Ensure new node IPs are listed.

- If not, restart Elasticsearch pods.

---

## 10. Clear Data on New Nodes

Run only on the two new nodes:

```bash

rm -rvf /data/es/*

```
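
Because this delete is destructive, a simple guard can reduce the risk of running it on an existing data node. The hostname patterns below assume the naming convention from Step 3 (a sketch, not a requirement):

```shell
# Only wipe the data directory if this host matches the new-node naming convention.
case "$(hostname)" in
    *-oc-diagnostics-1|*-oc-diagnostics-2)
        rm -rvf /data/es/*
        ;;
    *)
        echo "Refusing to wipe: $(hostname) is not a new diagnostics node" >&2
        ;;
esac
```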

---

## 11. Restart Elasticsearch Pod (if needed)

If new IPs are still missing, restart the pod:

```bash

docker restart <elasticsearch-container-name>

```

---

## 12. Final Health Check

Verify cluster health:

```http

GET _cat/health?v

```
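
A healthy three-node cluster reports a `green` status with `node.total` equal to 3. An illustrative (trimmed) response, with example values:

```

epoch      timestamp cluster        status node.total node.data shards pri relo init unassign
1720000000 09:30:00  docker-cluster green  3          3         24     12  0    0    0

```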

---

## Done!

Your OpenSearch multi-node cluster is now configured. Monitor node status and cluster logs to ensure everything remains healthy and stable.

---

# Steps for ES-Exporter Multi-Node Setup

---

1. ES-Exporter pods are deployed to push metrics related to Elasticsearch.

2. Since the multi-node OpenSearch setup is now in place, setting up a multi-node ES-Exporter lets us gather metrics from all OpenSearch pod instances.

3. To enable the multi-node ES-Exporter setup, simply update the replica count to 3 in the ES-Exporter service.

4. This will spin up 3 instances of ES-Exporter, each running on a different instance alongside the OpenSearch pods.