George

Reputation: 1293

How can I set up multiple Elasticsearch instances on one server with different data?

We use Elasticsearch and Kibana at my company. I want to create a second Elasticsearch instance running on the same server but in a different JVM - let's call them A and B. I would like A to have an index called other_logs and B to have an index called batch. I want to be able to search both of them via a single Kibana instance and set up dashboards that can read either index on either JVM. Data written to A should not be written to B and vice versa.

The reason is we have some batch jobs which depend on ES, and ES has been a bit unstable, causing batch job failures. The batch reads/writes very little data to ES, but the rest of the app writes a ton of logs and is causing the instability. If we can't read logs it's a minor issue, but if the batch fails it's a major issue. Hence, as a short-term fix while we look at the ES instability, I would like to move the batch dependencies to a new JVM (ES instance B), which should be small and more stable.

I assume I need the second ES instance to run with a different cluster name, otherwise the data will get replicated. When testing this I am seeing a few exceptions, so I'm not sure if I'm going in the right direction. I'm looking at "cross-cluster search", which looks like it might allow me to keep one Kibana and search both clusters, but I have zero experience with ES or Kibana and not much time to research this.

Any suggestions on how I can accomplish the configuration? Am I on the right path?

Upvotes: 4

Views: 2850

Answers (1)

George

Reputation: 1293

I think I proved everything out, at least on my own local test machine. Essentially what I did was create a second cluster that can run on the same machine with independent configuration files. By changing the config folder I am also able to set independent jvm.options, since I want less memory for the new cluster. Once that was working, I configured the single Kibana instance to know about the new cluster and then created an index pattern so I could search it. Cross-cluster search is discussed here, and you can refer to the new 'remote' cluster directly in searches: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/modules-remote-clusters.html

Port 9300 is the default port the nodes in a cluster use to talk to each other, so I changed the new cluster to use 9301. With the default it was scanning 9300 first, throwing an exception, and then scanning 9301. So it was working without hardcoding 9301, but I don't like seeing the exceptions in the logs and I wanted to control which port is used.

For posterity's sake, here are the details:

1). Copied the config folder under Elasticsearch to a new folder configB and edited elasticsearch.yml (the full file is sketched after this list):

  • cluster.name: ClusterB
  • path.data: dataB
  • path.logs: logsB
  • http.port: 9201
  • transport.port: 9301
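
Putting those together, configB/elasticsearch.yml ends up looking roughly like this (only the overridden settings are shown; everything else stays as copied from the original config):

cluster.name: ClusterB     # different name so this node does not join the original cluster
path.data: dataB           # keep B's data separate from instance A
path.logs: logsB           # keep B's logs separate from instance A
http.port: 9201            # REST port for B (A stays on the default 9200)
transport.port: 9301       # node-to-node port for B (A stays on the default 9300)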

2). Since I was testing on Windows, I copied elasticsearch.bat to elasticsearchB.bat and added this at the top (Linux has a different method for passing the config directory; see the sketch after this snippet). This allows the new batch file to use its own config directory while all other folders for ES remain the same (so upgrading ES will upgrade both instances):

  • SET ES_PATH_CONF=..\configB
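
On Linux there is no need for a second start script; the same config directory override can be passed as an environment variable when starting the second instance (a sketch - the install path is an assumption):

  • ES_PATH_CONF=/path/to/elasticsearch/configB ./bin/elasticsearch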

3). Started both instances of Elasticsearch with elasticsearch.bat and elasticsearchB.bat

4). Started a single instance of Kibana, which points at 9200 by default
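
Nothing needed changing here since Kibana already talks to the original cluster on the default port, but if it ever needs to point somewhere else, that lives in kibana.yml (a sketch; older 6.x versions of Kibana use elasticsearch.url, newer versions use elasticsearch.hosts):

  • elasticsearch.url: "http://localhost:9200"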

5). In Kibana, modified the cluster settings by running this in the dev tools console:

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "CluserB": {
          "seeds": [
            "127.0.0.1:9301"
          ]
        }
      }
    }
  }
}
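
To confirm the remote was registered and reachable, the connection can be checked from the same dev tools console; the response should list ClusterB with its seed and a connected flag:

GET _remote/info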

6). Created data in the new ES cluster (PUT /batch/_doc/1 {...} )
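
Since the Kibana dev tools console talks to the original cluster on 9200, this write has to go directly against the new instance's HTTP port (9201), for example with curl (the document fields here are made up purely for illustration):

curl -X PUT "http://localhost:9201/batch/_doc/1" -H "Content-Type: application/json" -d'
{
  "job_name": "nightly-import",
  "status": "ok"
}'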

7). In Kibana, created a new index pattern that refers to the remote cluster and index like this: ClusterB:batch
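
The same ClusterB:index syntax also works for ad-hoc searches in the dev tools console, which is a quick way to confirm the remote index is reachable before building dashboards:

GET ClusterB:batch/_search
{
  "query": { "match_all": {} }
}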

8). Created a dashboard using the new remote index pattern

Upvotes: 1
