RafaelJan

Reputation: 3608

elasticsearch - moving from multiple servers to one server

I have a cluster of 5 servers for elasticsearch, all with the same version of elasticsearch.

I need to move all data from servers 2, 3, 4, 5 to server 1.

How can I do it?

How can I know which server has data at all?

After changing _cluster/settings with:

PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.require._host" : "server1"
  }
}

I get the following for curl -XGET "http://localhost:9200/_cat/allocation?v":

shards disk.indices disk.used disk.avail disk.total disk.percent host    ip      node
     6       54.5gb   170.1gb      1.9tb      2.1tb            7 *.*.*.* *.*.*.* node-5
     6       50.4gb   167.4gb      1.9tb      2.1tb            7 *.*.*.* *.*.*.* node-3
     6       22.6gb   139.8gb        2tb      2.1tb            6 *.*.*.* *.*.*.* node-2
     6       49.8gb   166.6gb      1.9tb      2.1tb            7 *.*.*.* *.*.*.* node-4
     6       54.8gb   172.1gb      1.9tb      2.1tb            7 *.*.*.* *.*.*.* node-1

and the following for GET _cluster/settings?include_defaults:

#! Deprecation: [node.max_local_storage_nodes] setting was deprecated in Elasticsearch and will be removed in a future release!
{
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "require" : {
            "_host" : "server1"
          }
        }
      }
    }
  },
  "transient" : { },
  "defaults" : {
    "cluster" : {
      "max_voting_config_exclusions" : "10",
      "auto_shrink_voting_configuration" : "true",
      "election" : {
        "duration" : "500ms",
        "initial_timeout" : "100ms",
        "max_timeout" : "10s",
        "back_off_time" : "100ms",
        "strategy" : "supports_voting_only"
      },
      "no_master_block" : "write",
      "persistent_tasks" : {
        "allocation" : {
          "enable" : "all",
          "recheck_interval" : "30s"
        }
      },
      "blocks" : {
        "read_only_allow_delete" : "false",
        "read_only" : "false"
      },
      "remote" : {
        "node" : {
          "attr" : ""
        },
        "initial_connect_timeout" : "30s",
        "connect" : "true",
        "connections_per_cluster" : "3"
      },
      "follower_lag" : {
        "timeout" : "90000ms"
      },
      "routing" : {
        "use_adaptive_replica_selection" : "true",
        "rebalance" : {
          "enable" : "all"
        },
        "allocation" : {
          "node_concurrent_incoming_recoveries" : "2",
          "node_initial_primaries_recoveries" : "4",
          "same_shard" : {
            "host" : "false"
          },
          "total_shards_per_node" : "-1",
          "shard_state" : {
            "reroute" : {
              "priority" : "NORMAL"
            }
          },
          "type" : "balanced",
          "disk" : {
            "threshold_enabled" : "true",
            "watermark" : {
              "low" : "85%",
              "flood_stage" : "95%",
              "high" : "90%"
            },
            "include_relocations" : "true",
            "reroute_interval" : "60s"
          },
          "awareness" : {
            "attributes" : [ ]
          },
          "balance" : {
            "index" : "0.55",
            "threshold" : "1.0",
            "shard" : "0.45"
          },
          "enable" : "all",
          "node_concurrent_outgoing_recoveries" : "2",
          "allow_rebalance" : "indices_all_active",
          "cluster_concurrent_rebalance" : "2",
          "node_concurrent_recoveries" : "2"
        }
      },
     ...
      "nodes" : {
        "reconnect_interval" : "10s"
      },
      "service" : {
        "slow_master_task_logging_threshold" : "10s",
        "slow_task_logging_threshold" : "30s"
      },
      ...
      "name" : "cluster01",
      ...
      "max_shards_per_node" : "1000",
      "initial_master_nodes" : [ ],
      "info" : {
        "update" : {
          "interval" : "30s",
          "timeout" : "15s"
        }
      }
    },
...

Upvotes: 2

Views: 652

Answers (1)

Val

Reputation: 217254

You can use shard allocation filtering to move all your data to server 1.

Simply run this:

PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.require._name" : "node-1",
    "cluster.routing.allocation.exclude._name" : "node-2,node-3,node-4,node-5"
  }
}

Instead of _name you can also use _ip or _host, depending on what is more practical for you.
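For example, the same filtering by IP would look like this (the addresses below are placeholders, substitute your actual node IPs):

PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.require._ip" : "10.0.0.1",
    "cluster.routing.allocation.exclude._ip" : "10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5"
  }
}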

After running this command, all primary shards will migrate to server1 (the replicas will be unassigned). You just need to make sure that server1 has enough storage space to store all the primary shards.
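While the shards relocate, you can watch the progress with the standard _cat APIs, e.g.:

GET _cat/recovery?v&active_only=true

GET _cat/shards?v

The first shows only the recoveries currently in flight; the second lists every shard with the node it is assigned to, so you can see when nothing is left on nodes 2-5.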

If you want to get rid of the unassigned replicas (and get back to green state), simply run this:

PUT _all/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
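Once the cluster is green and you have shut down nodes 2 to 5, you may also want to clear the allocation filters again; setting a persistent setting to null removes it:

PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.require._name" : null,
    "cluster.routing.allocation.exclude._name" : null
  }
}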

Upvotes: 3
