Reputation: 341
I added 12 new data nodes to an existing cluster of 8 data nodes. I am trying to shut down the previous 8 nodes using the "exclude allocation" approach, as recommended:
curl -XPUT localhost:9200/_cluster/settings -d '{ "transient" : { "cluster.routing.allocation.exclude._ip" : "10.0.0.1" } }'
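The setting accepts a comma-separated list of IPs, so all eight old nodes can be excluded in one call (the IPs below are placeholders):

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "10.0.0.1,10.0.0.2,10.0.0.3"
  }
}'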
It wasn't relocating any shards, so I ran the reroute command with the 'explain' option. Can someone explain what the following output means?
> "explanations" : [ {
> "command" : "move",
> "parameters" : {
> "index" : "2015-09-20",
> "shard" : 0,
> "from_node" : "_dDn1SmqSquhMGgjti6vGg",
> "to_node" : "OQBFMt17RaWboOzMnUy2jA"
> },
> "decisions" : [ {
> "decider" : "same_shard",
> "decision" : "YES",
> "explanation" : "shard is not allocated to same node or host"
> }, {
> "decider" : "filter",
> "decision" : "YES",
> "explanation" : "node passes include/exclude/require filters"
> }, {
> "decider" : "replica_after_primary_active",
> "decision" : "YES",
> "explanation" : "shard is primary"
> }, {
> "decider" : "throttling",
> "decision" : "YES",
> "explanation" : "below shard recovery limit of [16]"
> }, {
> "decider" : "enable",
> "decision" : "YES",
> "explanation" : "allocation disabling is ignored"
> }, {
> "decider" : "disable",
> "decision" : "YES",
> "explanation" : "allocation disabling is ignored"
> }, {
> "decider" : "awareness",
> "decision" : "NO",
> "explanation" : "too many shards on nodes for attribute: [dc]" }, {
> "decider" : "shards_limit",
> "decision" : "YES",
> "explanation" : "total shard limit disabled: [-1] <= 0"
> }, {
> "decider" : "node_version",
> "decision" : "YES",
> "explanation" : "target node version [1.4.5] is same or newer than source node version [1.4.5]"
> }, {
> "decider" : "disk_threshold",
> "decision" : "YES",
> "explanation" : "enough disk for shard on node, free: [1.4tb]"
> }, {
> "decider" : "snapshot_in_progress",
> "decision" : "YES", "explanation" : "no snapshots are currently running"
>
Upvotes: 0
Views: 4239
Reputation: 14492
If you have replicas, you can simply switch off your old nodes one by one, waiting each time for the cluster to become green again.
You don't need to explicitly reroute in that case.
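For example, after stopping a node you can block until the cluster turns green again using the health API (the timeout value is just an example):

curl -XGET 'localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'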
That said, your explain output shows the awareness decider returning NO ("too many shards on nodes for attribute: [dc]"), so it sounds like you are using shard allocation awareness
in your elasticsearch.yml
file. You should check those settings.
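For instance, you can list the attributes each node reports; if awareness is configured on a dc attribute, something like the commented lines below is likely present in elasticsearch.yml (the dc value is a guess based on your output):

curl -XGET 'localhost:9200/_nodes?pretty'
# typical elasticsearch.yml lines behind that NO decision (values are illustrative):
# node.dc: dc1
# cluster.routing.allocation.awareness.attributes: dc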
Upvotes: 2
Reputation: 325
You can install the kopf plugin; it helps you manage your Elasticsearch nodes and makes tasks like this one much simpler.
With this plugin, what you want to do is easier.
You can download it here: https://github.com/lmenezes/elasticsearch-kopf .
Other supported plugins are listed at: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-plugins.html .
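If I remember right, it installs as a site plugin with the plugin script that ships with Elasticsearch (a sketch; check the kopf README for the branch matching your Elasticsearch version):

# run from the Elasticsearch home directory; the 1.0 branch targets Elasticsearch 1.x
bin/plugin --install lmenezes/elasticsearch-kopf/1.0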
Upvotes: 1