dorinand

Reputation: 1727

Elasticsearch - Replicas are unassigned after reopening an index (INDEX_REOPENED error)

I closed my index and reopened it, and now my replica shards cannot be assigned.

curl -s -XGET localhost:9201/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED
2018.03.27-team-logs 2 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 5 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 3 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 4 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 1 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 0 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 2 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 5 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 3 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 4 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 1 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 0 r UNASSIGNED INDEX_REOPENED

Could anybody explain to me what this error means and how to solve it? Before I closed the index, everything worked fine. I configured 6 shards and 1 replica, and I'm running Elasticsearch 6.2.

EDIT:

Output of curl -XGET "localhost:9201/_cat/shards":

2018.03.29-team-logs 1 r STARTED    1739969 206.2mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 1 p STARTED    1739969   173mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 2 p STARTED    1739414 169.9mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 2 r STARTED    1739414 176.3mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 4 p STARTED    1740185   186mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.29-team-logs 4 r STARTED    1740185 169.4mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 5 r STARTED    1739660 164.3mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 5 p STARTED    1739660 180.1mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 3 p STARTED    1740606 171.2mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 3 r STARTED    1740606 173.4mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.29-team-logs 0 r STARTED    1740166 169.7mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 0 p STARTED    1740166   187mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.28-team-logs 1 p STARTED    2075020 194.2mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 1 r UNASSIGNED                               
2018.03.28-team-logs 2 p STARTED    2076268 194.9mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.28-team-logs 2 r UNASSIGNED                               
2018.03.28-team-logs 4 p STARTED    2073906 194.9mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.28-team-logs 4 r UNASSIGNED                               
2018.03.28-team-logs 5 p STARTED    2072921   195mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 5 r UNASSIGNED                               
2018.03.28-team-logs 3 p STARTED    2074579 194.1mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.28-team-logs 3 r UNASSIGNED                               
2018.03.28-team-logs 0 p STARTED    2073349 193.9mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 0 r UNASSIGNED                               
2018.03.27-team-logs 1 p STARTED     356769  33.5mb 10.207.46.246 elk-es-data-warm-1.platform.osdc2.mall.local
2018.03.27-team-logs 1 r UNASSIGNED                               
2018.03.27-team-logs 2 p STARTED     356798  33.6mb 10.206.46.244 elk-es-data-warm-2.platform.osdc1.mall.local
2018.03.27-team-logs 2 r UNASSIGNED                               
2018.03.27-team-logs 4 p STARTED     356747  33.7mb 10.207.46.246 elk-es-data-warm-1.platform.osdc2.mall.local
2018.03.27-team-logs 4 r UNASSIGNED                               
2018.03.27-team-logs 5 p STARTED     357399  33.8mb 10.207.46.245 elk-es-data-warm-2.platform.osdc2.mall.local
2018.03.27-team-logs 5 r UNASSIGNED                               
2018.03.27-team-logs 3 p STARTED     357957  33.7mb 10.206.46.245 elk-es-data-warm-1.platform.osdc1.mall.local
2018.03.27-team-logs 3 r UNASSIGNED                               
2018.03.27-team-logs 0 p STARTED     356357  33.4mb 10.207.46.245 elk-es-data-warm-2.platform.osdc2.mall.local
2018.03.27-team-logs 0 r UNASSIGNED                               
.kibana                  0 p STARTED          2  12.3kb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
.kibana                  0 r UNASSIGNED

Output of curl -XGET "localhost:9201/_cat/nodes":

10.207.46.248  8 82 0 0.07 0.08 0.11 d - elk-es-data-hot-2
10.206.46.245  9 64 0 0.04 0.11 0.08 d - elk-es-data-warm-1
10.207.46.249 11 90 0 0.00 0.01 0.05 m * elk-es-master-2
10.207.46.245  9 64 0 0.00 0.01 0.05 d - elk-es-data-warm-2
10.206.46.247 12 82 0 0.00 0.06 0.08 d - elk-es-data-hot-1
10.206.46.244 10 64 0 0.08 0.04 0.05 d - elk-es-data-warm-2
10.207.46.243  5 86 0 0.00 0.01 0.05 d - elk-kibana
10.206.46.248 10 92 1 0.04 0.18 0.24 m - elk-es-master-1
10.206.46.246  6 82 0 0.02 0.07 0.09 d - elk-es-data-hot-2
10.207.46.247  9 82 0 0.06 0.06 0.05 d - elk-es-data-hot-1
10.206.46.241  6 91 0 0.00 0.02 0.05 m - master-test
10.206.46.242  8 89 0 0.00 0.02 0.05 d - es-kibana
10.207.46.246  8 64 0 0.00 0.02 0.05 d - elk-es-data-warm-1

Upvotes: 3

Views: 3043

Answers (1)

pkhlop

Reputation: 1844

It is expected behaviour.

Elasticsearch will not put a primary and its replica shard on the same node. You need at least 2 nodes to have 1 replica.

You can simply set the number of replicas to 0:

PUT */_settings
{
    "index" : {
        "number_of_replicas" : 0
    }
}
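
If you prefer curl against the endpoint used in the question, an equivalent request might look like this (a sketch, assuming the cluster still listens on localhost:9201; the * pattern targets all indices, so narrow it to a specific index such as 2018.03.28-team-logs if you only want to touch the affected ones):

curl -XPUT "localhost:9201/*/_settings" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "number_of_replicas" : 0
    }
}'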

UPDATE:

After running the following request

GET /_cluster/allocation/explain?pretty

we can see the response here

https://pastebin.com/1ag1Z7jL

"explanation" : "there are too many copies of the shard allocated to nodes with attribute [datacenter], there are [2] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"

Probably you have an allocation awareness (zone) setting in use. Elasticsearch will avoid putting a primary and its replica shard in the same zone: https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html

With ordinary awareness, if one zone lost contact with the other zone, Elasticsearch would assign all of the missing replica shards to a single zone. But in this example, this sudden extra load would cause the hardware in the remaining zone to be overloaded.

Forced awareness solves this problem by NEVER allowing copies of the same shard to be allocated to the same zone.

For example, let's say we have an awareness attribute called zone, and we know we are going to have two zones, zone1 and zone2. Here is how we can force awareness on a node:

cluster.routing.allocation.awareness.force.zone.values: zone1,zone2
cluster.routing.allocation.awareness.attributes: zone
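
To verify which awareness attribute is in play (the explain output above mentions [datacenter]), you can list the node attributes and grep the effective cluster settings; a sketch, again assuming the localhost:9201 endpoint from the question:

curl -XGET "localhost:9201/_cat/nodeattrs?v"
curl -s -XGET "localhost:9201/_cluster/settings?include_defaults=true&flat_settings=true&pretty" | grep awareness

The output should show the [datacenter] value assigned to each data node and the awareness settings currently in effect, which is what the allocation explanation above is complaining about.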

Upvotes: 3
