Reputation: 594
I have the following setup in elasticsearch
[root elasticsearch]$ curl localhost:9200/_cluster/health?pretty
{
"cluster_name" : "iresbi",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 10,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 0.0
}
I have 3 nodes, each acting as both a data node and a master node. Currently, searches in the cluster are failing with the following exception:
[2017-04-24T01:36:44,134][DEBUG][o.e.a.s.TransportSearchAction] [node-1] All shards failed for phase: [query]
org.elasticsearch.action.NoShardAvailableActionException: null
Caused by: org.elasticsearch.action.NoShardAvailableActionException
When I did a cat on the shards I got the following output:
[root elasticsearch]$ curl localhost:9200/_cat/shards?pretty
customer 4 p UNASSIGNED
customer 4 r UNASSIGNED
customer 2 p UNASSIGNED
customer 2 r UNASSIGNED
customer 3 p UNASSIGNED
customer 3 r UNASSIGNED
customer 1 p UNASSIGNED
customer 1 r UNASSIGNED
customer 0 p UNASSIGNED
customer 0 r UNASSIGNED
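To see *why* a shard is unassigned rather than just that it is, you can ask the cluster allocation explain API (available since Elasticsearch 5.0; adjust host/port and the Content-Type header to match your version). A minimal sketch, assuming the `customer` index from the output above:

```shell
# Ask Elasticsearch why shard 0 of the "customer" index is not assigned.
# The "explanation" fields in the response will say whether allocation is
# blocked by disk watermarks, missing nodes, allocation settings, etc.
curl -s -XGET 'localhost:9200/_cluster/allocation/explain?pretty' \
  -H 'Content-Type: application/json' -d '{
  "index": "customer",
  "shard": 0,
  "primary": true
}'
```

This requires a running cluster, so the exact output will vary with your setup.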
The following is the disk space usage:
[root elasticsearch]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_root-root 8125880 1587988 6102080 21% /
devtmpfs 3994324 0 3994324 0% /dev
tmpfs 4005212 4 4005208 1% /dev/shm
tmpfs 4005212 8624 3996588 1% /run
tmpfs 4005212 0 4005212 0% /sys/fs/cgroup
/dev/vda3 999320 1320 945572 1% /crashdump
/dev/vda1 245679 100027 132545 44% /boot
/dev/mapper/vg_root-var 6061632 5727072 3604 100% /var
/dev/mapper/vg_root-tmp 1998672 6356 1871076 1% /tmp
/dev/mapper/vg_root-var_log 1998672 55800 1821632 3% /var/log
/dev/mapper/vg_root-apps 25671908 292068 24052736 2% /apps
/dev/mapper/vg_root-home 1998672 169996 1707436 10% /home
/dev/mapper/vg_root-var_log_audit 1998672 8168 1869264 1% /var/log/audit
/dev/vdb 257898948 61464 244713900 1% /data
tmpfs 801044 0 801044 0% /run/user/1000
I need to get these shards assigned back. I can add one more node to the cluster; will that solve the issue? How can I get this resolved?
Upvotes: 1
Views: 1697
Reputation: 2681
Based on some info gathered by others: if you haven't adapted your /etc/elasticsearch/elasticsearch.yml, Elasticsearch stores its data under /var/lib/elasticsearch/ by default. So your /var partition being 100% full is the likely cause of your problem.
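You can confirm this from Elasticsearch's own point of view: when a node's disk usage crosses the high watermark (90% by default), the cluster stops allocating shards to it. A quick check, assuming the cluster is reachable on localhost:9200:

```shell
# Show per-node disk usage and shard counts as Elasticsearch sees them.
# A disk.percent at or above the high watermark (default 90%) on a node
# explains why shards stay UNASSIGNED instead of being allocated there.
curl -s 'localhost:9200/_cat/allocation?v'
```

If every data node reports its data path on the full /var filesystem, that matches the df output in the question.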
The proper resolution depends on the amount of data in your shards, whether you have replicas, and whether the /data mount point is the one you intended Elasticsearch to use.
In all cases, the proper fix is to migrate the index data to a filesystem with sufficient free space.
Another person already asked about the migration approach and got a reply here.
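As a rough sketch of that migration, done node by node — the paths and the systemd unit name are assumptions based on a default package install, so verify them against your own setup first:

```shell
# Hypothetical migration sketch: move the data directory from the full
# /var filesystem to the nearly empty /data mount. Run on one node at a time.

# 1. Stop Elasticsearch on this node.
sudo systemctl stop elasticsearch

# 2. Copy the existing data to the larger mount, preserving ownership.
sudo rsync -a /var/lib/elasticsearch/ /data/elasticsearch/
sudo chown -R elasticsearch:elasticsearch /data/elasticsearch

# 3. Point path.data at the new location in elasticsearch.yml:
#      path.data: /data/elasticsearch

# 4. Restart the node and watch the shards recover.
sudo systemctl start elasticsearch
curl -s 'localhost:9200/_cluster/health?pretty'
```

Once all nodes are moved and disk usage is below the watermarks, the unassigned shards should be allocated again; if some were marked as failed too many times, a `POST /_cluster/reroute?retry_failed=true` (available since 5.0) tells the cluster to retry them.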
Upvotes: 1