Reputation: 837
I have an Elasticsearch cluster set up on Kubernetes. Recently Logstash was not able to push any data to the cluster because one of the nodes in the cluster was out of disk space.
This was the error in Logstash:
[Ruby-0-Thread-13@[main]>worker1: /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:383] elasticsearch - retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
The es-master had marked the node as read-only because disk usage had crossed the flood stage watermark:
[WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-65ccf55794-pm4xz] flood stage disk watermark [95%] exceeded on [SaRCGuyyTBOxTjNtvjui-g][es-data-1][/data/data/nodes/0] free: 9.1gb[2%], all indices on this node will be marked read-only
Following this I freed up resources on that node and it now has enough disk space available (almost 50% free). But Logstash is still not able to push data to Elasticsearch and keeps logging the same error as above.
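For reference, the node's disk usage and whether the read-only block is still applied can be checked with something like this (a sketch using the standard _cat/allocation API and a filtered index settings lookup; _all stands in for the affected indices):

GET _cat/allocation?v

GET _all/_settings/index.blocks.read_only_allow_delete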
I have the following questions
Upvotes: 2
Views: 1390
Reputation: 7221
You have to manually reset the read-only block on your indices.
You can see the documentation here, in the cluster.routing.allocation.disk.watermark.flood_stage block:
The index block must be released manually once there is enough disk space available to allow indexing operations to continue.
PUT /<your index name>/_settings
{
  "index.blocks.read_only_allow_delete": null
}
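Since the flood stage watermark marks every index on that node as read-only, you may want to clear the setting for all indices in one call rather than one index at a time. A minimal sketch, assuming you want to clear it cluster-wide:

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}

Once the block is removed, Logstash's retries should start succeeding without a restart, since it keeps retrying the failed bulk actions.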
Upvotes: 5