Sean Hammond

Reputation: 14670

Elasticsearch error: cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)], flood stage disk watermark exceeded

When trying to post documents to Elasticsearch as normal I'm getting this error:

cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)];

I also see this message on the Elasticsearch logs:

flood stage disk watermark [95%] exceeded ... all indices on this node will be marked read-only

Upvotes: 341

Views: 327132

Answers (8)

Dherik

Reputation: 19110

I was having the same issue, but I was not able to call the API using curl or even using Dev Tools.

So I made the changes manually on each index:

  1. Go to Management
  2. Go to Index Management
  3. Enable Include rollup indices and Include system indices
  4. Change each index's read_only_allow_delete setting from true to null using Edit settings (see the sketch below).
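
For reference, after the change the relevant line in the Edit settings JSON should look something like this (a minimal sketch; the other settings in the document will vary per index):

{
  "index.blocks.read_only_allow_delete": null
}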

Upvotes: 0

Ronnie Dutta

Reputation: 19

Even if you free up disk space so that usage drops back below 95%, the issue will still persist.

A short-term solution is to raise the flood stage watermark above 95%. (The steps below were written for the Windows command prompt.)

a. Create a JSON file with the following contents:


{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

b. Name it anything, e.g. json.txt

c. Run the following command in the command prompt:

>curl -X PUT "localhost:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d @json.txt

d. You should receive the following output:

{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : {
              "low" : "90%",
              "flood_stage" : "97%",
              "high" : "95%"
            }
          }
        }
      }
    }
  },
  "transient" : { }
}

e. Create another JSON file with the following content:


{
  "index.blocks.read_only_allow_delete": null
}

f. Name it anything, e.g. json1.txt

g. Run the following command in the command prompt:

>curl -X PUT "localhost:9200/*/_settings?expand_wildcards=all" -H "Content-Type: application/json" -d @json1.txt

h. You should get the following output:

{"acknowledged":true}

i. Restart the ELK stack/Kibana and the issue should be resolved.
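
If you want to verify that the new watermarks took effect before restarting, you can query the cluster settings (a standard Elasticsearch endpoint, assuming the same localhost:9200 instance):

>curl -X GET "localhost:9200/_cluster/settings?pretty"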

Upvotes: 2

Dmytro Nesteriuk

Reputation: 8305

A nice guide from the Elastic team:

https://www.elastic.co/guide/en/elasticsearch/reference/master/disk-usage-exceeded.html

It worked for me with ELK 7.x

Upvotes: 1

Waldron

Reputation: 124

Changing the settings with the following command alone did not work in my environment:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

I also had to run the Force Merge API command:

curl -X POST "localhost:9200/my-index-000001/_forcemerge?pretty"

ref: Force Merge API
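
To check whether the force merge actually reclaimed disk space, the cat allocation API shows per-node disk usage (standard Elasticsearch; my-index-000001 above is just a placeholder index name):

curl -X GET "localhost:9200/_cat/allocation?v"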

Upvotes: 10

Ishaq Khan

Reputation: 959

This error is usually observed when your machine is low on disk space. Follow these steps to resolve it:

  1. Reset the read-only index block on the indices:

    $ curl -X PUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
    
    Response:
    {"acknowledged":true}
    
  2. Update the low watermark to at least 50 gigabytes free, the high watermark to at least 20 gigabytes free, and the flood stage watermark to 10 gigabytes free, and have the cluster information updated every minute:

     Request:
     $ curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "transient": { "cluster.routing.allocation.disk.watermark.low": "50gb", "cluster.routing.allocation.disk.watermark.high": "20gb", "cluster.routing.allocation.disk.watermark.flood_stage": "10gb", "cluster.info.update.interval": "1m"}}'
    
      Response:
      {
        "acknowledged" : true,
        "persistent" : { },
        "transient" : {
          "cluster" : {
            "routing" : {
              "allocation" : {
                "disk" : {
                  "watermark" : {
                    "low" : "50gb",
                    "flood_stage" : "10gb",
                    "high" : "20gb"
                  }
                }
              }
            },
            "info" : {
              "update" : {
                "interval" : "1m"
              }
            }
          }
        }
      }
    

After running these two commands, run the first command again so that the indices do not go back into read-only mode.
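
Note that transient settings do not survive a full cluster restart. If you want these watermarks to persist across restarts, the same request can be sent with "persistent" instead of "transient" (same endpoint, different top-level key):

$ curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.disk.watermark.low": "50gb", "cluster.routing.allocation.disk.watermark.high": "20gb", "cluster.routing.allocation.disk.watermark.flood_stage": "10gb" } }'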

Upvotes: 43

zaibatsu

Reputation: 767

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

From:

https://techoverflow.net/2019/04/17/how-to-fix-elasticsearch-forbidden-12-index-read-only-allow-delete-api/

Upvotes: 65

Payam Khaninejad

Reputation: 7996

By default, Elasticsearch goes into read-only mode when you have less than 5% of free disk space. If you see errors similar to this:

Elasticsearch::Transport::Transport::Errors::Forbidden: [403] {"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}

Or in /usr/local/var/log/elasticsearch.log you can see logs similar to:

flood stage disk watermark [95%] exceeded on [nCxquc7PTxKvs6hLkfonvg][nCxquc7][/usr/local/var/lib/elasticsearch/nodes/0] free: 15.3gb[4.1%], all indices on this node will be marked read-only

Then you can fix it by running the following commands:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
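
Note that the first command disables Elasticsearch's disk-based shard allocation checks entirely. Once you have freed up disk space, it is worth re-enabling them by clearing the transient override (setting it to null restores the default):

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": null } }'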

Upvotes: 249

Sean Hammond

Reputation: 14670

This happens when Elasticsearch thinks the disk is running low on space, so it puts itself into read-only mode.

By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.

The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.

For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.

The right solution depends on the context - for example a production environment vs a development environment.

Solution 1: free up disk space

Freeing up enough disk space so that more than 5% of the disk is free will solve this problem. Elasticsearch won't automatically take itself out of read-only mode once enough disk is free, though; you'll have to do something like this to unlock the indices:

$ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

Solution 2: change the flood stage watermark setting

Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else. It can either be set to a lower percentage or to an absolute value. Here's an example of how to change the setting from the docs:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
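
If you're not using the Kibana Dev Tools console, the same request can be sent with curl (a sketch assuming Elasticsearch is listening on localhost:9200):

$ curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.watermark.low": "100gb", "cluster.routing.allocation.disk.watermark.high": "50gb", "cluster.routing.allocation.disk.watermark.flood_stage": "10gb", "cluster.info.update.interval": "1m" } }'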

Again, after doing this you'll have to use the curl command above to unlock the indices, but after that they should not go into read-only mode again.

Upvotes: 534
