CentAu

Reputation: 11190

Failed to obtain node lock, is the following location writable

Elasticsearch won't start using ./bin/elasticsearch. It raises the following exception:

- ElasticsearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/home/user1/elasticsearch-1.4.4/data/elasticsearch]]

I checked the permissions on the same location and the location has 777 permissions on it and is owned by user1.

ls -al /home/user1/elasticsearch-1.4.4/data/elasticsearch
drwxrwxrwx  3 user1 wheel 4096 Mar  8 13:24 .
drwxrwxrwx  3 user1 wheel 4096 Mar  8 13:00 ..
drwxrwxrwx 52 user1 wheel 4096 Mar  8 13:51 nodes

What is the problem?

I'm trying to run Elasticsearch 1.4.4 on Linux without root access.

Upvotes: 94

Views: 133290

Answers (23)

ABHISHEK ACHARYA

Reputation: 1

I resolved this issue by fixing the permissions on the data directories, like this:

# Grant ownership to UID 1000 (default for Elasticsearch container)
sudo chown -R 1000:1000 ./esdata01 ./esdata02 ./esdata03

# Ensure read, write, and execute permissions
sudo chmod -R 775 ./esdata01 ./esdata02 ./esdata03

Upvotes: -2

asdf

Reputation: 56

In my case, the data folder used by Elasticsearch was mounted from the host system into the container, and a separate volume had also been created with the "docker volume create" command. The two conflicted, causing the new container to crash with the "Failed to obtain node lock" error.

After removing the standalone volume, I was able to start the Elasticsearch container successfully.
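For reference, a sketch of that cleanup, assuming the conflicting volume is named esdata (a placeholder; check docker volume ls for yours). The commands are guarded so they are a no-op on machines without Docker:

```shell
# Placeholder name -- replace with the volume shown by `docker volume ls`.
VOLUME=esdata

if command -v docker >/dev/null 2>&1; then
  docker volume ls                    # spot the conflicting named volume
  docker volume rm "$VOLUME" || true  # remove it (stop the container first)
fi
```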

Upvotes: 0

rll

Reputation: 5587

If you came here and you're running Elastic on Docker/Rancher, the problem could be that you are running Docker with privileged permissions.

In that case the permissions on the local data folder don't match (because you are running with sudo), which for some reason makes it impossible to acquire the lock, even with the most permissive settings on the folder.

Just build and run the container without the privileged user and it should be fine.

Upvotes: 0

Frol

Reputation: 161

In my case (Docker on Windows) I solved it by temporarily changing the mapped volume source path:

  1. Change the mapped path for the docker container, e.g. from

    volumes:
      - ./artifacts/elasticsearch-data:/usr/share/elasticsearch/data

to

    volumes:
      - ./artifacts/elasticsearch-data1:/usr/share/elasticsearch/data

  2. Recreate the containers with docker compose down && docker compose up --wait -d (or any other way).
  3. Delete the original source path ./artifacts/elasticsearch-data, just in case.
  4. Change the path back to the original:

    volumes:
      - ./artifacts/elasticsearch-data:/usr/share/elasticsearch/data

  5. Recreate the containers one more time.

Other ways didn't work for me. I tried to delete folder, prune volumes, delete all containers, restarting docker, restarting PC, etc.
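For context, the relevant slice of a docker-compose.yml during step 1 might look like this (the service name and image tag here are assumptions, not from the answer):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.1
    volumes:
      # temporarily point at a fresh host directory (note the "1" suffix)
      - ./artifacts/elasticsearch-data1:/usr/share/elasticsearch/data
```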

Upvotes: 2

Bitler

Reputation: 63

mkdir -p ./elasticsearch/docs
mkdir -p ./elasticsearch/logs
chmod -R 777 ./elasticsearch

where elasticsearch is your Docker volume directory on the host machine.

See https://github.com/elastic/elasticsearch/issues/96601#issuecomment-1580384744

This was the solution for me.

Upvotes: 1

Mohan

Reputation: 51

Mostly this error occurs when you kill the process abruptly: the node.lock file may not be cleared. You can manually remove the node.lock file and start the process again; it should work.
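A minimal sketch of that cleanup, using a scratch directory to stand in for the real data path (in practice, point DATA_DIR at your install instead, e.g. the data/elasticsearch folder from the question):

```shell
# Scratch directory standing in for the Elasticsearch data path.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/nodes/0"
touch "$DATA_DIR/nodes/0/node.lock"   # simulate a lock left by a killed process

# Locate any stale lock files...
find "$DATA_DIR" -name node.lock

# ...and, once you are sure no Elasticsearch process is still running, remove them.
find "$DATA_DIR" -name node.lock -delete
```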

Upvotes: 3

Jay

Reputation: 143

If you are on Windows, try this:

  1. Kill any Java processes.
  2. If the start batch script is interrupted, press Ctrl+C to properly stop the Elasticsearch service rather than just closing the terminal.

Upvotes: 1

Jonathan Rys

Reputation: 1886

If anyone is seeing this error caused by:

Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/docker/es]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

The solution is to set max_local_storage_nodes in your elasticsearch.yml

node.max_local_storage_nodes: 2

The docs say to set this to a number greater than one on your development machine:

By default, Elasticsearch is configured to prevent more than one node from sharing the same data path. To allow for more than one node (e.g., on your development machine), use the setting node.max_local_storage_nodes and set this to a positive integer larger than one.

I think that Elasticsearch needs to have a second node available so that a new instance can start. This happens to me whenever I try to restart Elasticsearch inside my Docker container. If I relaunch my container then Elasticsearch will start properly the first time without this setting.

Upvotes: 1

Talis Pähn

Reputation: 91

In my case /var/lib/elasticsearch was the directory with missing permissions (CentOS 8):

error: java.io.IOException: failed to obtain lock on /var/lib/elasticsearch/nodes/0

To fix it, use:

chown -R elasticsearch:elasticsearch /var/lib/elasticsearch

Upvotes: 4

devops-admin

Reputation: 1993

Check these options:

sudo chown 1000:1000 <directory you wish to mount>

# With Docker (the container runs as UID 1000 by default)
sudo chown 1000:1000 /data/elasticsearch/

# With a VM
sudo chown elasticsearch:elasticsearch /data/elasticsearch/

Upvotes: 7

Dinesh Pokalwar

Reputation: 31

chown -R elasticsearch:elasticsearch /var/lib/elasticsearch

The error directly says it cannot obtain a lock, so you need to grant Elasticsearch permission on the data directory.

Upvotes: 3

kuiro5

Reputation: 1601

I had an orphaned Java process related to Elasticsearch. Killing it solved the lock issue.

ps aux | grep 'java'
kill -9 <PID>

Upvotes: 122

Scott Buchanan

Reputation: 1233

As with many others here replying, this was caused by wrong permissions on the directory (not owned by the elasticsearch user). In our case it was caused by uninstalling Elasticsearch and reinstalling it (via yum, using the official repositories).

As of this moment, the repos do not delete the nodes directory when they are uninstalled, but they do delete the elasticsearch user/group that owns it. So then when Elasticsearch is reinstalled, a new, different elasticsearch user/group is created, leaving the old nodes directory still present, but owned by the old UID/GID. This then conflicts and causes the error.

A recursive chown as mentioned by @oleksii is the solution.

Upvotes: 9

Faulander

Reputation: 337

For me the error was a simple one: I created a new data directory /mnt/elkdata and changed the ownership to the elastic user. I then copied the files over and forgot to change the ownership again afterwards.

After doing that and restarting the elastic node, it worked.

Upvotes: 0

Tobias Gassmann

Reputation: 11829

After I upgraded the Elasticsearch docker image from version 5.6.x to 6.3.y, the container would not start anymore because of the aforementioned error:

Failed to obtain node lock

In my case the root cause was missing file permissions.

The data-folder used by elasticsearch was mounted from the host-system into the container (declared in the docker-compose.yml):

    volumes:
      - /var/docker_folders/common/experimental-upgrade:/usr/share/elasticsearch/data

This folder could not be accessed by Elasticsearch anymore, for reasons I did not understand at all. After I set very permissive file permissions on this folder and all its sub-folders, the container started again.

I do not want to reproduce the command that sets those very permissive access rights on the mounted docker folder, because it is most likely bad practice and a security issue. I just wanted to share that the cause might not be a second Elasticsearch process running, but simply missing access rights on the mounted folder.

Maybe someone could elaborate on the appropriate rights to set for a mounted folder in a docker container?

Upvotes: 14

ninjaas

Reputation: 337

To add to the above answers, there are other scenarios in which you can get this error. I had done an update from 5.5 to 6.3 for Elasticsearch, using a docker-compose setup with named volumes for the data directories. I had to do a docker volume prune to remove the stale ones; after doing that, I no longer faced the issue.

Upvotes: 1

Gokul

Reputation: 41

I had another Elasticsearch instance running on the same machine.

Command to check (9200 is the Elastic port):

netstat -nlp | grep 9200

Result:

tcp 0 0 :::9210 :::* LISTEN 27462/java

Kill the process (27462 is the PID of the Elasticsearch instance):

kill -9 27462

Start Elasticsearch and it should run now.

Upvotes: 4

Qin Kai

Reputation: 53

Try the following:

  1. Find what is using port 9200 with lsof -i:9200; this shows which processes use the port.
  2. Kill the PID(s): repeat kill -9 <pid> for each PID that the output of lsof showed in step 1.
  3. Restart Elasticsearch: elasticsearch
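The steps above can be sketched as follows (this assumes lsof is installed; -t prints bare PIDs, and the fallback keeps the list empty when nothing is listening):

```shell
PORT=9200
# List the PIDs listening on the port; empty when nothing is.
PIDS=$(lsof -t -i :"$PORT" 2>/dev/null || true)
for pid in $PIDS; do
  kill -9 "$pid"   # forcibly stop the stale Elasticsearch instance
done
```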

Upvotes: 5

Iman Mirzadeh

Reputation: 13600

The reason is that another instance is running!
First find the PID of the running Elasticsearch:

ps aux | grep 'elastic'

Then kill it using kill -9 <PID_OF_RUNNING_ELASTIC>.
Some answers suggest removing the node.lock file, but that doesn't help, since the running instance will just recreate it!

Upvotes: 30

Walker Rowe

Reputation: 983

You already have ES running. To prove that type:

curl 'localhost:9200/_cat/indices?v'

If you want to run another instance on the same box you can set node.max_local_storage_nodes in elasticsearch.yml to a value larger than 1.

Upvotes: 6

Darren Hicks

Reputation: 5086

I got this same error message, but things were mounted fine and the permissions were all correctly assigned.

Turns out that I had an 'orphaned' elasticsearch process that was not being killed by the normal stop command.

I had to manually kill the process and then restarting elasticsearch worked again.

Upvotes: 31

Tom Robinson

Reputation: 8528

In my case, this error was caused by not mounting the devices used for the configured data directories using "sudo mount".

Upvotes: 3

oleksii

Reputation: 35925

In my situation I had wrong permissions on the ES dir folder. Setting correct owner solved it.

# change owner
chown -R elasticsearch:elasticsearch /data/elasticsearch/

# to validate
ls /data/elasticsearch/ -la
# prints    
# drwxr-xr-x 2 elasticsearch elasticsearch 4096 Apr 30 14:54 CLUSTER_NAME

Upvotes: 20
