Reputation: 11190
Elasticsearch won't start using ./bin/elasticsearch.
It raises the following exception:
- ElasticsearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/home/user1/elasticsearch-1.4.4/data/elasticsearch]
I checked the permissions on the same location and the location has 777 permissions on it and is owned by user1.
ls -al /home/user1/elasticsearch-1.4.4/data/elasticsearch
drwxrwxrwx  3 user1 wheel 4096 Mar 8 13:24 .
drwxrwxrwx  3 user1 wheel 4096 Mar 8 13:00 ..
drwxrwxrwx 52 user1 wheel 4096 Mar 8 13:51 nodes
What is the problem?
Trying to run elasticsearch 1.4.4 on linux without root access.
Upvotes: 94
Views: 133290
Reputation: 1
I resolved this issue by granting ownership and permissions on the data directories, like this:
# Grant ownership to UID 1000 (default for Elasticsearch container)
sudo chown -R 1000:1000 ./esdata01 ./esdata02 ./esdata03
# Ensure read, write, and execute permissions
sudo chmod -R 775 ./esdata01 ./esdata02 ./esdata03
Upvotes: -2
Reputation: 56
In my case, the data folder used by Elasticsearch was mounted from the host system into the container, but there was also a separate volume created with the "docker volume create" command. The two were somehow conflicting, which caused the new container to crash with the Failed to obtain node lock error.
After removing the standalone volume, I was able to start the Elasticsearch container successfully.
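For reference, a rough sketch of how such a leftover standalone volume can be found and removed (the container and volume names below are only examples, not from my setup):
# See which volumes exist and spot the conflicting one
docker volume ls
# Stop/remove the container that failed to start, then drop the standalone volume
docker rm -f elasticsearch
docker volume rm esdata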
Upvotes: 0
Reputation: 5587
If you came here and you're running Elasticsearch on Docker/Rancher, the problem could be that you are running Docker with privileged permissions.
In that case the permissions on the local folder holding the data do not match (because you are running with sudo). For some reason this makes it impossible to acquire the lock, even with the most permissive settings on the folder.
Just build and run the container without the privileged user and it should be fine.
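As a hedged illustration of what "without privileged permissions" can mean in practice (the image name and bind-mount path here are hypothetical):
# Run as your normal user: no sudo, no --privileged flag,
# so the bind-mounted data folder keeps your own UID/GID
docker build -t my-es .
docker run -d -p 9200:9200 \
  -v "$(pwd)/esdata:/usr/share/elasticsearch/data" \
  my-es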
Upvotes: 0
Reputation: 161
In my case (docker on windows) I solved it by temporarily changing mapped volume source path:
volumes:
- ./artifacts/elasticsearch-data:/usr/share/elasticsearch/data
to
volumes:
- ./artifacts/elasticsearch-data1:/usr/share/elasticsearch/data
docker compose down && docker compose up --wait -d
or any other way to recreate the containers. Afterwards you can change the path back to ./artifacts/elasticsearch-data, just in case:
volumes:
- ./artifacts/elasticsearch-data:/usr/share/elasticsearch/data
Other approaches didn't work for me. I tried deleting the folder, pruning volumes, deleting all containers, restarting Docker, restarting the PC, etc.
Upvotes: 2
Reputation: 63
mkdir -p ./elasticsearch/docs
mkdir -p ./elasticsearch/logs
chmod -R 777 ./elasticsearch
where elasticsearch is your Docker volume directory on the host machine.
Check https://github.com/elastic/elasticsearch/issues/96601#issuecomment-1580384744
This was the solution for me.
Upvotes: 1
Reputation: 51
This error mostly occurs when you kill the process abruptly. When you kill the process, the node.lock file may not be cleared. You can manually remove the node.lock file and start the process again; it should work.
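A minimal sketch, assuming an Elasticsearch 1.x layout like the one in the question (the exact location of node.lock depends on your version and data directory):
# Make sure no Elasticsearch process is still holding the lock
ps aux | grep elasticsearch
# Locate and remove the stale lock file (path is an example)
find /home/user1/elasticsearch-1.4.4/data -name node.lock
rm /home/user1/elasticsearch-1.4.4/data/elasticsearch/nodes/0/node.lock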
Upvotes: 3
Reputation: 143
If you are on Windows then try this: press
Ctrl+C
to properly stop the Elasticsearch service before you exit the terminal.
Upvotes: 1
Reputation: 1886
If anyone is seeing this being caused by:
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/docker/es]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
The solution is to set max_local_storage_nodes
in your elasticsearch.yml
node.max_local_storage_nodes: 2
The docs say to set this to a number greater than one on your development machine:
By default, Elasticsearch is configured to prevent more than one node from sharing the same data path. To allow for more than one node (e.g., on your development machine), use the setting node.max_local_storage_nodes and set this to a positive integer larger than one.
I think that Elasticsearch needs to have a second node available so that a new instance can start. This happens to me whenever I try to restart Elasticsearch inside my Docker container. If I relaunch my container then Elasticsearch will start properly the first time without this setting.
Upvotes: 1
Reputation: 91
In my case the /var/lib/elasticsearch
was the dir with missing permissions (CentOS 8):
error: java.io.IOException: failed to obtain lock on /var/lib/elasticsearch/nodes/0
To fix it, use:
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
Upvotes: 4
Reputation: 1993
Check these options:
sudo chown 1000:1000 <directory you wish to mount>
# With docker
sudo chown 1000:1000 /data/elasticsearch/
OR
# With VM
sudo chown elasticsearch:elasticsearch /data/elasticsearch/
Upvotes: 7
Reputation: 31
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
The error directly shows that it doesn't have permission to obtain a lock, so you need to grant permissions.
Upvotes: 3
Reputation: 1601
I had an orphaned Java process related to Elasticsearch. Killing it solved the lock issue.
ps aux | grep 'java'
kill -9 <PID>
Upvotes: 122
Reputation: 1233
As with many others replying here, this was caused by wrong permissions on the directory (not owned by the elasticsearch user). In our case it was caused by uninstalling Elasticsearch and reinstalling it (via yum, using the official repositories).
As of this moment, the repos do not delete the nodes directory when they are uninstalled, but they do delete the elasticsearch user/group that owns it. So when Elasticsearch is reinstalled, a new, different elasticsearch user/group is created, leaving the old nodes directory still present but owned by the old UID/GID. This then conflicts and causes the error.
A recursive chown as mentioned by @oleksii is the solution.
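A hedged sketch of how to spot and fix the stale ownership, assuming the default package data path /var/lib/elasticsearch:
# Numeric listing exposes the old UID/GID left behind by the previous install
ls -ln /var/lib/elasticsearch
# Hand the data back to the newly created elasticsearch user/group
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch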
Upvotes: 9
Reputation: 337
For me the error was a simple one: I created a new data directory /mnt/elkdata and changed the ownership to the elastic user. I then copied the files and forgot to change the ownership again afterwards.
After fixing the ownership and restarting the Elasticsearch node, it worked.
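In other words, something along these lines (the source path is hypothetical; /mnt/elkdata and the elastic user come from this answer):
# Copy the existing data into the new directory, then re-own it
cp -a /var/lib/elasticsearch/. /mnt/elkdata/
chown -R elastic:elastic /mnt/elkdata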
Upvotes: 0
Reputation: 11829
After I upgraded the Elasticsearch Docker image from version 5.6.x to 6.3.y, the container would not start anymore because of the aforementioned error:
Failed to obtain node lock
In my case the root cause of the error was missing file permissions.
The data folder used by Elasticsearch was mounted from the host system into the container (declared in docker-compose.yml):
volumes:
- /var/docker_folders/common/experimental-upgrade:/usr/share/elasticsearch/data
This folder could no longer be accessed by Elasticsearch, for reasons I did not understand at all. After I set very permissive file permissions on this folder and all sub-folders, the container started again.
I do not want to reproduce the command that sets those very permissive access rights on the mounted Docker folder, because it is most likely a very bad practice and a security issue. I just wanted to share the fact that it might not be a second Elasticsearch process running, but actually just missing access rights to the mounted folder.
Maybe someone could elaborate on the appropriate rights to set for a mounted folder in a Docker container?
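One less permissive option, suggested by other answers in this thread, is to match the ownership of the mounted folder to the UID the official image runs as (1000 by default) instead of opening it up completely. This is only a sketch, not a verified fix for this exact setup:
# Give the bind-mounted data folder to the container's elasticsearch UID/GID (assumed to be 1000:1000)
sudo chown -R 1000:1000 /var/docker_folders/common/experimental-upgrade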
Upvotes: 14
Reputation: 337
To add to the above answers, there could be other scenarios in which you get this error. In my case I had done an update from 5.5 to 6.3 of Elasticsearch. I have been using a docker compose setup with named volumes for the data directories. I had to do a docker volume prune to remove the stale ones. After doing that I was no longer facing the issue.
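Roughly, with the stack stopped first (prune removes unused volumes, and exactly which ones depends on your Docker version, so review the list before confirming):
# Stop the compose stack so its named volumes are no longer in use
docker-compose down
# Review existing volumes, then remove the unused/stale ones
docker volume ls
docker volume prune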
Upvotes: 1
Reputation: 41
I had another Elasticsearch instance running on the same machine.
Command to check (9200 is the Elasticsearch port):
netstat -nlp | grep 9200
Result:
tcp 0 0 :::9210 :::* LISTEN 27462/java
Kill the process (27462 is the PID of the Elasticsearch instance):
kill -9 27462
Start Elasticsearch and it may run now.
Upvotes: 4
Reputation: 53
Try the following:
1. Find what is listening on port 9200, e.g.: lsof -i:9200
This will show you which processes are using port 9200.
2. Kill the PID(s), e.g. repeat kill -9 <pid> for each PID that the output of lsof showed in step 1.
3. Restart Elasticsearch, e.g. elasticsearch
Upvotes: 5
Reputation: 13600
The reason is that another instance is already running!
First, find the PID of the running Elasticsearch process:
ps aux | grep 'elastic'
Then kill it using kill -9 <PID_OF_RUNNING_ELASTIC>.
Some answers suggest removing the node.lock file, but that didn't help, since the running instance will create it again!
Upvotes: 30
Reputation: 983
You already have ES running. To prove that, type:
curl 'localhost:9200/_cat/indices?v'
If you want to run another instance on the same box you can set node.max_local_storage_nodes in elasticsearch.yml to a value larger than 1.
Upvotes: 6
Reputation: 5086
I got this same error message, but things were mounted fine and the permissions were all correctly assigned.
Turns out that I had an 'orphaned' elasticsearch process that was not being killed by the normal stop command.
I had to kill the process manually, and then restarting Elasticsearch worked again.
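A minimal sketch of that cleanup (the PID shown is just a placeholder):
# Find the leftover Elasticsearch process that the stop command missed
ps aux | grep -i elasticsearch
# Kill it by PID, then start Elasticsearch again
kill -9 12345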
Upvotes: 31
Reputation: 8528
In my case, this error was caused by forgetting to mount (with "sudo mount") the devices used for the configured data directories.
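For illustration only (the device and mount point below are hypothetical, not from the original setup):
# Mount the data device before starting Elasticsearch
sudo mount /dev/sdb1 /var/lib/elasticsearch
# Verify the mount is in place
mount | grep elasticsearch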
Upvotes: 3
Reputation: 35925
In my situation I had wrong permissions on the ES data folder. Setting the correct owner solved it.
# change owner
chown -R elasticsearch:elasticsearch /data/elasticsearch/
# to validate
ls /data/elasticsearch/ -la
# prints
# drwxr-xr-x 2 elasticsearch elasticsearch 4096 Apr 30 14:54 CLUSTER_NAME
Upvotes: 20