Reputation: 341
I am trying to install Hadoop on Ubuntu 16.04, but when starting Hadoop it gives me the following error:
localhost: ERROR: Cannot set priority of datanode process 32156.
Starting secondary namenodes [it-OptiPlex-3020]
2017-09-18 21:13:48,343 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
Could someone please tell me why I am getting this error? Thanks in advance.
Upvotes: 20
Views: 59207
Reputation: 1
Try this: if you are using WSL and you have properties set in hdfs-site.xml for the datanode and namenode, try deleting them and add this to yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
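For reference, the hdfs-site.xml entries being referred to are typically dfs.namenode.name.dir and dfs.datanode.data.dir; a rough sketch of what they look like (the paths here are only placeholders):
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/YOURUSER/hadoopdata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/YOURUSER/hadoopdata/hdfs/datanode</value>
</property>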
Upvotes: 0
Reputation: 1
I tried some of the methods above, but they didn't work out. Changing to Java 8 worked for me.
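One way to do that (a sketch, assuming OpenJDK 8 is installed at the usual Ubuntu location) is to point JAVA_HOME at the Java 8 install in $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64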
Upvotes: 0
Reputation: 1
Just check the datanode logs; towards the end of the file, the error message tells you exactly where the problem is. In my case the error was due to the datanode path being specified incorrectly in my hdfs-site.xml file, so when I corrected the path in that file, my datanode started.
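For example (the exact log file name depends on your user and hostname, so this is only a sketch):
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log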
Upvotes: 0
Reputation: 1849
The issue with my system was that I was already running something on the ports Hadoop tries to run its services on. For example, port 8040 was in use. I found the culprit by first watching the logs
tail -f /opt/homebrew/Cellar/hadoop/3.3.4/libexec/logs/*
and then stopping that particular service. You can also simply restart your system to check if it helps, unless your startup scripts spin up the services that conflict with Hadoop's ports again.
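To find what is listening on a busy port such as 8040, something like this works (assuming lsof is installed):
sudo lsof -i :8040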
The default ports are controlled by these properties:
- fs.defaultFS in core-site.xml
- dfs.secondary.http.address in hdfs-site.xml
- dfs.datanode.address in hdfs-site.xml
Upvotes: 0
Reputation: 141
PROBLEM: You might get a "cannot set priority" or "cannot start secondary namenode" error; let me share what worked for me.
Diagnosis: I checked whether hdfs namenode -format gave any errors (which it did).
Fixed the errors:
Folders didn't exist: while setting up the configuration in your .xml files (the ones you edit and override), make sure every directory you point to actually exists; create it if it doesn't (see the sketch after these steps).
Didn't have permission to read/write/execute: change the permissions to 777 for all the directories you pointed to in the .xml files, as well as your hadoop folder, using this command
sudo chmod -R 777 /path_to_folders
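A minimal sketch of creating the missing directories (the paths are placeholders; use whatever your .xml files actually point to):
mkdir -p /home/YOURUSER/hadoopdata/hdfs/namenode
mkdir -p /home/YOURUSER/hadoopdata/hdfs/datanode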
Upvotes: 0
Reputation: 8228
This can be caused by many things, usually a mistake in one of the configuration files, so it's best to check the log files.
Upvotes: 2
Reputation: 56
I also encountered this error and found that it came from the core-site.xml file. I changed the file to this form:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
Upvotes: 0
Reputation: 3771
This can occur for various reasons; it's best to check the logs at $HADOOP_HOME/logs.
In my case the /etc/hosts file was misconfigured, i.e. my hostname was not resolving to localhost.
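For reference, a minimal /etc/hosts for a single-node setup might look like this (your-hostname is a placeholder; the point is that the machine's hostname resolves to a local address):
127.0.0.1   localhost your-hostname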
Bottom line: Check your namenode/datanode log files :)
Upvotes: 0
Reputation: 1
For me the other solutions didn't work. It was not related to directory permissions.
There is an entry JSVC_HOME in hadoop-env.sh that needs to be uncommented.
Download and make jsvc from here: http://commons.apache.org/proper/commons-daemon/jsvc.html
Alternatively, the jsvc jar is also present in the hadoop dir.
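A sketch of the line to uncomment and set in hadoop-env.sh (the path is a placeholder for the directory that contains your jsvc binary):
export JSVC_HOME=/usr/local/bin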
Upvotes: 0
Reputation: 1840
Problem solved here! (Both of the two highly ranked answers didn't work for me.)
This issue happens because you are running Hadoop (namenode user, datanode user, ...) as a user that is not the owner of all your Hadoop files/folders.
Just do a sudo chown YOURUSER:YOURUSER -R /home/YOURUSER/hadoop/*
Upvotes: 4
Reputation: 371
The solution in my situation was to add export HADOOP_SHELL_EXECNAME=root as the last line of $HADOOP_HOME/etc/hadoop/hadoop-env.sh; otherwise the default value of the environment variable is hdfs.
Upvotes: 1
Reputation: 382
I have encountered the same issue as well.
My problem was that the datanode folder permissions were not granted, so I changed the rights with sudo chmod 777 ./datanode/
My advice is to check all the relevant paths/folders and make them 777 first (they can be changed back afterwards).
There might be some other reasons which lead to the datanode failing to start. Common checks are:
- permissions on the configured folders (sudo chmod ...)
- whether the datanode host is reachable (ssh datanode1 to check)
If everything has been checked and something still does not work, log in to the datanode server, go to the $HADOOP_HOME/logs folder, and check the log information to debug.
Upvotes: 0
Reputation: 13500
I had to deal with the same issue and kept getting the following exception:
Starting namenodes on [localhost]
Starting datanodes
localhost: ERROR: Cannot set priority of datanode process 8944
Starting secondary namenodes [MBPRO-0100.local]
2019-07-22 09:56:53,020 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
As others have mentioned, you first need to make sure that all path parameters are set correctly, which is what I checked first. Then I followed these steps to solve the issue:
1- Stop dfs service and format hdfs:
sbin/stop-dfs.sh
sudo bin/hdfs namenode -format
2- Change permissions for the hadoop temp directory:
sudo chmod -R 777 /usr/local/Cellar/hadoop/hdfs/tmp
3- Start service again:
sbin/start-dfs.sh
Good luck
Upvotes: 8
Reputation: 7144
I suggest you take a look at your hadoop datanode logs.
This is probably a configuration issue.
In my case, the folders configured in dfs.datanode.data.dir didn't exist, so an exception was thrown and written to the log.
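For reference, a minimal sketch of that property in hdfs-site.xml (the path is a placeholder; point it at a directory that actually exists and is writable):
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/YOURUSER/hadoop_store/hdfs/datanode</value>
</property>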
Upvotes: 6
Reputation: 81
Faced the same issue; flushing the datanode and namenode folders fixed it for me.
I had put the folders in /hadoop_store/hdfs/namenode and /hadoop_store/hdfs/datanode.
After deleting the folders, recreate them and then run hdfs namenode -format.
Start Hadoop (a sketch of the sequence is below):
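A minimal sketch, assuming the same folder locations and that $HADOOP_HOME/sbin is on the PATH (adjust the paths to whatever your hdfs-site.xml points to; note this wipes any existing HDFS data):
rm -rf /hadoop_store/hdfs/namenode /hadoop_store/hdfs/datanode
mkdir -p /hadoop_store/hdfs/namenode /hadoop_store/hdfs/datanode
hdfs namenode -format
start-dfs.sh
start-yarn.sh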
After the fix the logs look good:
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [ip]
2019-02-11 09:41:30,426 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
jps:
21857 NodeManager
21697 ResourceManager
21026 NameNode
22326 Jps
21207 DataNode
21435 SecondaryNameNode
Upvotes: 3
Reputation: 49
I ran into the same error when installing Hadoop 3.0.0-RC0. In my situation, all services started successfully except the Datanode.
I found that some configs in hadoop-env.sh weren't correct for version 3.0.0-RC0, but were correct in version 2.x.
I ended up replacing my hadoop-env.sh with the official one and setting JAVA_HOME and HADOOP_HOME. Now the Datanode is working fine.
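A minimal sketch of what I mean in hadoop-env.sh (both paths are placeholders for your actual Java and Hadoop install locations):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop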
Upvotes: 3