haki

Reputation: 41

The Hadoop DataNodes run successfully, but the live node count is 0 on the master:8088 web UI

Recently, while configuring Hadoop, I found that the DataNode process starts normally (verified with jps), but the number of live nodes displayed at master:8088 is 0.

Following are the configuration files on the master node and data node:

/etc/hosts

192.168.127.130   Master
192.168.127.129   Slave
192.168.127.131   Slave1

core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://Master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
                <description>A base for other temporary directories.</description>
        </property>
</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>Master:50090</value>
        </property>
        <property>
                <name>dfs.namenode.http.address</name>
                <value>Master:50070</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/usr/local/hadoop/tmp/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/local/hadoop/tmp/dfs/data</value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>  
        </property>
        <property>
                <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
                <value>false</value>
        </property>
</configuration>

mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>Master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>Master:19888</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.env</name>
                <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
        </property>
        <property>
                <name>mapreduce.map.env</name>
                <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
        </property>
        <property>
                <name>mapreduce.reduce.env</name>
                <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
        </property> 
</configuration>

yarn-site.xml

<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>Master</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>2048</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.cpu-vcores</name>
                <value>1</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>

</configuration>

Running jps on the NameNode gives the following:

Jps
NameNode
SecondaryNameNode
ResourceManager

and jps on the DataNode:

DataNode
Jps
NodeManager

This seems right to me, but when I look at master:8088, no live nodes are shown. Why am I getting this error?

By the way, I have already checked the logs on all nodes and no errors are shown. Each node can ping the others.

I have also tried the following (some standard status checks are listed after this list):

1. Stopping and restarting Hadoop: it did not work.

2. Stopping Hadoop and deleting all the files in /usr/local/hadoop/tmp.

3. Formatting the NameNode with hdfs namenode -format: still does not work.
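
For reference, these are the standard commands that should show whether the DataNodes have actually registered with the NameNode and whether the NodeManagers have registered with the ResourceManager (master:8088 is the YARN ResourceManager UI, so its live-node count refers to NodeManagers, not DataNodes):

# HDFS side: live/dead DataNodes as reported by the NameNode
hdfs dfsadmin -report

# YARN side: NodeManagers and their states as seen by the ResourceManager
# (these are the nodes counted on master:8088)
yarn node -list -all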

Upvotes: 0

Views: 233

Answers (1)

haki

Reputation: 41

I found out the problem. My data node was unhealthy because of "local-dirs usable space is below configured utilization percentage / no more usable space". After allocating more disk space, the problem was solved.
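
If I understand YARN's disk health checker correctly, the NodeManager marks a node unhealthy once a local dir's disk usage goes above a configured threshold (the yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage property, roughly 90% by default), and unhealthy nodes drop out of the live count on master:8088. A minimal sketch of the checks I would use to confirm this, assuming the /usr/local/hadoop/tmp layout from the question:

# Disk usage of the filesystem holding the NodeManager local dirs;
# usage above the ~90% default threshold makes the node unhealthy
df -h /usr/local/hadoop/tmp

# Nodes the ResourceManager currently marks UNHEALTHY
yarn node -list -states UNHEALTHY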

Upvotes: 1
