Searene

Reputation: 27564

Hadoop: namenode and datanode processes exit a few seconds after starting

Using CentOS 5.4

Three virtual machines (using VMware Workstation): master, slave1, and slave2. master is used for the namenode, and slave1 and slave2 are used for the datanodes.

The Hadoop version is hadoop-0.20.1.tar.gz. I have configured all the relevant files and stopped the firewall as root with the command /sbin/service iptables stop. Then I formatted the namenode and started Hadoop on the master (namenode) virtual machine with the following commands; no error was reported.

bin/hadoop namenode -format
bin/start-all.sh

Right after that I ran the command "jps" on the master machine and saw the expected result:

5144 JobTracker
4953 NameNode
5079 SecondaryNameNode
5216 Jps

But after a few seconds, when I ran "jps" again, every virtual machine showed only one process: Jps. The following is the result displayed on the namenode (master):

5236 Jps

What is wrong? Or how can I find out what caused the problem? Does it mean that it cannot find any namenode or datanode? Thank you.
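
One place to look is the daemon log files that each node writes when it starts. A minimal sketch, assuming Hadoop is installed under /usr/local/hadoop and uses the default logs directory:

# Assumed paths: Hadoop installed in /usr/local/hadoop, default logs directory.
cd /usr/local/hadoop/logs

# On the master, the namenode log usually ends with the fatal error that made it exit.
tail -n 50 hadoop-*-namenode-master.log

# On slave1 and slave2, check the datanode log the same way.
tail -n 50 hadoop-*-datanode-slave1.log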


Attachment: all the files I have modified:

hadoop-env.sh:

# set java environment
export JAVA_HOME=/usr/jdk1.6.0_13/

core-site.xml:

<configuration>

<property>
        <name>master.node</name>
        <value>namenode_master</value>
        <description>master</description>
</property>

<property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
        <description>local dir</description>
</property>

<property>
        <name>fs.default.name</name>
        <value>hdfs://${master.node}:9000</value>
        <description> </description>
</property>

</configuration>

hdfs-site.xml:

<configuration>
<property>
        <name>dfs.replication</name>
        <value>2</value>
</property>

<property>
        <name>dfs.name.dir</name>
        <value>${hadoop.tmp.dir}/hdfs/name</value>
        <description>local dir</description>
</property>

<property>
        <name>dfs.data.dir</name>
        <value>${hadoop.tmp.dir}/hdfs/data</value>
        <description> </description>
</property>

</configuration>

mapred-site.xml:

<configuration>

<property>
        <name>mapred.job.tracker</name>
        <value>${master.node}:9001</value>
        <description> </description>
</property>

<property>
        <name>mapred.local.dir</name>
        <value>${hadoop.tmp.dir}/mapred/local</value>
        <description> </description>
</property>

<property>
        <name>mapred.system.dir</name>
        <value>/tmp/mapred/system</value>
        <description>hdfs dir</description>
</property>

</configuration>

master:

master

slaves:

slave1 
slave2  

/etc/hosts:

192.168.190.133 master
192.168.190.134 slave1
192.168.190.135 slave2
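
Since fs.default.name and mapred.job.tracker refer to a hostname, that name must resolve to the master's address on every node. A quick check, assuming the /etc/hosts above is present on all three machines:

# Run on master, slave1 and slave2; the hostname used in core-site.xml
# should resolve to the master's address (192.168.190.133 here).
getent hosts master
ping -c 1 master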

Upvotes: 1

Views: 491

Answers (1)

Searene

Reputation: 27564

From the log files, I found that I should change namenode_master to master in core-site.xml, since the hostname defined in /etc/hosts is master, not namenode_master. Now it works.
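
For reference, the corrected property in core-site.xml looks like this; the value now matches the hostname defined in /etc/hosts:

<property>
        <name>master.node</name>
        <value>master</value>
        <description>master</description>
</property>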

Upvotes: 2
