Rochdi Dadci

Reputation: 21

Hadoop error when starting Hadoop

Hi, I can't resolve my problem when running Hadoop with start-all.sh:

rochdi@127:~$ start-all.sh

/usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-namenode-127.0.0.1

localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-datanode-127.0.0.1

localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-secondarynamenode-127.0.0.1

/usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-jobtracker-127.0.0.1

localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-tasktracker-127.0.0.1

localhost: Error: Could not find or load main class localhost

My PATH:

rochdi@127:~$ echo "$PATH"
/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/hadoop/bin:/usr/local/hadoop/lib

Before the error appeared, I changed my hosts file as:

127.0.0.1 localhost
127.0.1.1 ubuntu.local ubuntu

and I configured my .bashrc file as:

export HADOOP_PREFIX=/usr/local/hadoop

export PATH=$PATH:$HADOOP_PREFIX/bin

export JAVA_HOME=/usr/lib/jvm/java-7-oracle

and the jps command shows:

    rochdi@127:~$ jps
    3427 Jps

Help me, please.

Upvotes: 2

Views: 1874

Answers (6)

Rushikesh Shinde

Reputation: 1

Check the masters and slaves files as well as .bashrc. Ensure you enter the name of the DataNode (DN) in the slaves file, and try this in the core-site.xml file:

    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/tmp</value>
    </property>
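
For a single-node setup, a minimal sketch of those two files (paths assume the Hadoop 1.x layout used in the question; on a real cluster, replace localhost with the actual hostnames):

    # /usr/local/hadoop/conf/masters -- host that runs the SecondaryNameNode
    localhost

    # /usr/local/hadoop/conf/slaves  -- one DataNode/TaskTracker host per line
    localhost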

Upvotes: 0

rsantiago

Reputation: 2099

The OP posted that the main error was fixed by changing the hostname (answered Dec 6 '13 at 14:19). This suggests issues with the /etc/hosts file and the 'slaves' file on the master. Remember that each hostname in the cluster must match the values in those files. When an XML file is wrongly configured, it normally throws up connectivity issues between the ports of the services.

From the message "no ${SERVICE} to stop", most probably the previous start-all.sh left the Java processes orphaned. The solution is to stop each process manually, e.g.

$ kill -9 4605

and then run the start-all.sh command again.
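
A minimal sketch of that cleanup, reusing the PIDs from the jps output the OP posted:

    # list the orphaned Hadoop daemons and their PIDs
    $ jps
    4605 NameNode
    5084 SecondaryNameNode

    # kill each one by PID, then start fresh
    $ kill -9 4605 5084
    $ start-all.sh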

It's important to mention that this is an old question; Hadoop is now on versions 2 and 3, and I strongly recommend using one of the latest versions.

Upvotes: 0

Sagar Chawla

Reputation: 59

Use the server's IP address instead of localhost in core-site.xml, and check your entries in the /etc/hosts and slaves files.
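
For example (192.168.1.10 is a placeholder; substitute your server's real IP address):

    <property>
      <name>fs.default.name</name>
      <value>hdfs://192.168.1.10:9000</value>
    </property>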

Upvotes: 0

fyarci

Reputation: 549

Check your hosts file and *-site.xml files for host names. This error occurs when the hostnames are not defined properly.
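
A quick way to spot a mismatch, sketched against the Hadoop 1.x conf layout from the question:

    hostname                                          # the machine's actual hostname
    cat /etc/hosts                                    # must contain a mapping for that hostname
    grep "hdfs://" /usr/local/hadoop/conf/*-site.xml  # must reference the same name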

Upvotes: 0

Krunal Makwana

Reputation: 37

After extracting the Hadoop tar file, open the ~/.bashrc file and add the following at the end of the file:

export HADOOP_HOME=/usr/local/hadoop 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin 
export HADOOP_INSTALL=$HADOOP_HOME 

then,

edit the file $HADOOP_HOME/etc/hadoop/core-site.xml, add the following config, and then start Hadoop:

<configuration>

   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>

</configuration>
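
To verify, a minimal start sequence after the configuration above (this assumes the Hadoop 2.x layout implied by the sbin path added to .bashrc; note that formatting wipes any existing HDFS data):

    hdfs namenode -format   # one-time format of the NameNode
    start-dfs.sh            # start NameNode, DataNode, SecondaryNameNode
    jps                     # verify the daemons are running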

If you still have the problem, use this link: click here

Upvotes: 0

Rochdi Dadci

Reputation: 21

I resolved the problem; I just changed my hostname and all the nodes start, but when I stop them I get this message:

rochdi@acer:~$ jps
4605 NameNode
5084 SecondaryNameNode
5171 JobTracker
5460 Jps
5410 TaskTracker
rochdi@acer:~$ stop-all.sh 
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode
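
Those "no ... to stop" lines usually mean the stop script couldn't find a matching PID file (and, for the DataNode, the jps output above shows it never started at all). As a hedged cleanup, stop each remaining daemon individually or kill it by the PID that jps still lists:

    hadoop-daemon.sh stop tasktracker   # stop the daemon the script missed
    # or, as a last resort, kill it by the PID shown in jps
    kill -9 5410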

Upvotes: 0
