Reputation: 23
I set up my configuration files and formatted my file system already, but whenever I run the start shell scripts (via the hstart alias defined further down) I get the errors below:
computer:~ seanplowman$ hstart
18/04/14 23:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans
localhost: Error: Could not find or load main class Mac.log
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-datanode-Seans
localhost: Error: Could not find or load main class Mac.log
Starting secondary namenodes [0.0.0.0]
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-secondarynamenode-Seans
0.0.0.0: Error: Could not find or load main class Mac.log
18/04/14 23:35:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
/usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-resourcemanager-Seans
Error: Could not find or load main class Mac.log
localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-nodemanager-Seans
localhost: Error: Could not find or load main class Mac.log
jps also shows that none of the daemons are up after running the start scripts (a working single-node setup would list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager). From what I have researched, it seems like something may be wrong with my hostname, but trying to change it hasn't fixed anything.
For context, I will provide my other config files to show how they are set up.
core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>The name of the default file system. A URI whose scheme and
    authority determine the FileSystem implementation. The uri's scheme determines
    the config property (fs.SCHEME.impl) naming the FileSystem implementation
    class. The uri's authority is used to determine the host, port, etc. for a filesystem.
    </description>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
I made a few changes to my hadoop-env.sh as well; those are below, followed by the Hadoop variables and aliases set in my shell profile.
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
and
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
#Hadoop variables
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
###end of paste
alias hstart="/usr/local/hadoop/sbin/start-dfs.sh;/usr/local/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/hadoop/sbin/stop-yarn.sh;/usr/local/hadoop/sbin/stop-dfs.sh"
I'm not sure what the next step from here is, having looked at pretty much every file involved.
Upvotes: 2
Views: 192
Reputation: 191953
I think you have spaces in your Mac's hostname, e.g. Seans Mac.
The default log files are named using that hostname:
HDFS: log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
YARN: log=$YARN_LOG_DIR/yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.out
where $HOSTNAME is the issue, since spaces there are unexpected.
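To illustrate, here is a minimal sketch of how the space triggers the "integer expression expected" errors; it assumes, as in the stock scripts, that $log is passed unquoted to the log-rotation helper (the helper below is a simplified stand-in, not the real function):

#!/usr/bin/env bash
# Simplified stand-in for the daemon scripts' log rotation: an unquoted
# $log containing a space word-splits into two arguments, so the second
# half of the filename is mistaken for the numeric rotation count.
rotate_log () {
    log=$1              # receives only "...-Seans" after word splitting
    num=5
    if [ -n "$2" ]; then
        num=$2          # receives "Mac.out", later compared as an integer
    fi
    while [ $num -gt 1 ]; do   # bash: [: Mac.out: integer expression expected
        num=$(($num - 1))
    done
}

log="/usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans Mac.out"
rotate_log $log   # unquoted on purpose, mirroring the daemon script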
If you look at the output, you'll notice hadoop-seanplowman-namenode-Seans, so I suspect:
HADOOP_IDENT_STRING = the user running the scripts = seanplowman
command = the daemon being started, e.g. namenode
HOSTNAME = Seans Mac
See if changing the hostname to one without spaces fixes anything.
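On macOS, one way to set a space-free hostname is with scutil; seans-mac below is just an example name, not a requirement:

# Requires admin rights; pick any name without spaces.
sudo scutil --set HostName seans-mac
sudo scutil --set LocalHostName seans-mac
sudo scutil --set ComputerName seans-mac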
If not, edit the yarn-daemon.sh and hadoop-daemon.sh scripts to start with:
#!/usr/bin/env bash
set -xv
so that every line is echoed as it executes.
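Alternatively, you can get the same trace for a single daemon without editing the scripts:

# -x prints each command as it runs, -v echoes each line as read.
bash -xv /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode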
Then edit the question with the resulting output.
Upvotes: 2