Cod3br3aker

Reputation: 21

start-all.sh: command not found. How do I fix this?

I tried installing Hadoop using this tutorial: link (I timestamped the video to where the problem occurs).

However, after formatting the namenode (hdfs namenode -format), I don't get the "name" folder in /abc. Also, start-all.sh and the other /sbin commands don't work.

P.S. I did try installing Hadoop as a single node, which didn't work, so I removed it and redid everything as a two-node setup, which is why I had to reformat the namenode. I don't know if that affected this somehow.

EDIT 1: I fixed the start-all.sh command not being found; there was a mistake in my .bashrc that I corrected (the corrected lines are shown after the log below). However, I now get these error messages when running start-all.sh or start-dfs.sh:

    hadoop@linux-virtual-machine:~$ start-dfs.sh
    Starting namenodes on [localhost]
    localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
    localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
    localhost: starting namenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out
    localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
    localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out' for reading: No such file or directory
    localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
    localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
    localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
    localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
    localhost: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out
    localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
    localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out' for reading: No such file or directory
    localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
    localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
    Starting secondary namenodes [0.0.0.0]
    The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
    ECDSA key fingerprint is SHA256:a37ThJJRRW+AlDso9xrOCBHzsFCY0/OgYet7WczVbb0.
    Are you sure you want to continue connecting (yes/no)? no
    0.0.0.0: Host key verification failed.
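For reference, the Hadoop lines in my .bashrc now look roughly like this (my install is under /usr/local/hadoop-2.10.0, as in the log above):

    # Hadoop environment variables in ~/.bashrc
    export HADOOP_HOME=/usr/local/hadoop-2.10.0
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin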

EDIT 2: Fixed the above error by changing the permissions on the Hadoop folders (in my case both hadoop-2.10.0 and hadoop). start-all.sh now works perfectly, but the namenode doesn't show up.
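The permission change I made was along these lines (hadoop:hadoop being my user and group):

    # give the hadoop user ownership of both install directories
    sudo chown -R hadoop:hadoop /usr/local/hadoop-2.10.0
    sudo chown -R hadoop:hadoop /usr/local/hadoop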

Upvotes: 2

Views: 3068

Answers (1)

OneCricketeer

Reputation: 191728

It's not clear how you set up your PATH variable, or in what way the scripts are not "working". Did you chmod +x them to make them executable? Is there any log output from them at all?
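If they aren't executable, something like this would fix that (the path comes from your log):

    # mark the Hadoop control scripts as executable
    chmod +x /usr/local/hadoop-2.10.0/sbin/*.sh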

The start-all.sh script is in the sbin directory of wherever you extracted Hadoop, so just /path/to/sbin/start-all.sh is all you really need.
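For example, with the install path from your log, you can call it directly with no PATH changes:

    # call the script by its absolute path instead of relying on PATH
    /usr/local/hadoop-2.10.0/sbin/start-all.sh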

Yes, the namenode needs to be formatted on a fresh cluster. The official Apache guide is the most up-to-date source and works fine for most people.
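On a brand-new cluster, formatting is a one-time command (note that it wipes any existing HDFS metadata):

    # one-time initialization of a fresh namenode; destroys existing HDFS metadata
    hdfs namenode -format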

Otherwise, I would suggest you look into Apache Ambari, which can automate the installation. Or just use a sandbox provided by Cloudera, or one of the many Docker containers that already exist for Hadoop, if you don't care about fully "installing" it.
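As a rough sketch of the Docker route (the image name and tag here are illustrative; check Docker Hub for a current Hadoop image):

    # illustrative only: verify the image name and tag on Docker Hub first
    docker run -it apache/hadoop:3 bash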

Upvotes: 1
