Uselesssss

Reputation: 2133

JobTracker in hadoop not running

I installed and configured my Hadoop single-node cluster following

http://wiki.apache.org/hadoop/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29

Now when I open

NameNode - http://localhost:50070 (for my NameNode), it is running fine, but

JobTracker - http://localhost:50030 is not working.

What could be the cause?

Thanks

Upvotes: 5

Views: 23505

Answers (8)

Yousef Irman

Reputation: 109

Start it with:

 $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
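
If the daemon comes up, a quick sanity check (a sketch, assuming the default JobHistory web UI port 19888):

    jps                               # should now list a JobHistoryServer process
    curl -s http://localhost:19888/   # JobHistory server web UI, default port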

Upvotes: 0

Code wrangler

Reputation: 134

Might be a bit late to reply, but I hope it will be useful for other readers.

In Hadoop 2.0, the JobTracker and TaskTracker no longer exist and have been replaced by three components:

ResourceManager: a scheduler that allocates available resources in the cluster amongst the competing applications.

NodeManager: runs on each node in the cluster and takes direction from the ResourceManager. It is responsible for managing resources available on a single node.

ApplicationMaster: an instance of a framework-specific library, an ApplicationMaster runs a specific YARN job and is responsible for negotiating resources from the ResourceManager and also working with the NodeManager to execute and monitor Containers.

So as long as you see the ResourceManager (on the NameNode) and NodeManager (on the DataNodes) processes, you are good to go.
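
On a healthy single-node Hadoop 2.x setup, jps should show both YARN daemons alongside the HDFS ones; a sketch (the process IDs are illustrative):

    $ jps
    2401 NameNode
    2532 DataNode
    2703 SecondaryNameNode
    2890 ResourceManager
    3011 NodeManager
    3150 Jps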

Upvotes: 1

Mit Mehta

Reputation: 789

In newer versions of Hadoop you can monitor jobs being executed at

localhost:8088

where you will find the YARN ResourceManager web UI.

Link : https://stackoverflow.com/a/24105597/1971660
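
The same information is also available from the command line through the ResourceManager REST API (a sketch, assuming the default port 8088):

    # list applications known to the ResourceManager
    curl -s http://localhost:8088/ws/v1/cluster/apps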

Upvotes: 1

Jorge Cadavid

Reputation: 1

Please try this command to force the NameNode out of safe mode:

    hadoop dfsadmin -safemode leave
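
Before forcing it, you can check whether safe mode is actually the problem with the matching subcommand:

    hadoop dfsadmin -safemode get    # prints "Safe mode is ON" or "Safe mode is OFF"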

Upvotes: 0

pablo pidal

Reputation: 349

  • hd0@HappyUbuntu:/usr/local/hadoop$ bin/hadoop jobtracker
  • You will probably see an error about credentials. Type:
  • sudo chown -R hd0 /usr/local/hadoop
  • Now type "jps" and check that JobTracker is running.
  • Later, you may need to type "bin/hadoop dfsadmin -safemode leave" if you get "org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode". The full sequence is sketched below.
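
Putting those steps together (a minimal sketch, assuming Hadoop is installed in /usr/local/hadoop and the daemons run as user hd0):

    cd /usr/local/hadoop
    sudo chown -R hd0 /usr/local/hadoop     # fix the credentials error
    bin/hadoop jobtracker                   # starts the JobTracker in the foreground
    # in a second terminal:
    jps                                     # confirm a JobTracker process appears
    bin/hadoop dfsadmin -safemode leave     # only if SafeModeException shows up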

Upvotes: 2

Hisham Muneer

Reputation: 8742

Format your NameNode using the following command:

$ <path_to_hadoop.x.xx>/bin/hadoop namenode -format

This should solve your problem.
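
Be aware that formatting re-initializes the HDFS metadata, so any existing data in HDFS is lost. A sketch of the usual sequence on a single-node setup (paths assumed relative to the Hadoop install directory):

    bin/stop-all.sh                  # stop all daemons first
    bin/hadoop namenode -format      # re-initialize the NameNode
    bin/start-all.sh                 # restart; jps should now list JobTracker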

Upvotes: 1

kiru

Reputation: 29

Well, what distribution/version of Hadoop are you using? It has been a long time since hadoop-site.xml was used; with Hadoop 1.0.x the configuration is split into core-site.xml and mapred-site.xml. Basically, I think start-all.sh is not starting your JobTracker at all because it is not configured properly. A minimal example is sketched below.
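
For Hadoop 1.0.x the JobTracker address goes in mapred-site.xml. A minimal single-node sketch (port 54311 matches the tutorial linked in the question; treat it as an assumption):

    $ cat conf/mapred-site.xml
    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
      </property>
    </configuration>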

Upvotes: 0

Happy3

Reputation: 309

After you run $HADOOP_HOME/bin/start-all.sh, you can type the command "jps" to check whether all the necessary Hadoop processes have started. If everything is OK, it should look like this:

hd0@HappyUbuntu:/usr/local/hadoop$ jps
18694 NameNode
19576 TaskTracker
19309 JobTracker
19225 SecondaryNameNode
19629 Jps
18972 DataNode

It is possible that your JobTracker process is not running, so check that first. If that is the case, look into the log files in the logs directory for a more specific reason.
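
The JobTracker writes its own log under $HADOOP_HOME/logs with a predictable name, so a quick look at the tail is usually enough (the username and hostname parts of the filename vary per machine):

    $ tail -n 50 $HADOOP_HOME/logs/hadoop-*-jobtracker-*.log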

Upvotes: 2
