Jinni Raja

Reputation: 11

Mapreduce job is not running

After installing and configuring Hadoop 2.7.1 in pseudo-distributed mode, everything appears to be running, as you can see in the jps output:

~$ jps
4825 Jps
4345 NameNode
4788 JobHistoryServer
4496 ResourceManager

Then I ran the MapReduce example:

  hadoop jar /usr/local/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 10

And the execution freezes:

Number of Maps  = 2
Samples per Map = 10
15/07/14 08:40:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/07/14 08:40:13 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.0.4:8032
15/07/14 08:40:15 INFO input.FileInputFormat: Total input paths to process : 2
15/07/14 08:40:15 INFO mapreduce.JobSubmitter: number of splits:2
15/07/14 08:40:16 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1436860512406_0002
15/07/14 08:40:17 INFO impl.YarnClientImpl: Submitted application application_1436860512406_0002
15/07/14 08:40:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1436860512406_0002/
15/07/14 08:40:17 INFO mapreduce.Job: Running job: job_1436860512406_0002

After 2 hours it still shows the same output.

Please give me any idea.

Thanks

Upvotes: 1

Views: 4795

Answers (2)

rbyndoor

Reputation: 729

Set `yarn.nodemanager.resource.memory-mb` and `yarn.nodemanager.resource.cpu-vcores` in your YARN configuration to higher values, and it should be resolved.
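As a sketch, these properties go in `yarn-site.xml`; the values below are illustrative assumptions and should be sized to the node's actual RAM and cores:

```xml
<!-- yarn-site.xml: illustrative values for a small pseudo-distributed node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- total RAM the NodeManager may hand out to containers -->
  <value>4096</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <!-- total virtual cores available for containers -->
  <value>4</value>
</property>
```

If these values are lower than what a single container requests, YARN can accept the job but never schedule a container for it, which looks exactly like a hang at "Running job".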

You can read more in Cloudera's documentation on YARN resource tuning.

Upvotes: 0

Nakul91

Reputation: 1245

Looking at the output when you execute the jps command:

~$ jps
4825 Jps
4345 NameNode
4788 JobHistoryServer
4496 ResourceManager

it is not showing your DataNode, which means your DataNode is down. You need to format it and start again.

I was having the same issue on my server. The steps I followed are:

  1. stop-all.sh
  2. hadoop namenode -format
  3. hadoop datanode -format
  4. Go to the actual directories where your HDFS NameNode and DataNode data are located and remove all the files using sudo rm -rf *
  5. Remove the files from the tmp directory, e.g. app/hadoop/tmp/
  6. Start Hadoop using start-all.sh
  7. Check whether everything is running using jps

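The steps above can be sketched as a shell session. Note the directory paths are assumptions for illustration; check `dfs.namenode.name.dir`, `dfs.datanode.data.dir`, and `hadoop.tmp.dir` in your own `hdfs-site.xml` / `core-site.xml` before deleting anything, since formatting and clearing these directories erases all data in HDFS:

```shell
# Illustrative recovery sequence for a broken pseudo-distributed HDFS.
stop-all.sh                                       # stop all HDFS and YARN daemons
sudo rm -rf /app/hadoop/tmp/*                     # clear hadoop.tmp.dir (assumed path)
sudo rm -rf /usr/local/hadoop_store/hdfs/namenode/*   # NameNode dir (assumed path)
sudo rm -rf /usr/local/hadoop_store/hdfs/datanode/*   # DataNode dir (assumed path)
hdfs namenode -format                             # re-format the NameNode metadata
start-all.sh                                      # start the daemons again
jps                                               # verify NameNode, DataNode,
                                                  # ResourceManager, NodeManager are up
```

Clearing the DataNode directory matters because after a NameNode format the cluster gets a new clusterID, and a DataNode holding the old ID will refuse to register, which is a common reason it disappears from `jps`.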
Upvotes: 2
