Alex Gordon

Reputation: 60912

Running an elementary MapReduce job with Java on Hadoop

I am just getting started with Linux/Java/Hadoop/EMR.

I am following this neat book.

The assignment is to run:

bin/hadoop jar hadoop-cookbook-chapter1.jar chapter1.WordCount input output

And this is the response that I get:

alex@HadoopMachine:/usr/share/hadoop$ sudo hadoop jar hadoop-cookbook-chapter1.jar chapter1.WordCount input output
13/05/01 01:01:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/05/01 01:01:08 INFO input.FileInputFormat: Total input paths to process : 1
13/05/01 01:01:08 WARN snappy.LoadSnappy: Snappy native library not loaded
13/05/01 01:01:09 INFO mapred.JobClient: Running job: job_local_0001
13/05/01 01:01:09 INFO util.ProcessTree: setsid exited with exit code 0
13/05/01 01:01:09 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1c04d881
13/05/01 01:01:09 INFO mapred.MapTask: io.sort.mb = 100
13/05/01 01:01:09 WARN mapred.LocalJobRunner: job_local_0001
java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:949)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:674)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
13/05/01 01:01:10 INFO mapred.JobClient:  map 0% reduce 0%
13/05/01 01:01:10 INFO mapred.JobClient: Job complete: job_local_0001
13/05/01 01:01:10 INFO mapred.JobClient: Counters: 0

Frankly, since I have almost no Java background, I do not even know where to start debugging.

I would be most grateful for any guidance on how to tackle this issue.

Update

After following greedybuddha's advice, I am getting:

alex@HadoopMachine:/usr/share/hadoop$ sudo hadoop jar hadoop-cookbook-chapter1.jar chapter1.WordCount -Dmapred.child.java.opts=-Xmx1G input output
[sudo] password for alex: 
13/05/01 11:03:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/05/01 11:03:54 INFO input.FileInputFormat: Total input paths to process : 1
13/05/01 11:03:54 WARN snappy.LoadSnappy: Snappy native library not loaded
13/05/01 11:03:54 INFO mapred.JobClient: Running job: job_local_0001
13/05/01 11:03:54 INFO util.ProcessTree: setsid exited with exit code 0
13/05/01 11:03:54 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@35756b65
13/05/01 11:03:54 INFO mapred.MapTask: io.sort.mb = 100
13/05/01 11:03:54 WARN mapred.LocalJobRunner: job_local_0001
java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:949)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:674)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
13/05/01 11:03:55 INFO mapred.JobClient:  map 0% reduce 0%
13/05/01 11:03:55 INFO mapred.JobClient: Job complete: job_local_0001
13/05/01 11:03:55 INFO mapred.JobClient: Counters: 0

Upvotes: 0

Views: 1351

Answers (1)

greedybuddha

Reputation: 7507

The JVM runs each program with a fixed maximum heap size. When a program tries to use more memory than that limit, it throws the java.lang.OutOfMemoryError you are seeing. The solution is to tell Java to allocate a larger heap for the program; in this case you should be able to pass the setting through Hadoop. Try the following.

bin/hadoop jar hadoop-cookbook-chapter1.jar chapter1.WordCount -Dmapred.child.java.opts=-Xmx1G input output

The option -Xmx1G allows the heap to grow up to 1 gigabyte.
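Note that a `-D` option on the command line is only picked up if the driver parses generic options (via `ToolRunner`/`GenericOptionsParser`); if the cookbook's WordCount does not, the same limit can instead be set in the job configuration file. A minimal sketch, assuming a standard Hadoop 1.x `conf/` directory (`mapred.child.java.opts` is the Hadoop 1.x property name):

```xml
<!-- conf/mapred-site.xml: give each map/reduce child JVM up to 1 GB of heap -->
<configuration>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1G</value>
  </property>
</configuration>
```

Also, your log shows the local job runner (`job_local_0001`), which runs the map task inside the client JVM itself, so the child-task setting may not apply; in that case raising the client heap, e.g. via `HADOOP_HEAPSIZE` in `conf/hadoop-env.sh`, may be what is actually needed.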

This other Stack Overflow question is very similar: out of Memory Error in Hadoop

Upvotes: 2
