Sandeep vashisth

Reputation: 1088

Unable to run Hadoop wordcount example?

I am running the Hadoop wordcount example in a single-node environment on Ubuntu 12.04 in VMware. I run the example like this:

    hadoop@master:~/hadoop$ hadoop jar hadoop-examples-1.0.4.jar wordcount \
        /home/hadoop/gutenberg/ /home/hadoop/gutenberg-output

I have the input files at:

    /home/hadoop/gutenberg

and the location for the output is:

    /home/hadoop/gutenberg-output

When I run the wordcount program I get the following errors:

    13/04/18 06:02:10 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:54310/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201304180554_0001
    13/04/18 06:02:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
    org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:887)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
    hadoop@master:~/hadoop$ bin/stop-all.sh
    Warning: $HADOOP_HOME is deprecated.
    stopping jobtracker
    localhost: stopping tasktracker
    stopping namenode
    localhost: stopping datanode
    localhost: stopping secondarynamenode
    hadoop@master:~/hadoop$

Upvotes: 4

Views: 12757

Answers (4)

Thiago Messias

Reputation: 21

If you've created your own .jar and are trying to run it, pay attention:

In order to run your job, you would have written something like this:

hadoop jar <jar-path> <package-path> <input-in-hdfs-path> <output-in-hdfs-path>

But if you take a closer look at your driver code, you'll see that you have set args[0] as your input and args[1] as your output:

FileInputFormat.addInputPath(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

But Hadoop is taking args[0] as <package-path> instead of <input-in-hdfs-path>, and args[1] as <input-in-hdfs-path> instead of <output-in-hdfs-path>.

So, in order to make it work, you should use:

FileInputFormat.addInputPath(conf, new Path(args[1]));
FileOutputFormat.setOutputPath(conf, new Path(args[2]));

With args[1] and args[2], it'll pick up the right paths. :) Hope it helped. Cheers.

Upvotes: 2

Nuray Altin

Reputation: 1324

Check whether there is a 'tmp' folder or not:

hadoop fs -ls /

If you see the output folder or 'tmp', delete both (assuming there are no active running jobs):

hadoop fs -rmr /tmp

Upvotes: 1

highlycaffeinated

Reputation: 19867

Like Dave (and the exceptions) said, your output directory already exists. You either need to output to a different directory or remove the existing one first, using:

hadoop fs -rmr /home/hadoop/gutenberg-output
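Put together with the question's paths and jar, a full remove-and-rerun sequence might look like this (a sketch for Hadoop 1.x, where `-rmr` is still valid; newer releases use `hadoop fs -rm -r` instead):

```shell
# Remove the stale output directory, then resubmit the job.
# Paths and jar name are taken from the question above.
hadoop fs -rmr /home/hadoop/gutenberg-output
hadoop jar hadoop-examples-1.0.4.jar wordcount \
    /home/hadoop/gutenberg/ /home/hadoop/gutenberg-output
```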

Upvotes: 2

Dave Newton

Reputation: 160181

Delete the output file that already exists, or output to a different file.

(I'm a little curious what other interpretations of the error message you considered.)
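One way to take the second option and avoid the collision entirely is to give each run a unique output directory, e.g. with a timestamp suffix (the naming scheme here is just an illustration):

```shell
# Append a timestamp so every run writes to a fresh directory,
# sidestepping the FileAlreadyExistsException.
hadoop jar hadoop-examples-1.0.4.jar wordcount \
    /home/hadoop/gutenberg/ /home/hadoop/gutenberg-output-$(date +%s)
```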

Upvotes: 9
