Reputation: 95
I ran the WordCount example using MapReduce for the first time, and it worked. Then I stopped the cluster, started it again after a while, and followed the same procedure.
It showed this error:
10P:/$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/test/tester /user/output
15/08/05 00:16:04 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/08/05 00:16:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:54310/user/output already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:562)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Upvotes: 6
Views: 22430
Reputation: 31
I think you need to use hadoop fs to check whether your filesystem already has the directory:
hadoop fs -ls /user/output
# if the directory exists, remove it
hadoop fs -rm -r /user/output
If you don't use an absolute path, it is resolved relative to your HDFS home directory.
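For example, assuming the default HDFS home directory /user/&lt;username&gt;, the two forms behave like this:
hadoop fs -rm -r output        # relative: resolves to /user/&lt;username&gt;/output
hadoop fs -rm -r /user/output  # absolute: targets exactly /user/output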
Upvotes: 0
Reputation: 404
The error says the output directory hdfs://localhost:54310/user/output already exists. Delete the output directory before running the job, i.e. execute the following command:
hadoop fs -rm -r /user/output
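With the jar from the question, the full sequence is:
hadoop fs -rm -r /user/output
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/test/tester /user/output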
Upvotes: 17
Reputation: 799
Simply write the driver code like this:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class TestDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration cf = getConf();
        Job j = Job.getInstance(cf);
        j.setJarByClass(TestDriver.class);
        // No setMapperClass calls here: MultipleInputs below registers
        // a mapper (and input format) per input path.
        j.setMapOutputKeyClass(CustKey.class);
        j.setMapOutputValueClass(Text.class);
        j.setReducerClass(JoinReducer.class);
        j.setOutputKeyClass(CustKey.class);
        j.setOutputValueClass(Text.class);
        Path op = new Path(args[2]);
        MultipleInputs.addInputPath(j, new Path(args[0]), CustInputFormat.class, CustMapper.class);
        MultipleInputs.addInputPath(j, new Path(args[1]), ShopIpFormat.class, TxnMapper.class);
        j.setOutputFormatClass(CustTxOutFormat.class);
        FileOutputFormat.setOutputPath(j, op);
        // FOCUS ON THE LINE BELOW:
        // it deletes the output folder (if it exists) before the job is submitted
        op.getFileSystem(cf).delete(op, true);
        return j.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] argv) throws Exception {
        int res = ToolRunner.run(new Configuration(), new TestDriver(), argv);
        System.exit(res);
    }
}
Hope this clears your doubt.
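To run it, package the job and pass the two input paths and the output path as arguments (the jar name here is illustrative):
hadoop jar testdriver.jar TestDriver /user/cust /user/txn /user/output
Note that this deletes any previous output unconditionally, so only use it when overwriting old results is acceptable.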
Upvotes: 0
Reputation: 369
Add the following code snippet to your driver class:
// Delete output if exists
FileSystem hdfs = FileSystem.get(conf);
if (hdfs.exists(outputDir))
hdfs.delete(outputDir, true);
// Execute job
int code = job.waitForCompletion(true) ? 0 : 1;
System.exit(code);
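Here conf is the job's Configuration, outputDir is the Path you pass to FileOutputFormat.setOutputPath, and job is the Job instance. A minimal sketch of a full driver around the snippet, assuming a WordCount-style job (WordCountDriver and the argument layout are illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordCountDriver.class);
        // mapper/reducer/key/value setup omitted; same as any WordCount driver
        FileInputFormat.addInputPath(job, new Path(args[0]));
        Path outputDir = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputDir);

        // Delete output if exists
        FileSystem hdfs = FileSystem.get(conf);
        if (hdfs.exists(outputDir)) {
            hdfs.delete(outputDir, true);
        }

        // Execute job
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}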
Upvotes: 6