AatG

Reputation: 725

Hadoop complains about attempting to overwrite nonempty destination directory

I'm following Rasesh Mori's instructions to install Hadoop on a multinode cluster, and have gotten to the point where jps shows the various nodes are up and running. I can copy files into HDFS; I did so with

$HADOOP_HOME/bin/hdfs dfs -put ~/in /in

and then tried to run the wordcount example program on it with

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /in /out

but I get the error

15/06/16 00:59:53 INFO mapreduce.Job: Task Id : attempt_1434414924941_0004_m_000000_0, Status : FAILED
Rename cannot overwrite non empty destination directory /home/hduser/hadoop-2.6.0/nm-local-dir/usercache/hduser/appcache/application_1434414924941_0004/filecache/10
java.io.IOException: Rename cannot overwrite non empty destination directory /home/hduser/hadoop-2.6.0/nm-local-dir/usercache/hduser/appcache/application_1434414924941_0004/filecache/10
    at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:716)
    at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:228)
    at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:659)
    at org.apache.hadoop.fs.FileContext.rename(FileContext.java:909)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

How can I fix this?

Upvotes: 2

Views: 3671

Answers (2)

Shinob-X

Reputation: 11

I had the same error with the /hadoop/yarn/local/usercache/hue/filecache/ directory. Running sudo rm -rf /hadoop/yarn/local/usercache/hue/filecache/* solved it.

Upvotes: 0

parshimers

Reputation: 56

This is a bug in Hadoop 2.6.0. It's been marked as fixed, but it still happens occasionally (see https://issues.apache.org/jira/browse/YARN-2624).

Clearing out the appcache directory and restarting the YARN daemons will most likely fix this.
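As a sketch, that cleanup might look like the following on each affected node. The NM_LOCAL_DIR path is an assumption taken from the question's stack trace (yarn.nodemanager.local-dirs often points elsewhere), so adjust it to your cluster's layout before running.

```shell
# Assumed NodeManager local dir from the question's error message;
# check yarn.nodemanager.local-dirs in yarn-site.xml for your cluster.
NM_LOCAL_DIR="${NM_LOCAL_DIR:-$HOME/hadoop-2.6.0/nm-local-dir}"

# Stop the YARN daemons first so the NodeManager is not writing to the cache.
if [ -x "$HADOOP_HOME/sbin/stop-yarn.sh" ]; then
    "$HADOOP_HOME/sbin/stop-yarn.sh"
fi

# Remove every user's stale appcache entries -- these are the
# directories the failed rename could not overwrite.
rm -rf "$NM_LOCAL_DIR"/usercache/*/appcache/*

# Bring YARN back up.
if [ -x "$HADOOP_HOME/sbin/start-yarn.sh" ]; then
    "$HADOOP_HOME/sbin/start-yarn.sh"
fi
```

The daemon restart matters: deleting the cache while a NodeManager is live can race with an in-flight localization and reproduce the same error.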

Upvotes: 4
