Reputation: 1260
I just ran an HDFS demo like this:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class HDFSRemoveDemo {
    public static void main(String[] args) throws Exception {
        Path root = new Path("hdfs://localhost:49000/");
        FileSystem fs = root.getFileSystem(new Configuration());
        fs.create(new Path("/tmp/test"));
        fs.delete(new Path("/tmp/test"), false);
        fs.close();
    }
}
A puzzling exception was thrown:
org.apache.hadoop.hdfs.DFSClient closeAllFilesBeingWritten
SEVERE: Failed to close file /tmp/test
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/test File does not exist. Holder DFSClient_NONMAPREDUCE_-1727094995_1 does not have any open files
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1999)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1990)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2045)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2033)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:805)
    at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.complete(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:4121)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:4022)
    at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:417)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:433)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:369)
When I removed fs.close(), it ran without errors.
The environment is:
hadoop-core -- 1.2.1
jdk -- 1.6.0_21
What happens when the filesystem is closed? Has anyone encountered this problem?
Upvotes: 2
Views: 5734
Reputation: 31
Generally, you should not call fs.close() on a FileSystem you obtained from FileSystem.get(...). FileSystem.get(...) doesn't actually open a "new" FileSystem object; it returns a cached instance that is shared across the whole JVM. When you call close() on that FileSystem, you close it for any upstream process sharing the instance as well.
For example, if you close the FileSystem in a mapper, your MapReduce driver will fail when it tries to close the same FileSystem again during cleanup.
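If you genuinely need an instance that is safe to close (as in a short-lived demo like yours), ask for a private, non-cached one instead of the shared one. Here is a minimal sketch, assuming FileSystem.newInstance(...) is available in your Hadoop version (it should be in hadoop-core 1.2.1, but verify); the class name is just illustrative:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class PrivateFsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // newInstance() bypasses the JVM-wide FileSystem cache, so this
        // instance is ours alone and closing it affects no other caller.
        FileSystem fs = FileSystem.newInstance(URI.create("hdfs://localhost:49000/"), conf);
        try {
            // Close the output stream that create() returns; otherwise
            // fs.close() tries to complete a dangling open file, which is
            // exactly what your stack trace shows (closeAllFilesBeingWritten).
            fs.create(new Path("/tmp/test")).close();
            fs.delete(new Path("/tmp/test"), false);
        } finally {
            fs.close(); // safe here: nobody else shares this instance
        }
    }
}

Alternatively, setting fs.hdfs.impl.disable.cache to true in the Configuration makes FileSystem.get(...) hand back uncached instances, with the same effect.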
Upvotes: 3