Vinod Jayachandran

Reputation: 3898

HDFS write failed org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file

I am trying to write a file in HDFS. Below is my sample code:

    URI uri = URI.create(sURI);
    System.setProperty(HADOOP_USER_NAME, grailsApplication.config.hadoop.user.name);
    Configuration conf = new Configuration();
    conf.set(FS_DEFAULT_NAME, grailsApplication.config.fs.default.name);
    conf.set(DFS_REPLICATION, grailsApplication.config.dfs.replication);
    Path path = new Path(uri);
    FileSystem fs = FileSystem.get(uri, conf);
    FSDataOutputStream outputStream;
    // Append if the file already exists, otherwise create it
    if (fs.exists(path))
        outputStream = fs.append(path);
    else
        outputStream = fs.create(path);

    outputStream.write(request.data.getBytes());
    outputStream.close();

I get the following exception. Please advise what I could be doing wrong.

HDFS write failed org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file /EligibilityDataFeederJob/status.txt. Lease recovery is in progress. Try again later.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3071)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2861)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3145)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3108)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:598)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:415)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

Upvotes: 1

Views: 1779

Answers (1)

madhu

Reputation: 1170

Your code performs an append: outputStream = file.append(new Path(uri));. The append operation generally works more reliably when the replication factor is set to 1, so check the replication factor you are using. This error occurs because the replicas of a block can end up with different generation stamp values, which puts the file's lease into recovery.
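For what it's worth, here is a minimal sketch of how that could look with the standard Hadoop FileSystem API, assuming you pin dfs.replication to "1" before opening the stream and, since the NameNode message explicitly says "Try again later", retry the append with a short backoff while lease recovery finishes. The class name, helper name, and retry budget are illustrative, not from your code.

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsAppendRetry {

        // Hypothetical helper; names and retry budget are illustrative.
        public static void writeWithRetry(String sUri, byte[] data)
                throws IOException, InterruptedException {
            URI uri = URI.create(sUri);
            Configuration conf = new Configuration();
            // A replication factor of 1 keeps block replicas from drifting to
            // different generation stamps, which is what forces lease recovery.
            conf.set("dfs.replication", "1");
            FileSystem fs = FileSystem.get(uri, conf);
            Path path = new Path(uri);

            int attempts = 5;          // arbitrary retry budget
            long backoffMs = 1000L;    // arbitrary initial backoff
            for (int i = 0; i < attempts; i++) {
                try {
                    FSDataOutputStream out =
                            fs.exists(path) ? fs.append(path) : fs.create(path);
                    try {
                        out.write(data);
                    } finally {
                        out.close();
                    }
                    return;            // write succeeded
                } catch (IOException e) {
                    if (i == attempts - 1) {
                        throw e;       // out of retries; surface the failure
                    }
                    // RecoveryInProgressException arrives wrapped in a RemoteException,
                    // and the NameNode message says "Try again later".
                    Thread.sleep(backoffMs);
                    backoffMs *= 2;    // simple exponential backoff
                }
            }
        }
    }

Note that dfs.replication set on the client only affects files this client creates; for a file that already exists you may need to lower its replication first with fs.setReplication(path, (short) 1).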

Upvotes: 1
