Reputation: 929
If you want to know how I solved it, go here.
I have an Oozie workflow with a shell action inside it.
<action name="start_fair_usage">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${JOB_TRACKER}</job-tracker>
        <name-node>${NAME_NODE}</name-node>
        <exec>${start_fair_usage}</exec>
        <argument>${today_without_dash}</argument>
        <argument>${yesterday_with_dash}</argument>
        <file>${start_fair_usage_path}#${start_fair_usage}</file>
        <capture-output/>
    </shell>
    <ok to="END"/>
    <error to="KILL"/>
</action>
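The EL variables referenced in the action (${JOB_TRACKER}, ${NAME_NODE}, ${start_fair_usage}, ${start_fair_usage_path} and the two date parameters) come from the job.properties file used at submission time. A rough sketch of the kind of job.properties that supplies them; every host name, path and date below is a placeholder, not the real cluster value:
cat > job.properties <<'EOF'
# placeholders only -- replace with the real cluster settings
NAME_NODE=hdfs://namenode-host:8020
JOB_TRACKER=resourcemanager-host:8050
oozie.wf.application.path=hdfs://namenode-host:8020/user/comverse/workflows/fair_usage
start_fair_usage=start_fair_usage.sh
start_fair_usage_path=/user/comverse/workflows/fair_usage/start_fair_usage.sh
today_without_dash=20170703
yesterday_with_dash=2017-07-02
EOF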
This action runs a script, start_fair_usage.sh:
echo "today_without_dash="$today_without_dash
echo "yeasterday_with_dash="$yeasterday_with_dash
echo "-----------RUN copy mta-------------"
bash copy_file.sh mta $today_without_dash
echo "-----------RUN copy rcr-------------"
bash copy_file.sh rcr $today_without_dash
echo "-----------RUN copy sub-------------"
bash copy_file.sh sub $today_without_dash
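As far as I understand, an Oozie shell action hands the <argument> values to the script as positional parameters, not as environment variables, so inside start_fair_usage.sh they arrive as $1 and $2. A minimal sketch of the mapping I would expect at the top of the script:
#!/bin/bash
# the <argument> elements of the shell action arrive in order as $1, $2, ...
today_without_dash="$1"      # first  <argument>: ${today_without_dash}
yesterday_with_dash="$2"     # second <argument>: ${yesterday_with_dash}
echo "today_without_dash=$today_without_dash"
echo "yesterday_with_dash=$yesterday_with_dash"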
start_fair_usage.sh in turn runs another script, copy_file.sh:
# directories where the sub, mta and rcr files are kept
echo "directories"
dirs=(
/user/comverse/data/${2}_B
)
# clear the hdfs directory of old files and copy new files
echo "remove old files "${1}
hadoop fs -rm -skipTrash /apps/hive/warehouse/amd.db/fair_usage/fct_evkuzmin/file_${1}/*
for i in $(hadoop fs -ls "${dirs[@]}" | egrep ${1}.gz | awk -F " " '{print $8}')
do
hadoop fs -cp $i /apps/hive/warehouse/amd.db/fair_usage/fct_evkuzmin/file_${1}
echo "copy file - "${i}
done
echo "end copy "${1}" files"
How do I start the workflow so that it can copy files?
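For context, I start the workflow with the Oozie CLI roughly like this (the Oozie server URL is a placeholder):
# submit and start the workflow using the job.properties sketched above
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run
# check its status with the job id printed by the previous command
oozie job -oozie http://oozie-host:11000/oozie -info <job-id>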
Upvotes: 1
Views: 14757
Reputation: 3079
I met the same problem; below is the stack trace:
2017-07-03 18:07:24,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: reportBadBlock encountered RemoteException for block: BP-455427998-10.120.117.100-1466433731629:blk_1140369410_67364810
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1774)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.reportBadBlocks(FSNamesystem.java:6263)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.reportBadBlocks(NameNodeRpcServer.java:798)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.reportBadBlocks(DatanodeProtocolServerSideTranslatorPB.java:272)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28766)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy14.reportBadBlocks(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.reportBadBlocks(DatanodeProtocolClientSideTranslatorPB.java:290)
at org.apache.hadoop.hdfs.server.datanode.ReportBadBlockAction.reportTo(ReportBadBlockAction.java:62)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processQueueMessages(BPServiceActor.java:988)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:727)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:824)
at java.lang.Thread.run(Thread.java:745)
If you are familiar with Hadoop RPC, you will know that the above error log appears when the RPC client (the DataNode) makes a remote RPC call to the RPC server (the NameNode), and the NameNode throws an exception because it is the standby one. So part of the stack trace above is the server-side stack and part of it is the client-side stack.
But the key question is: does this have any bad influence on your HDFS system? The answer is absolutely not.
From BPOfferService.java:
void notifyNamenodeReceivingBlock(ExtendedBlock block, String storageUuid) {
    checkBlock(block);
    ReceivedDeletedBlockInfo bInfo = new ReceivedDeletedBlockInfo(
        block.getLocalBlock(), BlockStatus.RECEIVING_BLOCK, null);
    for (BPServiceActor actor : bpServices) {
        actor.notifyNamenodeBlock(bInfo, storageUuid, false);
    }
}
The bpServices list holds the RPC handles for both namenodes, the active one and the standby one. Obviously the same request is sent to both namenodes, so at least one of them will report the 'Operation category WRITE is not supported in state standby' error while the other succeeds. So, no worries about it.
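If you still want to double-check that HDFS itself is healthy, fsck reports any blocks that are really corrupt (run it against the path you care about; / is only an example and can be expensive on a large cluster):
# an empty corrupt-block list means the standby-namenode messages were harmless noise
hdfs fsck / -list-corruptfileblocks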
In your HDFS HA configuration, if you configure something like this:
<property>
    <name>dfs.ha.namenodes.datahdfsmaster</name>
    <value>namenode1,namenode2</value>
</property>
And unfortunately, if namenode1 is the standby one, you will see a lot of these INFO-level log entries, because namenode1 will still be asked to perform some operations and the NameNode-side checkOperation() will throw the exception, which is logged at INFO level.
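You can verify which of the two configured namenodes is currently active with the haadmin tool; the service ids below must match the ones from your dfs.ha.namenodes.* setting:
# prints "active" or "standby" for each configured namenode
hdfs haadmin -getServiceState namenode1
hdfs haadmin -getServiceState namenode2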
Upvotes: 0