Reputation: 73
Code:
Filter filter = new RowFilter(CompareFilter.CompareOp.EQUAL, new SubstringComparator(args[1]));
Scan scan = new Scan();
scan.setFilter(filter);
ResultScanner res = table.getScanner(scan);
for (Result r : res) { // LINE 49
    ...
}
When I run this jar, I get the following exception:
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=32, exceptions: Mon Dec 26 11:45:00 CST 2016, null,
> java.net.SocketTimeoutException: callTimeout=60000,
> callDuration=60304: row '' on table 'maintable' at
> region=maintable,,1482293923088.ac4aebf960554591febc078e38ef5f08.,
> hostname=comp75,16020,1482598230057, seqNum=5645968
>
> at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
> at com.beidu.hbaseutil.Query.main(Query.java:49) Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Failed after attempts=32, exceptions: Mon Dec 26 11:45:00 CST 2016,
> null, java.net.SocketTimeoutException: callTimeout=60000,
> callDuration=60304: row '' on table 'maintable' at
> region=maintable,,1482293923088.ac4aebf960554591febc078e38ef5f08.,
> hostname=comp75,16020,1482598230057, seqNum=5645968
>
> at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:264)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:199)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:367)
> at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
> ... 1 more Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304: row '' on table 'maintable' at
> region=maintable,,1482293923088.ac4aebf960554591febc078e38ef5f08.,
> hostname=comp75,16020,1482598230057, seqNum=5645968
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: Call to comp75/172.16.249.75:16020 failed on
> local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException:
> Call id=2, waitTime=60002, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1235)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1203)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31751)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 6 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2,
> waitTime=60002, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1177)
> ... 12 more
Could anyone give me some clues? Thanks.
Upvotes: 0
Views: 1104
Reputation: 292
In this case, the scanner you created is carrying out a scan that takes longer than the configured timeout (callTimeout=60000, callDuration=60304).
If you are connected to the cluster, then you need to either optimize your cluster/table/schema for reads or increase the timeout; and if you really need a huge scan, go for MapReduce instead.
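As a rough sketch of the "change the timeout" option (the class name `QueryWithTimeout` and the 300000 ms values are my own choices; the property names are the standard HBase client settings, but verify them against your HBase version), raising the client-side timeouts and shrinking the per-RPC batch could look like this:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;

public class QueryWithTimeout {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Raise the scanner and RPC timeouts from the 60 s default (values in ms).
        conf.set("hbase.client.scanner.timeout.period", "300000");
        conf.set("hbase.rpc.timeout", "300000");

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("maintable"))) {
            Filter filter = new RowFilter(CompareFilter.CompareOp.EQUAL,
                                          new SubstringComparator(args[1]));
            Scan scan = new Scan();
            scan.setFilter(filter);
            // Fetch fewer rows per RPC so each call returns within the timeout.
            // Note: a substring RowFilter still forces a full-table scan server-side,
            // which is why long regions can blow past the timeout in the first place.
            scan.setCaching(100);
            try (ResultScanner res = table.getScanner(scan)) {
                for (Result r : res) {
                    // process r
                }
            }
        }
    }
}
```

Note that raising the timeout only treats the symptom; because the `RowFilter` with a `SubstringComparator` cannot use the row-key ordering, every row in the region is still examined, so a narrower start/stop row range or a redesigned row key is the more durable fix.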
Upvotes: 0