Reputation: 485
I have an Elasticsearch cluster on Azure with 3 master nodes and 3 data nodes. I am trying to execute a bulk operation, but I get failures about the nodes themselves. Here is how I set up my client:
import java.net.InetSocketAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.Settings.Builder;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.shield.ShieldPlugin;

// Build the client settings, optionally enabling the Shield plugin.
final Builder builder = Settings.builder();
final TransportClient.Builder transBuilder = TransportClient.builder();
builder.put("cluster.name", esCluster);
if (esShield) {
    builder.put("shield.user", esUsername + ":" + esPassword);
    transBuilder.addPlugin(ShieldPlugin.class);
}
final Settings settings = builder.build();
TransportClient esClient = transBuilder.settings(settings).build();

// Register every configured host with the transport client.
final String[] hosts = esHost.split(",");
for (String host : hosts) {
    esClient.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress(host, Integer.parseInt(esPort))));
}
Here is the bulk operation:
BulkProcessor bulkProcessor = BulkProcessor.builder(getClient(), new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        LOGGER.info("Going to execute new bulk composed of {" + request.numberOfActions() + "} actions");
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        LOGGER.info("Executed bulk composed of {" + request.numberOfActions() + "} actions");
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        LOGGER.info("Error executing bulk");
        failure.printStackTrace();
    }
}).setBulkActions(docs.size()).setConcurrentRequests(250).build();

for (DBObject doc : docs) {
    bulkProcessor.add(getClient().prepareIndex(indexName, typeName).setSource(doc.toMap()).request());
}
It starts out responding fine for 1,000-record batches like this:
Going to execute new bulk composed of {1001} actions
Executed bulk composed of {1001} actions
Then I started getting the following error:
transport:383 - [Stanley Stewart] failed to get node info for {#transport#-1}{10.0.0.10}{10.0.0.10:9300}, disconnecting...
ReceiveTimeoutTransportException[[][10.0.0.10:9300][cluster:monitor/nodes/liveness] request_id [60] timed out after [5000ms]]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Finally I got the following error:
bulk:148 - [Stanley Stewart] Failed to execute bulk request 1.
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{10.0.0.10}{10.0.0.10:9300}, {#transport#-2}{10.0.0.11}{10.0.0.11:9300}, {#transport#-3}{10.0.0.12}{10.0.0.12:9300}]]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
    at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:436)
    at org.elasticsearch.action.bulk.Retry$AbstractRetryHandler.execute(Retry.java:219)
    at org.elasticsearch.action.bulk.Retry.withAsyncBackoff(Retry.java:72)
    at org.elasticsearch.action.bulk.BulkRequestHandler$AsyncBulkRequestHandler.execute(BulkRequestHandler.java:121)
    at org.elasticsearch.action.bulk.BulkProcessor.execute(BulkProcessor.java:312)
    at org.elasticsearch.action.bulk.BulkProcessor.executeIfNeeded(BulkProcessor.java:303)
    at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:285)
    at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:268)
    at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:264)
    at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:250)
Can someone please help me figure out what's going on and how to fix it?
Upvotes: 0
Views: 2441
Reputation: 1070
It might be because the refresh interval of the index is too low. Try setting the refresh interval of the index to -1 before the bulk process; you can reset it once the bulk process is completed.
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#bulk
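As a rough sketch only, assuming the same 2.x TransportClient as in the question, where "client" and "indexName" stand in for your own client and index and "1s" is the default interval you would restore afterwards, the setting can be toggled around the bulk load like this:

// Sketch only: "client" and "indexName" are placeholders for your own client and index.
// Disable automatic refresh before the bulk load.
client.admin().indices().prepareUpdateSettings(indexName)
    .setSettings(Settings.builder().put("index.refresh_interval", "-1"))
    .get();

// ... run the BulkProcessor and wait for it to finish ...

// Restore the refresh interval once indexing is done ("1s" is the default).
client.admin().indices().prepareUpdateSettings(indexName)
    .setSettings(Settings.builder().put("index.refresh_interval", "1s"))
    .get();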
Upvotes: 0