Narayana Basetty

Reputation: 141

Solr 6.0.0 - SolrCloud java example

I have Solr installed on my localhost.

I started the standard SolrCloud example with the embedded ZooKeeper.

collection: gettingstarted, shards: 2, replication: 2

Indexing 500 records/docs took 115 seconds [localhost testing]. Why does it take this long to process just 500 records? Is there a way to bring this down to milliseconds?

NOTE:

I have tested the same code against a remote Solr instance, with localhost indexing data into the remote Solr [commented out in the Java code below].

I started my myCloudData collection with an external ZooKeeper ensemble (a single standalone ZooKeeper).

2 Solr nodes, 1 standalone ZooKeeper

collection: myCloudData, shards: 2, replication: 2

SolrCloud Java code:

package com.test.solr.basic;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrjPopulatorCloudClient2 {
    public static void main(String[] args) throws IOException, SolrServerException {

    //String zkHosts = "64.101.49.57:2181/solr";
    String zkHosts = "localhost:9983";
    CloudSolrClient solrCloudClient = new CloudSolrClient(zkHosts, true);
    //solrCloudClient.setDefaultCollection("myCloudData");
    solrCloudClient.setDefaultCollection("gettingstarted");
    /*
    // Thread Safe
    solrClient = new ConcurrentUpdateSolrClient(urlString, queueSize, threadCount);
    */
    // Deprecated client
    //HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
    long start = System.nanoTime();
    for (int i = 0; i < 500; ++i) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("cat", "book");
        doc.addField("id", "book-" + i);
        doc.addField("name", "The Legend of the Hobbit part " + i);
        solrCloudClient.add(doc);
        if (i % 100 == 0)
            System.out.println(" Every 100 records flush it");
        solrCloudClient.commit(); // periodically flush
    }
    solrCloudClient.commit(); 
    solrCloudClient.close();
    long end = System.nanoTime();
    long seconds = TimeUnit.NANOSECONDS.toSeconds(end - start);
    System.out.println(" All records are indexed, took " + seconds + " seconds");

 }
}

Upvotes: 2

Views: 3855

Answers (1)

Matt Pearce

Reputation: 484

You are committing after every new document, which is unnecessary. It will run a lot faster if you change the if (i % 100 == 0) block to read:

if (i % 100 == 0) {
    System.out.println(" Every 100 records flush it");
    solrCloudClient.commit(); // periodically flush
}

On my machine, this indexes your 500 records in 14 seconds. If I remove the commit() call from the for loop, it indexes in 7 seconds.

Alternatively, you can add a commitWithinMs parameter to the solrCloudClient.add() call:

solrCloudClient.add(doc, 15000);

This will guarantee your records are committed within 15 seconds, and also increase your indexing speed.
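A further common SolrJ pattern (not mentioned in the answer above, but worth trying) is to send documents in batches instead of one add() call per document, so each HTTP round-trip carries many docs. A minimal sketch, reusing the question's gettingstarted collection and localhost ZooKeeper address, and combining batching with commitWithin; the batch size of 100 is an arbitrary choice:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrjBatchIndexer {
    public static void main(String[] args) throws Exception {
        CloudSolrClient client = new CloudSolrClient("localhost:9983", true);
        client.setDefaultCollection("gettingstarted");

        List<SolrInputDocument> batch = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("cat", "book");
            doc.addField("id", "book-" + i);
            doc.addField("name", "The Legend of the Hobbit part " + i);
            batch.add(doc);
            if (batch.size() == 100) {
                // One request carries 100 docs; commitWithin 15s, no hard commit here
                client.add(batch, 15000);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            client.add(batch, 15000); // flush any remainder
        }
        client.commit(); // one final hard commit
        client.close();
    }
}
```

With 500 documents this makes 5 update requests instead of 500, which usually cuts indexing time far more than tuning commits alone. ConcurrentUpdateSolrClient (already hinted at in the question's comments) achieves a similar effect by queueing and streaming updates on background threads.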

Upvotes: 3
