Kamal Kishore

Reputation: 325

Solr performance with RAM size variation

As per the article below, a Linux machine running Solr should ideally have about 1.5 times as much RAM as the index size. To verify this, I tested Solr performance with different amounts of RAM, keeping the rest of the configuration (solid-state drives, 8-core processor, 64-bit OS) the same in both cases.

https://wiki.apache.org/solr/SolrPerformanceProblems
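
For reference, a quick way to check this rule of thumb on the machine itself; the index path below is hypothetical and should be replaced with the actual core's data directory:

# Hypothetical path; point this at your Solr core's index directory.
du -sh /var/solr/data/mycore/index
# Rule of thumb from the article: total RAM ~ 1.5 x index size,
# with most of the difference left free for the OS disk cache.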

1) Initially, the Linux machine had 32 GB of RAM, of which I allocated 14 GB to Solr:

export CATALINA_OPTS="-Xms2048m -Xmx14336m -XX:+UseConcMarkSweepGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:./logs/info_error/tomcat_gcdetails.log"
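
As a sanity check that the heap setting actually took effect, something along these lines works on most JDKs (the pgrep lookup is illustrative and assumes a single Tomcat process):

# MaxHeapSize is the JVM's internal name for the -Xmx value, in bytes.
jcmd $(pgrep -f catalina) VM.flags | tr ' ' '\n' | grep MaxHeapSize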

The average search time for 1000 queries was 300 ms.
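
The timing itself can be reproduced with a simple loop; the Solr URL, core name, and queries.txt file here are assumptions:

# Assumes one URL-safe query per line in queries.txt.
while read q; do
  curl -s -o /dev/null -w "%{time_total}\n" "http://localhost:8080/solr/mycore/select?q=$q"
done < queries.txt | awk '{ sum += $1 } END { printf "avg: %.0f ms\n", sum / NR * 1000 }'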

2) After that, the RAM was increased to 68 GB, of which I allocated 40 GB to Solr. Strangely, the average search time for the same set of queries was now 3000 ms.

I then reduced Solr's allocated RAM to 25 GB on the 68 GB machine, but the search time was still higher than in the first case.

What am I missing? Please suggest.

Upvotes: 2

Views: 1662

Answers (1)

Ronald

Reputation: 2932

In my opinion, your Java heap size is too high.

RAM is very important for Solr, but mainly for keeping the index files in memory, which happens outside the Java heap space.

By default, Solr uses MMapDirectory, which loads the index files into the OS disk cache, from where they are mapped into the virtual memory of the Solr process. The important point, again, is that this happens outside the Java heap space.
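
You can see this on a running system by checking how much memory the OS is using for the disk cache, and whether the index files are memory-mapped into the Solr process (the pgrep lookup is illustrative):

free -g   # the "buff/cache" (or "cached") column is the OS disk cache
# List memory-mapped files in the Solr/Tomcat process; index files
# show up here when MMapDirectory is in use.
pmap -x $(pgrep -f catalina) | grep -i index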

This is also stated in the documentation you reference:

A major driving factor for Solr performance is RAM. Solr requires sufficient memory for two separate things: One is the Java heap, the other is "free" memory for the OS disk cache.

...

For index updates, Solr relies on fast bulk reads and writes. For search, fast random reads are essential. The best way to satisfy these requirements is to ensure that a large disk cache is available.

To understand this better, read http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

So how big should the Java heap space be? I would start with just 2 GB and then watch the GC log.
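
Applied to the CATALINA_OPTS from your question, that would be something along these lines (a sketch; the GC and logging flags are kept as you had them):

export CATALINA_OPTS="-Xms2048m -Xmx2048m -XX:+UseConcMarkSweepGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:./logs/info_error/tomcat_gcdetails.log"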

This is one of those questions that has no generic answer. You want a heap that's large enough so that you don't have OutOfMemory (OOM) errors and problems with constant garbage collection, but small enough that you're not wasting memory or running into huge garbage collection pauses. The long version: You'll have to experiment.

Upvotes: 2
