dtosato

Reputation: 129

Memory usage keeps growing while writing the Lucene.Net index

I am opening this discussion because, googling about Lucene.Net usage, I have not found anything really useful. The issue is simple: I am experiencing a problem in building and updating a Lucene.Net index. In particular, its memory usage keeps growing even though I set SetRAMBufferSizeMB to 256, SetMergeFactor to 100 and SetMaxMergeDocs to 100000. Moreover, I carefully call the Close() and Commit() methods every time the index is used.

To make Lucene.Net work for my data I started from this tutorial: http://www.lucenetutorial.com/lucene-in-5-minutes.html
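For reference, my setup is roughly the following (a simplified sketch, not my actual code; the index path, the field name and LoadDocuments() are placeholders):

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;

// open an on-disk index (the path is a placeholder)
FSDirectory dir = FSDirectory.Open(new System.IO.DirectoryInfo(@"C:\my-index"));
IndexWriter writer = new IndexWriter(dir,
    new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29),
    true, IndexWriter.MaxFieldLength.UNLIMITED);

writer.SetRAMBufferSizeMB(256);   // flush buffered documents at 256 MB
writer.SetMergeFactor(100);       // merge less often, in bigger batches
writer.SetMaxMergeDocs(100000);   // cap documents per merged segment

foreach (string text in LoadDocuments())   // LoadDocuments() is a placeholder
{
    Document doc = new Document();
    doc.Add(new Field("content", text, Field.Store.NO, Field.Index.ANALYZED));
    writer.AddDocument(doc);
}
writer.Commit();
writer.Close();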

It seems that for 10^5 to 10^6 documents, 1.8GB of RAM is necessary. So why do I have to set the SetRAMBufferSizeMB parameter if the actual RAM usage is seven times higher? Does anyone really know how to keep the memory usage bounded?

Moreover, I observed that to deal with 10^5 or 10^6 documents it is necessary to compile Lucene.Net for an x64 platform. Indeed, if I compile the code for an x86 platform, the indexing crashes systematically at around 1.2GB of RAM. Is anyone able to index the same number of documents (or even more) using less RAM? With which hardware and software setup? My environment configuration is the following:

- OS: Windows 7, 32/64 bits
- SW: .NET Framework 4.0
- HW: a 12-core Xeon workstation with 6GB of RAM
- Lucene.Net release: 2.9.4g (current stable)
- Lucene.Net directory type: FSDirectory (the index is written to disk)


OK, I tested the code following your advice on reusing Document/Field instances, but it performs exactly the same in terms of memory usage. Here are a few debug lines for some parameters I tracked while indexing 1,000,000 documents:

DEBUG - BuildIndex – IndexWriter - RamSizeInBytes 424960B; index process dimension 1164328960B.  4% of the indexing process.
DEBUG - BuildIndex – IndexWriter - RamSizeInBytes 457728B; index process dimension 1282666496B.  5% of the indexing process.
DEBUG - BuildIndex – IndexWriter - RamSizeInBytes 457728B; index process dimension 1477861376B.  6% of the indexing process.

The index process dimension is obtained from the working set of the process, as shown in the debugging code at the end of this post.

It is easy to observe how fast the process grows in RAM (~1.5GB at 6% of the indexing process), even though the RAM buffer used by the IndexWriter is more or less unchanged. So the question is: is it possible to explicitly limit the RAM usage of the indexing process? I do not care if performance drops during the search phase, or if I have to wait a while for a complete index, but I need to be sure that the indexing process does not hit an OOM or stack overflow error while indexing a large number of documents. How can I guarantee that if it is impossible to limit the memory usage?

For completeness, I post the code used for the debugging:

// get the current process
Process currentProcess = System.Diagnostics.Process.GetCurrentProcess();
// RAM currently buffered by the IndexWriter (documents not yet flushed)
long totalBytesOfIndex = writer.RamSizeInBytes();
// physical memory (working set) used by the whole indexing process
long totalBytesOfMemoryUsed = currentProcess.WorkingSet64;

Upvotes: 3

Views: 3037

Answers (2)

dtosato

Reputation: 129

Finally, I found the bug. It is in the ItalianAnalyzer (the analyzer for the Italian language), which was built on Luca Gentili's contribution (http://snowball.tartarus.org/algorithms/italian/stemmer.html). Inside the ItalianAnalyzer class, a file containing the stop words was opened several times, and it was never closed after use. This was the cause of my OOM problem. With this bug fixed, Lucene.Net is lightning fast both at building the index and at searching.
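The pattern was essentially this (an illustrative sketch, not the actual ItalianAnalyzer source):

using System.Collections.Generic;
using System.IO;

// Illustrative sketch of the leak: the stop-word file was opened on every
// analyzer construction and the stream was never disposed, so handles and
// buffers accumulated over time.
private static ISet<string> LoadStopWords(string path)
{
    // buggy version (what the analyzer effectively did):
    //   StreamReader reader = new StreamReader(path);  // never closed
    // fixed version: 'using' closes the stream even if an exception is thrown
    HashSet<string> stopWords = new HashSet<string>();
    using (StreamReader reader = new StreamReader(path))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            stopWords.Add(line.Trim());
    }
    return stopWords;
}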

Upvotes: 5

Jf Beaulac

Reputation: 5246

SetRAMBufferSizeMB is just one of the ways to determine when the IndexWriter flushes to disk. It will flush segment data once the configured number of MB has been written to memory and is ready to be flushed to disk.

There are lots of other objects in Lucene that also use memory and have nothing to do with the RAM buffer.
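For example (a sketch against the 2.9.4 IndexWriter API, where writer is your IndexWriter):

// The RAM buffer only decides when buffered documents are flushed to a new
// on-disk segment; it is not a cap on the total memory of the process.
writer.SetRAMBufferSizeMB(256.0);   // flush once ~256 MB of documents are buffered
// or flush by document count instead of by buffered RAM:
// writer.SetMaxBufferedDocs(10000);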

Usually, the first thing to try when you hit OOM while indexing is to reuse Document/Field instances. If you index from multiple threads, make sure you only reuse them on the same thread. I have run into OOM because of this myself, when the underlying IO is blazing fast and the .NET garbage collector just can't keep up with all the small objects being created.
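The reuse pattern looks roughly like this (a sketch; the field name and GetTexts() are placeholders, writer is your IndexWriter, and Field.SetValue is the 2.9.4 port of Java Lucene's setValue):

// create the Document and Field once, outside the loop...
Document doc = new Document();
Field contentField = new Field("content", "",
    Field.Store.NO, Field.Index.ANALYZED);
doc.Add(contentField);

foreach (string text in GetTexts())   // GetTexts() is a placeholder
{
    // ...and only swap the value per document instead of allocating new objects
    contentField.SetValue(text);
    writer.AddDocument(doc);
}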

Upvotes: 0
