Karunakar

Reputation: 2349

Elasticsearch heap getting filled for simple search

I executed the following simple search request: http://localhost:9200/_search?size=100000

My heap usage suddenly increased. Since the query doesn't contain any filter fields, aggregations, etc., there is no chance for the field data cache or the filter cache to fill the heap. I don't understand the reason behind the heap usage increase.

I suspect it is because of the _source field of the documents, but I am not sure. I would like to know what is happening behind the scenes.

Upvotes: 0

Views: 123

Answers (1)

Andrei Stefan

Reputation: 52368

No, it's because of size=100000. Elasticsearch will allocate memory to hold that number of documents in the result, and 100000 is a big number. If you lower it to 1000, for example, you probably won't fill the heap as much.

In any case, using size=100000 is a strong no-no. Don't do that; it is not how Elasticsearch is supposed to work. Elasticsearch gives you back results in a paginated way, page by page, or you can use scan&scroll. Never do size=LARGE_NUMBER: with a high enough number you can bring down the cluster by making each node run out of memory and throw an OutOfMemoryError.
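As a rough sketch of both alternatives against the same endpoint as in the question (http://localhost:9200): the page size, the {"match_all": {}} query, and the use of Python's requests library against the plain REST API are assumptions for illustration, not something prescribed by the answer.

    import requests

    ES = "http://localhost:9200"   # same endpoint as in the question
    PAGE_SIZE = 1000               # assumed page size; keep it modest

    # Option 1: classic from/size pagination (fine for shallow paging).
    def paged_search(query, pages=3):
        for page in range(pages):
            resp = requests.get(f"{ES}/_search", json={
                "from": page * PAGE_SIZE,
                "size": PAGE_SIZE,
                "query": query,
            })
            hits = resp.json()["hits"]["hits"]
            if not hits:
                break
            yield from hits

    # Option 2: the scroll API, meant for walking a large result set in batches.
    def scrolled_search(query):
        resp = requests.get(f"{ES}/_search", params={"scroll": "1m"},
                            json={"size": PAGE_SIZE, "query": query})
        body = resp.json()
        scroll_id, hits = body["_scroll_id"], body["hits"]["hits"]
        while hits:
            yield from hits
            resp = requests.get(f"{ES}/_search/scroll",
                                json={"scroll": "1m", "scroll_id": scroll_id})
            body = resp.json()
            scroll_id, hits = body["_scroll_id"], body["hits"]["hits"]
        # Release the scroll context on the cluster when done.
        requests.delete(f"{ES}/_search/scroll", json={"scroll_id": scroll_id})

    # Example usage:
    for hit in paged_search({"match_all": {}}, pages=2):
        print(hit["_id"])

Either way, only one page of PAGE_SIZE documents is held in memory at a time, instead of asking a single request to materialize 100000 hits at once.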

Upvotes: 3
