Vivek

Reputation: 326

Elasticsearch stopping with OutOfMemoryError

We are setting up ELK for our enterprise and everything is set up. The hardware / software configuration is as follows:

Total RAM: 192 GB
JDK: Java HotSpot(TM) 64-Bit Server VM

For ingesting the data files we are using Logstash with the Filebeat plugin. The indices are built properly and things seemed to be working until we got the following error:

java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) ~[?:1.8.0_144]
        at java.lang.Thread.start(Thread.java:717) ~[?:1.8.0_144]
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957) ~[?:1.8.0_144]
        at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025) ~[?:1.8.0_144]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) ~[?:1.8.0_144]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]

Our first impression was that it could have been caused by too narrow -Xms and -Xmx settings, so we changed them to 20g,

but the problem persists. Elasticsearch starts normally, rebuilds the indices, and then ...

Based on a few threads - we tried the following:

  1. Changed the Xss setting in the jvm.options file from 1m to 228k
  2. Increased the ulimit to 65536

but nothing seems to work.
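One thing worth checking is which limits the running JVM actually inherited, since `ulimit` in your login shell can differ from what the service process got. A Linux-only sketch, using the current shell's PID as a stand-in (substitute the Elasticsearch PID for `$$`):

```shell
# /proc/<pid>/limits lists the limits a process actually inherited, which can
# differ from `ulimit` output in an interactive shell. Replace $$ with the
# Elasticsearch PID on your box; $$ (the current shell) is used here only so
# the command runs anywhere.
awk '/^Max processes|^Max open files/ {print}' /proc/$$/limits
```

If the "Max processes" soft limit shown here is still 1024, the value you raised with `ulimit` never reached the Elasticsearch process.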

Upvotes: 0

Views: 100

Answers (3)

Vivek

Reputation: 326

I agree that it has nothing to do with the heap size, as 20g is more than enough for any decent application, and the error itself says unable to create new native thread.

My problem was solved (for now) by raising 'max user processes' from 1024 to 65536.
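For anyone hitting the same wall, a minimal sketch of how that limit can be raised persistently on Linux. The user name `elasticsearch` and the file paths are assumptions; they depend on your distribution and on how the service is started:

```
# /etc/security/limits.conf (or a file under /etc/security/limits.d/):
# raise 'max user processes' (nproc) for the user running the node.
# NOTE: the user name 'elasticsearch' is an assumption.

elasticsearch  soft  nproc  65536
elasticsearch  hard  nproc  65536

# If Elasticsearch runs under systemd, limits.conf is ignored for the
# service; set the equivalent in a drop-in unit instead:
#
#   [Service]
#   LimitNPROC=65536
#
# then: systemctl daemon-reload && systemctl restart elasticsearch
```

Remember that a new login (or a service restart) is needed before the changed limit takes effect.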

Upvotes: 0

alr

Reputation: 1804

This has nothing to do with heap: as the error message indicates, the JVM is not able to create a native operating system thread. Please ensure via ulimit that new processes can be started.

On the other hand, this could also point to a misconfiguration (e.g. wrongly configured thread pools that try to spawn too many threads).
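To see how close the machine is to the limit, compare the number of threads currently in use against the process limit; when the two meet, thread creation fails with exactly this OutOfMemoryError. A Linux-only sketch:

```shell
# ps -eLf prints one line per lightweight process (thread); subtract the
# header line to get the system-wide thread count, then compare against the
# per-user process limit.
echo "threads in use:     $(ps -eLf | tail -n +2 | wc -l)"
echo "max user processes: $(ulimit -u)"
```

If the first number sits near the second while Elasticsearch is rebuilding its indices, raising the limit (or reining in the thread pools) is the fix.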

Upvotes: 1

srk

Reputation: 91

You may try increasing Xmx to 30GB. Also enable JMX on the Elasticsearch JVM to check what is consuming the most heap space.
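For reference, a sketch of the standard JVM flags that enable remote JMX, to be appended to config/jvm.options; the port number and the disabled authentication/SSL are assumptions suitable only for a trusted network:

```
# Enable remote JMX so JConsole or VisualVM can inspect the heap.
# Port 9010 is an arbitrary choice; auth/SSL disabled for simplicity
# (do not do this on an untrusted network).
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```

After a restart, point JConsole at host:9010 and watch the heap and thread counts while the indices rebuild.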

Also wondering: how many indices do you have, and how much disk space do those indices occupy?

Upvotes: 0
