Reputation: 43
I have a 3-node cluster with 1 master and 2 data nodes, each set for 1TB. I have increased both -Xms24g and -Xmx24g to half my RAM (48GB total). I then successfully uploaded a 140MB file from Kibana to ELK through the GUI, after raising the upload limit from 100MB to 1GB. But when I tried to upload the same file with only Logstash, the process got stuck and broke Elasticsearch. My pipeline is fairly simple:
input {
  file {
    path => "/tmp/*_log"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
Small files work great, but I'm not able to push big files; the log contains 1 million rows. I have set all fields in /etc/security/limits.conf to unlimited. Any ideas what I'm missing?
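For reference, a minimal sketch of the kind of entries I mean in /etc/security/limits.conf (the elasticsearch and logstash user names are assumptions based on how my services run; yours may differ):

# assumed service users; "-" applies to both soft and hard limits
elasticsearch  -  memlock  unlimited
elasticsearch  -  nproc    unlimited
logstash       -  memlock  unlimited
logstash       -  nproc    unlimited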
Upvotes: 0
Views: 1492
Reputation: 43
You will need to increase the memory sizing in /etc/logstash/jvm.options.
The recommended heap size for typical ingestion scenarios is no less than 4GB and no more than 8GB.
CPU utilization can increase unnecessarily if the heap size is too low, resulting in the JVM constantly garbage collecting. You can check for this issue by doubling the heap size to see if performance improves. Do not increase the heap size past the amount of physical memory. Some memory must be left to run the OS and other processes. As a general guideline for most installations, don’t exceed 50-75% of physical memory. The more memory you have, the higher percentage you can use.
Set the minimum (Xms) and maximum (Xmx) heap allocation size to the same value to prevent the heap from resizing at runtime, which is a very costly process.
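For example, the relevant lines in /etc/logstash/jvm.options might look like this (8g is only an illustration; choose a value in the recommended range that fits your machine):

# keep min and max heap equal so the heap is never resized at runtime
-Xms8g
-Xmx8g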
You can make more accurate measurements of the JVM heap by using either the jmap command-line utility distributed with Java or VisualVM.
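As a rough sketch (the PID lookup is just one way to find the Logstash process, and on JDK 9 and later the heap summary moved from jmap -heap to jhsdb jmap):

# find the Logstash JVM's process ID
jps -l | grep -i logstash

# print a heap summary for that process
jmap -heap <logstash-pid>                 # JDK 8
jhsdb jmap --heap --pid <logstash-pid>    # JDK 9 and later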
Upvotes: 1