Shawn

Reputation: 2799

Very Low Performance With LogStash

I have a LogStash configuration coupled with a Redis broker which works perfectly with light traffic (10 messages/second). Each ELK service runs on its own independent 2GB server.

Tomcat (Log4j) -> LogStash -> Redis -> LogStash -> ES -> Kibana. I now have a new requirement to log about 200 messages/second.

Tomcat -> LogStash -> Redis works fast enough (250+ messages/second). However, the second LogStash doesn't appear to be fast enough to consume 100+ messages/second from Redis; it's currently doing about 10 messages/second. Could this be due to my message size (I'm logging a 20 KB XML string in each message)?
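
One way to confirm that the backlog really builds up in front of the second LogStash is to watch the list length in Redis while everything is running (same host, port, and key as in the config below); if it keeps growing, the consuming side is the bottleneck:

    # watch the queue depth of the list the second LogStash consumes from
    redis-cli -h 192.168.0.231 -p 6379 llen lsqalogs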

My second lumberjack.conf looks like the following after much tweaking (I have also applied -w 10 on the command line for parallel processing). I also commented out all multiline filters, as they are not thread-safe.

    input {
      # Read Log4j messages from the Redis broker (general errors).
      redis {
        host => "192.168.0.231"
        port => 6379
        type => "qalogs"
        data_type => "list"
        key => "lsqalogs"
        batch_count => 100
        threads => 8
        codec => "json"
      }
    }

    output {
      if [type] == "avail" {
        if [push_index] {
          elasticsearch {
            index => "%{push_index}-%{push_type}-%{+YYYY.MM.dd}"
            hosts => ["192.168.0.230:9200"]
            flush_size => 50
            manage_template => false
            workers => 40
          }
        } else {
          elasticsearch {
            index => "log-%{type}-%{+YYYY.MM.dd}"
            hosts => ["192.168.0.230:9200"]
            flush_size => 50
            manage_template => false
            workers => 40
          }
        }
      }
    }
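
For reference, this instance is started with the extra workers flag mentioned above (the config path here is just illustrative; the actual install layout may differ):

    # start the consuming LogStash with 10 filter workers
    bin/logstash -f lumberjack.conf -w 10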

I have been working on this for a few months now and have automated the complete stack installation. The only problem I have is that the performance is terrible.

The second LogStash server runs with a load average of 0.3, so I believe it can certainly handle 100+ messages/second.

I'm using LogStash 2.1, ES 2.1, and Redis 3 on separate 2GB servers. I would really appreciate some light on this.

Thanks In Advance.

Upvotes: 2

Views: 1370

Answers (1)

Will Barnwell

Reputation: 4089

Try reducing your workers in your elasticsearch output.

From a blog post by Elastic on Logstash optimization (emphasis mine):

...modify configuration variables on your output (most notably the “workers” option on the Elasticsearch output which will probably be best at the number of cores your machine has)...

Another point raised in the article is that the bottleneck may be Elasticsearch itself. 2GB of memory is tiny for an Elasticsearch node, and the problem may lie in a resource-starved Elasticsearch rather than a misconfigured Logstash.
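
As a rough sketch, assuming the LogStash box has 2 CPU cores (adjust to your actual core count), the output block would look something like this:

    elasticsearch {
      index => "log-%{type}-%{+YYYY.MM.dd}"
      hosts => ["192.168.0.230:9200"]
      flush_size => 50
      manage_template => false
      workers => 2  # roughly one worker per CPU core instead of 40
    }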

Upvotes: 3
