Swaraj Giri

Reputation: 4037

What would be a good approach for sending logs from multiple servers to a centralized logging server?

I am trying to send logs, lots of logs, from a PHP application hosted on multiple EC2 instances.

Instead of going with the standard approach of running logstash-forwarder on each server to ship the logs to a central logging server, where logstash parses them and feeds them into elasticsearch, would it be a better approach to write the Apache/Nginx logs to syslog and have rsyslog send them to logstash, which then feeds them into elasticsearch?

Long question short: which would be the better approach?

  1. Apache/Nginx -> logstash-forwarder -> logstash -> redis (optional) -> elasticsearch

    OR

  2. Apache/Nginx -> syslog -> rsyslog -> logstash -> redis (optional) -> elasticsearch
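
For reference, the shipper side of option 1 is just a small JSON file for logstash-forwarder; the hostnames, ports, and paths below are placeholders, not my actual setup:

    {
      "network": {
        "servers": [ "logs.example.com:5043" ],
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
        "timeout": 15
      },
      "files": [
        { "paths": [ "/var/log/nginx/access.log" ],
          "fields": { "type": "nginx-access" } }
      ]
    }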

Upvotes: 1

Views: 388

Answers (2)

Alain Collins

Reputation: 16362

I prefer option one. It has fewer moving parts, would all be covered by a support contract that you could buy from Elasticsearch, and works well. I have well over 500 servers configured like this now, with thousands more planned for this year.

logstash will throttle if elasticsearch is busy. logstash-forwarder will throttle if logstash is busy. With that, there's no need for a broker.
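
A minimal central logstash config for option 1 looks something like this (the certificate paths and the nginx type tag are illustrative; adjust them to your setup):

    input {
      lumberjack {
        port            => 5043
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key         => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }
    filter {
      # parse the combined access-log format shipped by the forwarders
      if [type] == "nginx-access" {
        grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
      }
    }
    output {
      elasticsearch { host => "localhost" }
    }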

Note that you would need a broker if you used an input that didn't throttle (e.g. tcp, snmptrap, netflow, etc).
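
If you do end up with such an input, the usual pattern is to split logstash into a shipper that parks events in redis and an indexer that drains the list at its own pace. A sketch, with the hostnames and list key as placeholders:

    # shipper: accept raw tcp events, park them in redis
    input  { tcp { port => 5000 } }
    output { redis { host => "redis.example.com" data_type => "list" key => "logstash" } }

    # indexer (separate process): drain redis as fast as elasticsearch allows
    input  { redis { host => "redis.example.com" data_type => "list" key => "logstash" } }
    output { elasticsearch { host => "localhost" } }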

Upvotes: 2

Quentin

Reputation: 455

In my opinion:

  • The first approach is simpler and works really well for a small infrastructure, but it is less scalable.
  • The second approach is more complex, but more efficient and scalable; it is useful in a big infrastructure (I use it in mine: about 500 servers).
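
As a sketch of the second approach (the hostname and port are examples only): each server forwards everything with one rsyslog rule, and the central logstash listens with its syslog input:

    # on each server, e.g. /etc/rsyslog.d/60-forward.conf
    # (@@ = tcp, a single @ would be udp)
    *.* @@logstash.example.com:5514

    # on the central server, logstash config
    input  { syslog { port => 5514 } }
    output { elasticsearch { host => "localhost" } }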

Upvotes: 0
