Reputation: 4037
I am trying to send logs, lots of logs, from a PHP application hosted on multiple EC2 instances.
Instead of going with the standard approach of installing logstash-forwarder on each server to ship the logs to a central logging server, where logstash parses them and feeds them into elasticsearch, would it be a better approach to write the Apache/Nginx logs to syslog and have rsyslog forward them to logstash, which then feeds them into elasticsearch?
Long question short: which would be the better approach?
Apache/Nginx -> logstash-forwarder -> logstash -> redis (optional) -> elasticsearch
OR
Apache/Nginx -> syslog -> rsyslog -> logstash -> redis (optional) -> elasticsearch
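For concreteness, here is roughly what I have in mind for option two; the file path, tag, hostname, and port below are just placeholders:

    # rsyslog on each web server: tail the nginx access log with imfile
    # and forward it to logstash over TCP (@@ means TCP, @ would be UDP)
    $ModLoad imfile
    $InputFileName /var/log/nginx/access.log
    $InputFileTag nginx-access:
    $InputFileStateFile stat-nginx-access
    $InputRunFileMonitor
    :syslogtag, isequal, "nginx-access:" @@logs.example.com:5514

    # logstash on the logging server, listening for the rsyslog stream
    input {
      tcp {
        port => 5514
        type => "rsyslog"
      }
    }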
Upvotes: 1
Views: 388
Reputation: 16362
I prefer option one. It has fewer moving parts, would all be covered by a support contract that you could buy from Elasticsearch, and works well. I have well over 500 servers configured like this now, with thousands more planned for this year.
logstash will throttle if elasticsearch is busy, and logstash-forwarder will throttle if logstash is busy. Since that back-pressure propagates down the whole chain, there's no need for a broker.
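A minimal sketch of that chain; the hostnames, ports, and certificate paths are placeholders, and exact option names vary a little between versions. On each web server, /etc/logstash-forwarder.conf:

    {
      "network": {
        "servers": [ "logs.example.com:5043" ],
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
        "timeout": 15
      },
      "files": [
        { "paths": [ "/var/log/nginx/access.log" ],
          "fields": { "type": "nginx-access" } }
      ]
    }

On the logging server, the matching logstash pipeline:

    # lumberjack is the protocol logstash-forwarder speaks
    input {
      lumberjack {
        port => 5043
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }
    output {
      elasticsearch { host => "localhost" }
    }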
Note that you would need a broker if you used an input that doesn't throttle its sender (e.g. tcp, snmptrap, netflow).
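If you do pick up a non-throttling input later, the broker just sits between two logstash stages. A rough sketch with redis (hostname and key are placeholders):

    # shipper stage: tcp senders can't be told to slow down, so buffer
    # everything into a redis list instead of indexing directly
    input  { tcp { port => 5000 } }
    output { redis { host => "broker.example.com" data_type => "list" key => "logstash" } }

    # indexer stage: drain redis at whatever rate elasticsearch can absorb
    input  { redis { host => "broker.example.com" data_type => "list" key => "logstash" } }
    output { elasticsearch { host => "localhost" } }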
Upvotes: 2
Reputation: 455
In my opinion:
Upvotes: 0