Reputation: 731
Could anyone suggest the best pattern for gathering metrics from a cluster of nodes (each node is a Tomcat Docker container with a Java app)?
We're planning to use the ELK stack (Elasticsearch, Logstash, Kibana) as the visualization tool, but the question for us is how the metrics should be delivered to Kibana.
We're using the Dropwizard Metrics library, which provides per-instance metrics (gauges, timers, histograms).
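For context, here is a minimal sketch of how we register such metrics with Dropwizard Metrics (the metric names and class are illustrative, not our actual code):

```java
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class AppMetrics {
    // One registry per JVM, so everything registered here is per-instance.
    static final MetricRegistry registry = new MetricRegistry();

    // Timer for API response times (illustrative name).
    static final Timer apiTimer = registry.timer("api.requests");

    static String handleRequest() {
        // Timer.Context is Closeable, so try-with-resources records the duration.
        try (Timer.Context ignored = apiTimer.time()) {
            return "response"; // real handler work would go here
        }
    }

    public static void main(String[] args) {
        // Gauge for current heap usage - a metric that only makes sense per instance.
        registry.register("jvm.heap.used", (Gauge<Long>) () ->
                Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory());

        handleRequest();
    }
}
```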
Some metrics obviously should be gathered per instance (e.g. CPU, memory, etc.) - it doesn't make sense to aggregate them per cluster.
But for metrics such as average API response times or database call durations, we want a clear global picture - i.e. not per individual instance.
And this is where we're hesitating. Should we:
Thanks in advance,
Upvotes: 1
Views: 359
Reputation: 10859
You will want to use Metricbeat. It has modules for system-level metrics, the Docker API, and Dropwizard. It will collect the events for you (without any pre-aggregation).
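For example, a minimal metricbeat.yml sketch enabling those three modules could look like this (the hosts, metrics path, namespace, and periods are placeholders you'd adapt to your setup):

```yaml
metricbeat.modules:
  # Host-level metrics (CPU, memory) per node
  - module: system
    metricsets: ["cpu", "memory"]
    period: 10s

  # Per-container metrics from the Docker API
  - module: docker
    metricsets: ["container", "cpu", "memory"]
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s

  # Application metrics scraped from the Dropwizard metrics servlet
  - module: dropwizard
    metricsets: ["collector"]
    hosts: ["localhost:8080"]
    metrics_path: /metrics/metrics
    namespace: myapp
    period: 10s

# Ship the raw events to Elasticsearch; aggregation happens at query time in Kibana.
output.elasticsearch:
  hosts: ["localhost:9200"]
```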
For the aggregation and visualization I'd use Kibana's Time Series Visual Builder, where you can aggregate per container, node, service, and so on. It should be flexible enough to give you the right data granularity.
Upvotes: 2