Velkan

Reputation: 7582

How to continuously delete old fluentd logs from elasticsearch?

A Fluentd log collector writes to Elasticsearch, which eventually fills up the disk. How can the logs be limited to, say, the last month?

Part of the Fluentd config (using Kubernetes):

<match kubernetes.**>
  @type elasticsearch_dynamic
  host elasticsearch.default.svc.cluster.local
  port 9200
  include_tag_key true
  logstash_format true
  logstash_prefix kubernetes-${record['kubernetes']['pod_name']}
</match>

"Curator" for Elasticsearch, can delete "indexes", but I don't know what indexes Fluentd creates, when it stops using them and what does it mean to delete an index when there are still useful new logs in it?

Upvotes: 0

Views: 3609

Answers (1)

untergeek

Reputation: 863

Curator will delete indices for you, regardless of whether Logstash, Fluentd, or some other app created them. This example will work with the index pattern you provided in the above comments.

---
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for kubernetes-elasticsearch-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: true
      disable_action: true
    filters:
      - filtertype: pattern
        kind: prefix
        value: kubernetes-elasticsearch-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
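To actually run the action file above, Curator also needs a client configuration file; a minimal sketch, with the host and port taken from the Fluentd config in the question (filenames are assumptions):

```yaml
# config.yml -- Curator client configuration
client:
  hosts:
    - elasticsearch.default.svc.cluster.local
  port: 9200
logging:
  loglevel: INFO
```

Then schedule something like `curator --config config.yml delete_indices.yml` (e.g. from a daily cron job or Kubernetes CronJob). Note that `disable_action: true` makes the action a no-op safety default; set it to `false` once you have verified the filters select only the indices you expect.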

Upvotes: 1
