YoussHark

Reputation: 608

Ship filebeat logs to logstash to index with docker metadata

I am trying to index documents in Elasticsearch with the help of Filebeat and Logstash. Here is the filebeat.yml:

filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m

processors:
  # decode the log field (sub JSON document) if JSON encoded, then map its fields to Elasticsearch fields
- decode_json_fields:
    fields: ["log", "message"]
    target: ""
    # overwrite existing target elasticsearch fields while decoding json fields
    overwrite_keys: true
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# setup filebeat to send output to logstash
output.logstash:
  hosts: ["xxx.xx.xx.xx:5044"]

# Write Filebeat own logs only to file to avoid catching them with itself in docker log files
logging.level: info
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
ssl.verification_mode: none

And here is the logstash.conf:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}

output {
  stdout {
    codec => dots
  }
  elasticsearch {
    hosts => "http://xxx.xx.xx.x:9200"
    index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+xxxx.ww}"
  }
}

I am trying to index using the Docker service name so the indices would be more readable and clearer than the usual pattern we see all the time, like "filebeat-xxxxxx.some-date". I tried several things:

- index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+xxxx.ww}"
- index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+YYYY.MM}"
- index => "%{[docker][swarm][service][name]}-%{+xxxx.ww}"

But nothing worked. What am I doing wrong? Maybe I am doing something wrong or missing something in the filebeat.yml file. It could be that too. Thanks for any help or any lead.

Upvotes: 3

Views: 2789

Answers (1)

justkind

Reputation: 149

Looks like you're unsure of which docker metadata fields are being added. It might be a good idea to first get successful indexing with the default index name (e.g. "filebeat-xxxxxx.some-date" or whatever) and then view the log events to see the format of your docker metadata fields.
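One quick way to inspect those fields (a sketch on my part, not something from your config) is to temporarily switch your stdout codec from dots to rubydebug, so each event is printed with its full field structure:

output {
  stdout {
    # rubydebug prints every event as a pretty-printed hash,
    # letting you see the exact docker metadata field paths
    codec => rubydebug
  }
}

Once you know the exact path, you can plug it into your index name and switch the codec back to dots.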

I don't have the same setup as you, but for reference, I'm on AWS ECS, so the format of my docker fields is:

"docker": {
  "container": {
    "name": "",
    "labels": {
      "com": {
        "amazonaws": {
          "ecs": {
            "cluster": "",
            "container-name": "",
            "task-definition-family": "",
            "task-arn": "",
            "task-definition-version": ""
          }
        }
      }
    },
    "image": "",
    "id": ""
  }
}

After seeing the format and fields available, I was able to add a custom "application_name" field using the above. This field is generated in my input plugin, which is redis in my case, but all input plugins have the add_field option (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-add_field):

input {
  redis {
    host => "***"
    data_type => "list"
    key       => "***"
    codec     => json
    add_field => {
      "application_name" => "%{[docker][container][labels][com][amazonaws][ecs][task-definition-family]}"
    }
  }
}
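In your case the input is beats rather than redis; an untested sketch of the same idea, assuming the swarm label path from your question resolves correctly in your events, would be:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
    # sprintf reference to the docker swarm service name label
    # (path taken from the question; verify it against rubydebug output)
    add_field => {
      "application_name" => "%{[docker][container][labels][com][docker][swarm][service][name]}"
    }
  }
}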

After getting this new custom field, I was able to run specific filters (grok, json, kv, etc.) for different "application_name" values, as they had different log formats. But the important part for you is that you can use it in your Elasticsearch output for index names:

output {
  elasticsearch {
      user => ***
      password => ***
      hosts => [ "***" ]
      index => "logstash-%{application_name}-%{+YYYY.MM.dd}"
  }
}

Upvotes: 2
