Reputation: 189
I am trying to look up a key from a record and use it as the Logstash prefix in Fluent Bit, but that's not happening: Logstash_Prefix is not being replaced by the value of Logstash_Prefix_Key, even though the specified key exists in the log enriched by the kubernetes filter.
The expected behaviour: the kubernetes filter enriches the logs read from the input path via the tail input plugin with Kubernetes metadata such as pod name, pod ID, namespace name, etc., and the filtered logs are then pushed to the output via the es output plugin. I set Logstash_Prefix_Key to the key kubernetes.pod_name and Logstash_Prefix to pod_name. Even though I can see the kubernetes.pod_name key in Kibana, the logs are being stored under the prefix pod_name (which means Logstash_Prefix_Key is not found in the log records, so Fluent Bit falls back to Logstash_Prefix).
Code sample
input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     2GB
        Skip_Long_Lines   On
        Refresh_Interval  10
filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc.cluster.local:443
        Merge_Log           Off
        K8S-Logging.Parser  On
output-elasticsearch.conf: |
    [OUTPUT]
        Name                es
        Match               kube.*
        Host                ${FLUENT_ELASTICSEARCH_HOST}
        Port                ${FLUENT_ELASTICSEARCH_PORT}
        HTTP_User           ${FLUENT_ELASTICSEARCH_USER}
        HTTP_Passwd         ${FLUENT_ELASTICSEARCH_PASSWORD}
        Logstash_Format     On
        Logstash_Prefix     pod_name
        Logstash_Prefix_Key kubernetes.pod_name
        Retry_Limit         False
Since I am new to the EFK stack, could someone help me with this?
Upvotes: 3
Views: 8232
Reputation: 740
You can use:
Logstash_Prefix_Key kubernetes['pod_name']
This works on my machine using the docker image fluent/fluent-bit:1.7.
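To sketch how that fits into the original config (assuming Fluent Bit v1.7+, where this bracket/record-accessor style of key lookup is accepted; other options unchanged from the question):

```
[OUTPUT]
    Name                es
    Match               kube.*
    Logstash_Format     On
    Logstash_Prefix     pod_name
    Logstash_Prefix_Key kubernetes['pod_name']
```

With this, the index prefix comes from the nested pod_name value, and pod_name (the literal Logstash_Prefix) is only used as a fallback when the key is missing from a record.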
Upvotes: 1
Reputation: 161
I was trying to do the same thing recently. What Max Lobur said in his answer is true: Fluent Bit did not support this prior to the (not yet released) version 1.7. However, I was still able to achieve it on the current version using the nest filter. See https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch — under Logstash_Prefix_Key it says:
When included: the value in the record that belongs to the key will be looked up and over-write the Logstash_Prefix for index generation. If the key/value is not found in the record then the Logstash_Prefix option will act as a fallback. Nested keys are not supported (if desired, you can use the nest filter plugin to remove nesting)
The last sentence says nested keys are not supported, but you can still use them if you apply the nest filter to lift them up a level.
In your case pod_name is nested under kubernetes; to be able to use it, you have to lift it out of that level (see the nest filter examples in the docs).
Here's how to make it work in your case:
filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc.cluster.local:443
        Merge_Log           Off
        K8S-Logging.Parser  On
    [FILTER]
        Name          nest
        Match         *
        Operation     lift
        Nested_under  kubernetes
        Add_prefix    kubernetes_
output-elasticsearch.conf: |
    [OUTPUT]
        Name                es
        Match               kube.*
        Host                ${FLUENT_ELASTICSEARCH_HOST}
        Port                ${FLUENT_ELASTICSEARCH_PORT}
        HTTP_User           ${FLUENT_ELASTICSEARCH_USER}
        HTTP_Passwd         ${FLUENT_ELASTICSEARCH_PASSWORD}
        Logstash_Format     On
        Logstash_Prefix     pod_name
        Logstash_Prefix_Key kubernetes_pod_name
        Retry_Limit         False
What we are doing here is lifting everything inside the kubernetes object up a level and prefixing it with kubernetes_, so your pod_name becomes kubernetes_pod_name. You then pass kubernetes_pod_name to Logstash_Prefix_Key. The value of kubernetes_pod_name is then used for index generation; Fluent Bit only falls back to Logstash_Prefix if no key/value pair exists for kubernetes_pod_name.
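To illustrate the lift, here is a hypothetical record before and after the nest filter (the field names under kubernetes and their values are made up for the example; the actual record from the kubernetes filter carries more fields):

```
Before the nest filter:
{"log": "...", "kubernetes": {"pod_name": "my-app-7d4b9c", "namespace_name": "default"}}

After Operation lift with Nested_under kubernetes and Add_prefix kubernetes_:
{"log": "...", "kubernetes_pod_name": "my-app-7d4b9c", "kubernetes_namespace_name": "default"}
```

Since kubernetes_pod_name is now a top-level key, the es output can resolve it via Logstash_Prefix_Key.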
Upvotes: 2
Reputation: 6040
UPD: it's now supported! https://github.com/fluent/fluent-bit/issues/421#issuecomment-766912018 Should be in Fluent Bit v1.7 release!
Dynamic Elasticsearch indexes are not supported in Fluent Bit at the moment. Here's a related issue: https://github.com/fluent/fluent-bit/issues/421. You can only specify string (hardcoded) prefixes.
The workaround is to use a fluentd log collector instead, which supports dynamic indexes: https://docs.fluentd.org/output/elasticsearch#index_name-optional. There's a community chart for it: https://github.com/helm/charts/tree/master/stable/fluentd
Upvotes: 3