Reputation: 1
I’m currently using Fluent Bit 3.1.8-amd64 to forward logs from Azure Kubernetes Service (AKS) to Elasticsearch, and I’m having trouble getting its multiline parsing configured correctly. Many of the logs generated in the AKS environment span multiple lines, and Fluent Bit doesn’t seem to aggregate them into a single event before forwarding to Elasticsearch.
I’ve already tried configuring the multiline.parser option, but I’m still seeing log events split across multiple entries in Elasticsearch, which is impacting our ability to search and analyze logs effectively.
This is my config file; thank you in advance:
config:
  service: |
    [SERVICE]
        Flush             1
        Log_Level         info
        Parsers_File      custom_parsers.conf
        HTTP_Server       On
        HTTP_Listen       0.0.0.0
        HTTP_Port         {{ .Values.metricsPort }}

  inputs: |
    [INPUT]
        Name              tail
        Path              /var/log/containers/*klix-api*.log
        Tag               kube.*
        Read_From_Head    true
        # cri joins the partial lines written by the container runtime
        multiline.parser  cri
        Skip_Long_Lines   On
        Skip_Empty_Lines  On
        Refresh_Interval  10
        Buffer_Chunk_Size 5M
        Buffer_Max_Size   5M
        Mem_Buf_Limit     100m

  filters: |
    [FILTER]
        # re-joins application-level multi-line events inside the "log" field
        Name                  multiline
        Match                 *
        Multiline.key_content log
        Multiline.parser      multiline-regex

    #[FILTER]
    #    Name         nest
    #    Match        kube.*
    #    Operation    lift
    #    Nested_under kubernetes
    #    Add_prefix   kubernetes_
    #
    #[FILTER]
    #    Name         nest
    #    Match        kube.*
    #    Operation    lift
    #    Nested_under kubernetes_labels
    #    Add_prefix   kubernetes_labels_

  outputs: |
    [OUTPUT]
        Name  stdout
        Match *

  customParsers: |
    [PARSER]
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

    [MULTILINE_PARSER]
        name          multiline-regex
        type          regex
        flush_timeout 1000
        # a new event starts with a "YYYY-MM-DD HH:MM:SS.mmm" timestamp;
        # anything else continues the previous event
        rule "start_state" "^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})(.*)$"     "cont"
        rule "cont"        "^(?!(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}))(.*)$" "cont"
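In case it helps, this is roughly the minimal standalone config I would expect to exercise the same MULTILINE_PARSER outside the Helm chart (the /tmp/multiline-test.log path is just a placeholder, and it assumes a custom_parsers.conf next to it containing the same [MULTILINE_PARSER] block as above). Running fluent-bit -c against it and appending a few timestamped lines followed by continuation lines to the file should show whether the parser joins them:

    [SERVICE]
        Flush        1
        Log_Level    info
        Parsers_File custom_parsers.conf

    [INPUT]
        # tail stores each raw line under the default "log" key,
        # which is the key the multiline filter below operates on
        Name           tail
        Path           /tmp/multiline-test.log
        Read_From_Head true

    [FILTER]
        Name                  multiline
        Match                 *
        Multiline.key_content log
        Multiline.parser      multiline-regex

    [OUTPUT]
        # json_lines makes it easy to see whether events were joined
        Name   stdout
        Match  *
        Format json_lines

If the events come out joined in this minimal setup but still show up split in Elasticsearch, the parser rules themselves are presumably fine and the problem is somewhere else in the pipeline.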
Upvotes: 0
Views: 53