Reputation: 216
We are using Fluent Bit to process our Docker container logs. I can use the Tail input to specify the container log path, but there are multiple different containers under that path: /var/lib/docker/containers/*/*.log. The first '*' matches many different container IDs.
How can I get these IDs, and how can I map them to the Docker container names? I want to use the container name to apply different filters to different containers' logs.
Upvotes: 2
Views: 4703
Reputation: 216
You can use a Lua script to achieve this. The container_name can be retrieved from the container's config file, which sits in the same folder as the log file.
A sample configuration would be:
[INPUT]
    Name              tail
    Path              /var/lib/docker/containers/*/*.log
    Path_Key          filepath
    Parser            json
    Skip_Empty_Lines  true
    Tag               container_logs
    Docker_Mode       true
    Read_from_Head    true

# Filter that uses a Lua script to extract container_id from the file path and add it as a new field to the log
[FILTER]
    Name              lua
    Match             container_logs
    script            read_container_id_and_name.lua
    call              get_container_id

# Filter that uses a Lua script to read the container's config file, extract container_name and add it as a new field to the log
[FILTER]
    Name              lua
    Match             container_logs
    script            read_container_id_and_name.lua
    call              get_container_name

# Now you can classify your container logs by container_name using rewrite_tag.
# Filter that changes the tag based on the log's container_name
[FILTER]
    Name              rewrite_tag
    Match             container_logs
    Rule              $container_name ^container_name_a$ a_logs false
    Rule              $container_name ^container_name_b$ b_logs false
    Rule              $container_name ^container_name_c$ c_logs false
    Emitter_Name      re_emitted

# Now you can apply different filters to the different tags, e.g. for the a_logs tag:
[FILTER]
    Name              parser
    Match             a_logs
    Key_Name          log
    Parser            a_logs_parser
    # Keep all the other fields in the record besides log
    Reserve_Data      On
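Note that a_logs_parser above is only a placeholder name; it has to be defined in the parsers file that your [SERVICE] section loads via Parsers_File. As a minimal sketch, assuming container a's log field looks like "ERROR something went wrong" (the regex is just an illustration to adapt to your real log format):

[PARSER]
    Name    a_logs_parser
    Format  regex
    Regex   ^(?<level>[A-Z]+)\s+(?<message>.*)$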
The Lua script (read_container_id_and_name.lua) can look like this:
-- read container_id from the filepath field and add it as a new field
function get_container_id(tag, timestamp, record)
    local path = record["filepath"]
    -- e.g. "./var/lib/docker/containers/a3118c5d7ff06b70100f0aee279b4811804453971bebad127a689e5cc5c8d7d8/a3118c5d7ff06b70100f0aee279b4811804453971bebad127a689e5cc5c8d7d8-json.log"
    local container = {}
    for s in string.gmatch(path, "([^/]*)/") do
        table.insert(container, s)
    end
    -- the sixth path segment is the container id
    record["container_id"] = container[6]
    return 2, timestamp, record
end
-- extract container_name from the container's config file by regex and add it as a new field
-- this is useful for applying different filters to different container logs
function get_container_name(tag, timestamp, record)
    local id = record["container_id"]
    local file = "./var/lib/docker/containers/" .. id .. "/config.v2.json"
    -- if the config file is missing, keep the record unchanged
    if not file_exists(file) then return 0, timestamp, record end
    local lines = ""
    for line in io.lines(file) do
        lines = lines .. line
    end
    local pattern = "\"LogPath\":\"[^\"]*\",\"Name\":\"[/]?([^\"]+)\""
    record["container_name"] = string.match(lines, pattern)
    return 2, timestamp, record
end
-- check whether a file exists on the file system
function file_exists(file)
    local f = io.open(file, "rb")
    if f then f:close() end
    return f ~= nil
end
Note that this approach applies file I/O and a regex match to every log entry Fluent Bit processes, which may have a performance impact. You might prefer to build the container-id-to-name mapping before Fluent Bit starts and pass it to Fluent Bit as an environment variable instead.
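A rough sketch of that variant, with my own assumptions: the mapping is exported as a single variable CONTAINER_NAME_MAP in the form id1=name1,id2=name2 (for example produced before startup with something like docker ps --no-trunc --format '{{.ID}}={{.Names}}'), and os.getenv is available to the embedded Lua runtime:

-- Built once when the script is loaded: parse a pre-computed mapping such as
-- CONTAINER_NAME_MAP="<id1>=<name1>,<id2>=<name2>" into a lookup table.
local name_map = {}
for id, name in string.gmatch(os.getenv("CONTAINER_NAME_MAP") or "", "([^=,]+)=([^,]+)") do
    name_map[id] = name
end

-- add container_name from the pre-built map, with no per-record file IO
function get_container_name_from_env(tag, timestamp, record)
    local name = name_map[record["container_id"]]
    if name == nil then
        -- unknown container id: keep the record unchanged
        return 0, timestamp, record
    end
    record["container_name"] = name
    return 2, timestamp, record
end

You would then point the call option of the lua filter at get_container_name_from_env instead of get_container_name.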
Upvotes: 2