Reputation: 487
I'm working with Filebeat 7.9.3 as a DaemonSet on Kubernetes. I'm not able to parse the Docker container logs of a Spring Boot app that writes JSON logs to stdout. Every line of the Spring Boot app's log is written like this:
{ "@timestamp": "2020-11-16T13:39:57.760Z", "log.level": "INFO", "message": "Checking comment 'se' done = true", "service.name": "conduit-be-moderator", "event.dataset": "conduit-be-moderator.log", "process.thread.name": "http-nio-8081-exec-2", "log.logger": "it.koopa.app.ModeratorController", "transaction.id": "1ed5c62964ff0cc2", "trace.id": "20b4b28a3817c9494a91de8720522972"}
But the corresponding Docker log file under /var/log/containers/ wraps each line like this:
{
"log": "{\"@timestamp\":\"2020-11-16T11:27:32.273Z\", \"log.level\": \"INFO\", \"message\":\"Checking comment 'a'\", \"service.name\":\"conduit-be-moderator\",\"event.dataset\":\"conduit-be-moderator.log\",\"process.thread.name\":\"http-nio-8081-exec-4\",\"log.logger\":\"it.koopa.app.ModeratorController\",\"transaction.id\":\"9d3ad972dba65117\",\"trace.id\":\"8373edba92808d5e838e07c7f34af6c7\"}\n",
"stream": "stdout",
"time": "2020-11-16T11:27:32.274816903Z"
}
I always get this error in the Filebeat logs:
Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {}
This is my Filebeat config, which tries to parse the JSON log message from the Docker logs, using decode_json_fields to capture the Elasticsearch standard fields (the app logs with co.elastic.logging.logback.EcsEncoder):
filebeat.yml: |-
  filebeat.inputs:
  - type: container
    #json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: log
    paths:
      - /var/log/containers/*.log
    include_lines: "conduit-be-moderator"
    processors:
      - decode_json_fields:
          fields: ["log"]
          overwrite_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
How can I do this?
Upvotes: 1
Views: 3107
Reputation: 409
The container input already does the JSON decode. You then get a message field with your nested JSON that you might want to decode further. But you are telling the container input to decode the log field a second time (json.message_key: log), and then you try to decode the log field a third time in the decode_json_fields processor.
https://github.com/elastic/beats/issues/20053#issuecomment-1899155624
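A minimal sketch of that idea (untested; it assumes, as in the question, that the app's ECS JSON ends up in the message field once the container input has stripped the Docker json-file wrapper):

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - decode_json_fields:
        # "message" holds the app's ECS JSON after the input's own decode
        fields: ["message"]
        target: ""           # merge decoded keys into the event root
        overwrite_keys: true
        add_error_key: true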
Upvotes: 1
Reputation: 19
As processors are applied before the JSON parser of the input, you will need to first configure the decode_json_fields processor, which will decode the JSON in your log field. You will then be able to apply the input's json options to the message field. Something like:
filebeat.yml: |-
  filebeat.inputs:
  - type: container
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: message
    paths:
      - /var/log/containers/*.log
    include_lines: "conduit-be-moderator"
    processors:
      - decode_json_fields:
          fields: ['log']
          expand_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
This configuration assumes that all your logs use JSON format. Otherwise you will probably need to add an exclude or include regex pattern.
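For example, something like this (a sketch that assumes the JSON lines always start with {):

include_lines: ['^{']

or the inverse with exclude_lines to drop the non-JSON lines.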
Upvotes: 1