user70192

Reputation: 14204

Using JSON with LogStash

I'm going out of my mind here. I have an app that writes logs to a file. Each log entry is a JSON object. An example of my .json file looks like the following:

{"Property 1":"value A","Property 2":"value B"}
{"Property 1":"value x","Property 2":"value y"}

I'm trying desperately to get the log entries into LogStash. In an attempt to do this, I've created the following LogStash configuration file:

input {
  file {
    type => "json"
    path => "/logs/mylogs.log"
    codec => "json"
  }
}
output {
  file {
    path => "/logs/out.log"
  }
}

Right now, I'm manually adding records to mylogs.log to try and get it working. However, they appear oddly in the output. When I open out.log, I see something like the following:

{"message":"\"Property 1\":\"value A\", \"Property 2\":\"value B\"}","@version":"1","@timestamp":"2014-04-08T15:33:07.519Z","type":"json","host":"ip-[myAddress]","path":"/logs/mylogs.log"}

Because of this, when I send the message to ElasticSearch, I don't get the fields; I get a jumbled mess instead. I need my properties to still be properties. I do not want them crammed into the message portion of the output. I have a hunch this has something to do with codecs, but I'm not sure whether I should change the codec on the LogStash input configuration or on the output configuration.
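
Ideally, the document indexed in ElasticSearch would keep the properties as separate fields, something like:

{"@timestamp":"2014-04-08T15:33:07.519Z","type":"json","host":"ip-[myAddress]","path":"/logs/mylogs.log","Property 1":"value A","Property 2":"value B"}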

Upvotes: 40

Views: 107480

Answers (3)

vzamanillo

Reputation: 10374

Try removing the json codec and adding a json filter:

input {
  file {
    type => "json"
    path => "/logs/mylogs.log"
  }
}
filter{
    json{
        source => "message"
    }
}
output {
  file {
    path => "/logs/out.log"
  }
}

You do not need the json codec because you do not want to decode the source JSON on input, but rather to filter the input so that the JSON data in the @message field is parsed into fields.
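
With that filter in place, each line of mylogs.log should be decoded into top-level fields. Roughly, the event written to out.log for the first example line would look something like this (the metadata values will of course differ):

{"message":"{\"Property 1\":\"value A\",\"Property 2\":\"value B\"}","@version":"1","@timestamp":"2014-04-08T15:33:07.519Z","type":"json","host":"ip-[myAddress]","path":"/logs/mylogs.log","Property 1":"value A","Property 2":"value B"}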

Upvotes: 49

Newt

Reputation: 887

Try with this one:

filter {
  json {
        source => "message"
        target => "jsoncontent" # useful when the JSON has a nested, multi-layer structure
  }
}
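
With target set, the decoded fields are nested under jsoncontent instead of being added at the root of the event, which avoids clashes with existing fields. For the first example line from the question, the event would contain something like this (metadata fields omitted):

{"message":"{\"Property 1\":\"value A\",\"Property 2\":\"value B\"}","jsoncontent":{"Property 1":"value A","Property 2":"value B"}}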

Upvotes: 3

Tinkaal Gogoi

Reputation: 4888

By default, the tcp input puts everything into the message field if the json codec is not specified.

A _jsonparsefailure on the message field, which can still occur after specifying the json codec, can be worked around as follows:

input {
  tcp {
    port => 9563
  }
}
filter{
  json{
    source => "message"
    target => "myroot"
  }
  json{
    source => "myroot"
  }
}
output {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
}

The first filter parses the message field into a proper JSON string stored in the myroot field, and the second filter then parses myroot to yield the JSON fields.
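
This two-step parse is mainly useful when the line arriving over TCP is double-encoded, i.e. a JSON string that itself contains JSON. As a rough sketch, assuming a client sends the following single line to port 9563:

"{\"Property 1\":\"value A\",\"Property 2\":\"value B\"}"

the first filter decodes the outer layer and leaves the inner JSON string in myroot, and the second filter parses that string so that Property 1 and Property 2 become top-level fields in the Elasticsearch document.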

We can then remove the redundant message field as follows:

filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}

Upvotes: 14
