Reputation: 11185
I have the following JSON file, where each line is a different JSON object:
{"s":"some address","c":"some city"}
{"s":"some address1","c":"some city1"}
{"s":"some address2","c":"some city2"}
I have the following job:
input {
  file {
    start_position => "beginning"
    path => "/sources/someFile.txt"
  }
}
filter {
  json {
    source => "a"
    target => "addresses[0].street"
  }
  mutate {
    remove_field => ["message", "@timestamp", "host", "path", "@version"]
  }
}
output {
  elasticsearch {
    hosts => "http://elasticsearch:9200"
    index => "store"
  }
}
I want to write to the index as follows (each address goes to a different doc as the first element in an array):
{
  "addresses": [{"street" : "some address", "city" : "some city"}]
}
{
  "addresses": [{"street" : "some address1", "city" : "some city1"}]
}
{
  "addresses": [{"street" : "some address2", "city" : "some city2"}]
}
The attached job is not working: there is no error, but nothing gets indexed.
Thanks
Upvotes: 0
Views: 745
Reputation: 4072
You cannot use that field reference in the target option of the json filter. In any version of Logstash from the last couple of years I would expect that to result in a _jsonparsefailure tag and the error:
Exception caught in json filter {:exception=>"Invalid FieldReference: `addresses[0].street`"
If you change the reference to be [addresses][0] then it will run without error, but the reference will be interpreted as the "0" entry in the "addresses" hash, not the first entry in the addresses array.
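To see the difference, here is a minimal sketch of that behaviour (same sample line, with the source changed to the message field):

json { source => "message" target => "[addresses][0]" }

This runs without error, but the event ends up with a hash keyed by the string "0", something like

"addresses" => {
    "0" => {
        "s" => "some address",
        "c" => "some city"
    }
},

which is not the array of objects you want.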
Your incoming JSON has the wrong field names, so you will have to rename the fields. I think it is easiest to do it in a ruby filter:
json { source => "message" target => "[@metadata][json]" }
ruby {
  code => '
    json = event.get("[@metadata][json]")
    event.set("addresses", [ { "street" => json["s"], "city" => json["c"] } ])
  '
}
which produces
"addresses" => [
[0] {
"city" => "some city",
"street" => "some address"
}
],
The original JSON is placed inside the [@metadata] field so that it is available but not indexed by the output.
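For completeness, combining that filter with the input and output sections from your question, the whole pipeline would look something like this (an untested sketch; the path, hosts, and index are copied from your configuration, and your original remove_field list is kept so message and the file metadata fields are dropped):

input {
  file {
    start_position => "beginning"
    path => "/sources/someFile.txt"
  }
}
filter {
  json { source => "message" target => "[@metadata][json]" }
  ruby {
    code => '
      # rename s/c to street/city and wrap the hash in a single-element array
      json = event.get("[@metadata][json]")
      event.set("addresses", [ { "street" => json["s"], "city" => json["c"] } ])
    '
  }
  mutate {
    remove_field => ["message", "@timestamp", "host", "path", "@version"]
  }
}
output {
  elasticsearch {
    hosts => "http://elasticsearch:9200"
    index => "store"
  }
}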
Upvotes: 1