Reputation: 11
I'm trying to set up Logstash, Elasticsearch, and Kibana to visualize logs. The logs should be sent over TCP to Logstash, filtered, output to an Elasticsearch index, and then shown in Kibana.
The message I'm sending to Logstash:
msg_to_tcp="id=1324 type=error name=system_name"
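For reference, this is roughly how I send it (a minimal sketch in Python; host, port, and the trailing newline match my config below, since the tcp input splits events on newlines):

import socket

# Minimal sender sketch: the tcp input is line-oriented,
# so each event must end with a newline.
msg_to_tcp = "id=1324 type=error name=system_name"
with socket.create_connection(("localhost", 55555)) as sock:
    sock.sendall((msg_to_tcp + "\n").encode("utf-8"))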
logstash.conf:
input {
  tcp {
    host => "localhost"
    port => 55555
  }
}
filter {
  kv {}
  mutate {
    convert => [ "id", "integer" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => 9200
  }
}
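With that filter I expect kv to split the message into fields and mutate to cast id, so the indexed event should look roughly like this (illustrative only; metadata fields like @timestamp omitted):

{
  "message": "id=1324 type=error name=system_name",
  "id": 1324,
  "type": "error",
  "name": "system_name"
}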
Versions: Logstash 1.4.2, Elasticsearch 1.4.4, and Kibana 4.
Unfortunately, the conversion of id to an integer doesn't work; Kibana shows that it is still a string.
I also tried Kibana's scripted fields, but it just causes an error:
Integer.parseInt(doc["id"].value)
Can someone help me convert "id" to an integer?
Upvotes: 1
Views: 467
Reputation: 431
Delete the existing index from Elasticsearch and create a new one with a proper mapping, i.e., if you want id to be an integer, define the id field's data type as integer when creating the index:
PUT indexname
{
  "mappings": {
    "mappingname": {
      "properties": {
        "id": {
          "type": "integer"
        },
        "name": {
          "type": "string"
        }
      }
    }
  }
}
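You can check that the mapping was applied with the get-mapping API (same request style, assuming the index name above):

GET indexname/_mapping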
Then add a grok filter to Logstash so it extracts the field values, as below:
input {
  tcp {
    host => "localhost"
    port => 55555
  }
}
filter {
  grok {
    match => {
      "message" => "id=%{NUMBER:id}\s*type=%{WORD:type}\s*name=%{WORD:name}"
    }
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => 9200
  }
}
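As an aside, grok can also cast while matching by adding a type suffix to the capture, which would avoid a separate mutate; a sketch of the same filter with an inline cast:

filter {
  grok {
    # :int casts the captured value to an integer during the match
    match => {
      "message" => "id=%{NUMBER:id:int}\s*type=%{WORD:type}\s*name=%{WORD:name}"
    }
  }
}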
Now try the above solution; I hope it will work!
Upvotes: 3
Reputation: 13104
Did you delete the already existing index before making this change? If not: you cannot change the mapping of an existing Elasticsearch index, and that includes the type of each field. Instead, you have to create a new index and populate data into it. If you want to keep the same index name, you need to delete the existing index first.
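For example (same request style as the other answer, with its placeholder index name), the delete-index API removes it:

DELETE indexname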
Upvotes: 0