Reputation: 1044
I have a log file with Apache logs that I want to show in Kibana. The logs start with an IP address. I have debugged my pattern and it passes. I'm trying to add fields in the Beats input configuration file, but they are not shown in Kibana, even after refreshing the field list. Here is the configuration file:
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{HOST:log_host}%{GREEDYDATA:remaining}" }
      add_field => { "testip" => "%{log_host}" }
      add_field => { "data_left" => "%{remaining}" }
    }
  }
  ...
Just to add that I have restarted all the services (Logstash, Elasticsearch, Kibana) after applying the new configuration.
Upvotes: 2
Views: 2178
Reputation: 22342
The issue could be that your grok pattern is too rigid: HOST should be IPORHOST, judging by your testip field's name. Assuming that the data is actually coming in with the type defined as apache, it should be:
filter {
  if [type] == "apache" {
    grok {
      match => {
        "message" => "%{IPORHOST:log_host}%{GREEDYDATA:remaining}"
      }
      add_field => {
        "testip" => "%{log_host}"
        "data_left" => "%{remaining}"
      }
    }
  }
}
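To see why IPORHOST is the right choice here, this standalone Python sketch approximates what that grok expression does with simplified stand-ins for the real patterns (the actual definitions in Logstash's grok-patterns file are more thorough; the regexes below are illustrative assumptions):

```python
import re

# Simplified stand-ins for grok's IPV4/HOSTNAME patterns (assumptions;
# the real grok definitions are more elaborate).
IPV4 = r"(?:\d{1,3}\.){3}\d{1,3}"
HOSTNAME = r"[A-Za-z0-9][A-Za-z0-9_-]*(?:\.[A-Za-z0-9][A-Za-z0-9_-]*)*"

# Rough equivalent of "%{IPORHOST:log_host}%{GREEDYDATA:remaining}"
PATTERN = re.compile(rf"(?P<log_host>{IPV4}|{HOSTNAME})(?P<remaining>.*)")

line = '192.168.0.1 - - [12/Oct/2016:13:55:36 +0200] "GET / HTTP/1.1" 200 1024'
m = PATTERN.match(line)
print(m.group("log_host"))   # 192.168.0.1
print(m.group("remaining"))  # everything after the IP
```

The point is simply that the leading token is matched as either an IP or a hostname and captured into log_host, with the rest of the line falling into remaining.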
Having said that, your usage of add_field is completely unnecessary. The grok pattern itself already creates two fields, log_host and remaining, so there's no need to define extra fields called testip and data_left.
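Stripped of the redundant add_field options, the filter would look something like this (a minimal sketch of the same config):

```
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{IPORHOST:log_host}%{GREEDYDATA:remaining}" }
    }
  }
}
```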
Perhaps even more usefully, you don't need to craft your own Apache web log grok pattern. The COMBINEDAPACHELOG pattern already exists and gives you all of the standard fields automatically:
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    # Set @timestamp to the log's time and drop the unneeded timestamp field
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      remove_field => "timestamp"
    }
  }
}
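The "dd/MMM/yyyy:HH:mm:ss Z" string in the date filter is a Joda-Time-style pattern matching the standard Apache timestamp. As a rough illustration of what it parses (not the Logstash implementation itself), the equivalent parse in Python's strptime would be:

```python
from datetime import datetime

# Apache access-log timestamp, e.g. from a COMBINEDAPACHELOG "timestamp" field.
stamp = "12/Oct/2016:13:55:36 +0200"

# Joda "dd/MMM/yyyy:HH:mm:ss Z" corresponds roughly to this strptime format.
parsed = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z")
print(parsed.isoformat())  # 2016-10-12T13:55:36+02:00
```

After the date filter runs, that parsed value becomes the event's @timestamp, and remove_field drops the now-redundant string field.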
You can see this in a more complete example in the Logstash documentation here.
Upvotes: 1