Reputation: 11
I have a log file as below which I need to parse using the grok filter. Please guide me on what the filter should look like.
log
id:twsoper AIX230
JOB:load_data /jobs/system/load_data.bat 2017-05-14
trying to connect to database
connected to database Target_DB
Expected output
ID: twsoper
server : AIX230
Date : 2017-05-14
database : Target_DB
Upvotes: 0
Views: 637
Reputation: 581
When working with grok you need to know regex, since grok patterns are built on top of regular expressions.
Also, you should try a pattern out before you put it into Logstash; online grok pattern testers (for example, the Grok Debugger) are useful for that.
Now on to your example. Assuming each of these is an individual line in the file, you will end up with one Elasticsearch document per line, unless you use the multiline support in Filebeat or Logstash to merge multiple lines into one single message.
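To make the merge concrete, here is a rough Python sketch of what a multiline setup does: any line that does not start a new record gets appended to the previous one. The record-start regex ^id: is an assumption based on your sample log, not something Logstash defines.

    import re

    # A line starting with "id:" begins a new record (assumption from the sample log)
    record_start = re.compile(r"^id:")

    def merge_multiline(lines):
        """Group continuation lines with the record they belong to."""
        records = []
        for line in lines:
            if record_start.match(line) or not records:
                records.append(line)          # start a new record
            else:
                records[-1] += "\n" + line    # append to the previous record
        return records

    log = [
        "id:twsoper AIX230",
        "JOB:load_data /jobs/system/load_data.bat 2017-05-14",
        "trying to connect to database",
        "connected to database Target_DB",
    ]
    merged = merge_multiline(log)
    print(len(merged))  # 1 -- all four lines become a single message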
filter {
  grok {
    # one pattern per line type; by default grok stops at the first
    # pattern that matches (break_on_match => true), so do not put a
    # catch-all like %{GREEDYDATA} first -- it would shadow the rest
    match => {
      "message" => [
        # id line: get ID and server
        "id:%{WORD:ID}\s+%{WORD:server}",
        # JOB line: get the date; grok's %{DATE} expects day/month first,
        # so use an explicit pattern for a YYYY-MM-DD date
        "JOB.+\s(?<Date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})",
        # database line: get the database name
        "connected to database %{WORD:database}"
      ]
    }
    # lines matching none of the patterns get a _grokparsefailure tag
  }
}
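Grok patterns compile down to regular expressions, so you can sanity-check them outside Logstash. A minimal Python sketch, where the expansions of %{WORD} and the date pattern are simplified stand-ins for the real grok definitions:

    import re

    # Simplified stand-ins for the grok patterns used above
    WORD = r"\w+"
    ISO_DATE = r"\d{4}-\d{2}-\d{2}"  # explicit YYYY-MM-DD pattern

    patterns = {
        "id":       re.compile(rf"id:(?P<ID>{WORD})\s+(?P<server>{WORD})"),
        "job":      re.compile(rf"JOB.+\s(?P<Date>{ISO_DATE})"),
        "database": re.compile(rf"connected to database (?P<database>{WORD})"),
    }

    lines = [
        "id:twsoper AIX230",
        "JOB:load_data /jobs/system/load_data.bat 2017-05-14",
        "connected to database Target_DB",
    ]

    fields = {}
    for line in lines:
        for pat in patterns.values():
            m = pat.search(line)
            if m:
                fields.update(m.groupdict())
                break  # like grok's default break_on_match => true

    print(fields)
    # {'ID': 'twsoper', 'server': 'AIX230', 'Date': '2017-05-14', 'database': 'Target_DB'}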
If you do not want to use multiline, you will need conditionals to select the right pattern for each kind of line, like so:
filter {
  # if the line starts with id
  if [message] =~ /^id/ {
    grok {
      # get ID and server
      match => ["message", "id:%{WORD:ID}\s+%{WORD:server}"]
    }
  }
  # if the line starts with JOB
  if [message] =~ /^JOB/ {
    grok {
      # get the date (explicit pattern for YYYY-MM-DD)
      match => ["message", "JOB.+\s(?<Date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})"]
    }
  }
  # ... and so on for the remaining line types
}
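The same dispatch logic, sketched in Python for clarity: a cheap prefix check (mirroring the =~ /^id/ conditionals) chooses which regex to run, and unmatched lines simply produce no fields.

    import re

    # Per-prefix patterns, mirroring the conditional grok blocks above
    handlers = [
        ("id",  re.compile(r"id:(?P<ID>\w+)\s+(?P<server>\w+)")),
        ("JOB", re.compile(r"JOB.+\s(?P<Date>\d{4}-\d{2}-\d{2})")),
    ]

    def parse(message):
        for prefix, pat in handlers:
            if message.startswith(prefix):  # cheap guard before the full regex
                m = pat.search(message)
                return m.groupdict() if m else {}
        return {}  # unmatched lines yield no fields

    print(parse("id:twsoper AIX230"))              # {'ID': 'twsoper', 'server': 'AIX230'}
    print(parse("trying to connect to database"))  # {}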
Upvotes: 1