Reputation: 1591
My abc.log contains entries like the following (snippet):
...
INFO #my-service# #add# id=67986324423 isTrial=true
INFO #my-service# #add# id=43536343643 isTrial=false
INFO #my-service# #add# id=43634636365 isTrial=true
INFO #my-service# #add# id=67986324423 isTrial=true
INFO #my-service# #delete# id=43634636365 isTrial=true
INFO #my-service# #delete# id=56543435355 isTrial=false
...
I want to count the lines that have unique ids, contain the #add# attribute, and have isTrial=true.
For the above snippet, the output should be 2.
Can anyone provide a Linux command that I can run against the above log file?
Upvotes: 0
Views: 7605
Reputation: 85775
Using just awk:
# Count unique lines
$ awk '$3~"add"&&$5~"true"&&!u[$4]++{++c}END{print c}' file
2
# Print unique lines
$ awk '$3~"add"&&$5~"true"&&!u[$4]++' file
INFO #my-service# #add# id=67986324423 isTrial=true
INFO #my-service# #add# id=43634636365 isTrial=true
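The !u[$4]++ part is what de-duplicates; here is a commented sketch of the same logic (it assumes the whitespace-separated format from the question, where $3 is the action, $4 the id field and $5 the isTrial flag):
$ awk '
    # u[$4]++ evaluates to 0 (false) the first time an id is seen,
    # so !u[$4]++ is true exactly once per unique id=... field
    $3 ~ /add/ && $5 ~ /true/ && !u[$4]++ { c++ }
    END { print c+0 }
' file
2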
Or just sort and grep:
$ sort -uk4,4 file | grep "#add#.*true"
INFO #my-service# #add# id=67986324423 isTrial=true
INFO #my-service# #add# id=43634636365 isTrial=true
$ sort -uk4,4 file | grep -c "#add#.*true"
2
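If you would rather filter before de-duplicating (so a #delete# line with the same id can never be the one that sort -u keeps), a small variant of the same idea:
$ grep "#add#.*true" file | sort -uk4,4 | wc -l
2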
Upvotes: 4
Reputation: 195039
This one-liner gives you the result 2:
awk -F'#add# id=' '$2~/true/{a[$2]}END{print length(a)}' abc.log
This one-liner gives you the two unique lines:
awk -F'#add# id=' '$2~/true/&&!a[$2]++' abc.log
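A commented expansion of the same idea (a sketch; note that length() on an array is not in POSIX awk, so this relies on GNU awk or another awk that supports it):
awk -F'#add# id=' '
    # on #add# lines, $2 becomes e.g. "67986324423 isTrial=true";
    # on all other lines the separator is absent and $2 stays empty
    $2 ~ /true/ { a[$2] }       # remember each id/flag pair once
    END { print length(a) }     # print the number of distinct keys
' abc.log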
Upvotes: 1
Reputation: 265154
Combine grep, cut, sort and wc:
grep '#add#.*isTrial=true$' /path/to/file | cut -f4 | sort -u | wc -l
Customize the regular expression to your liking (being more or less strict about which lines it matches).
(Use cut -f4 -d' ' if your delimiter is space instead of tab.)
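Against the space-delimited sample log, the full pipeline would look like this, for instance:
# filter the #add#/isTrial=true lines, cut out the id=... field
# (space-delimited), de-duplicate, and count
grep '#add#.*isTrial=true$' abc.log | cut -d' ' -f4 | sort -u | wc -l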
Upvotes: 0
Reputation: 6568
This will count, too:
grep "isTrial=true" abc.log | grep "#add#" | awk '{ print $4 }' | sort | uniq | wc -l
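Equivalently, sort | uniq can be shortened to sort -u:
grep "isTrial=true" abc.log | grep "#add#" | awk '{ print $4 }' | sort -u | wc -l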
Upvotes: -1