Reputation: 105
I have a large collection of log files where each file contains records of the form ...
2015-06-07 23:59:53 [uid:123] {success,1}
For each file, I want to count how many unique UIDs are present.
So in this file snippet we see the UIDs 123 and 124 ...
2015-06-07 23:59:53 [uid:123] {success,1}
2015-06-07 23:59:53 [uid:123] {success,1}
2015-06-07 23:59:53 [uid:123] {success,1}
2015-06-07 23:59:53 [uid:124] {success,1}
so the result of my count for this file would be 2.
How can I get these counts using bash and/or awk?
I tried
cat 20150607.log | awk '{print $3}' | sort | uniq | wc -l
This works well, but I have so many files that I don't want to run the command on each one individually. Is there a simpler way of getting this count across multiple files?
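The obvious workaround would be a shell loop, a rough sketch along the lines of my command above:
for f in *.log; do
    printf '%s ' "$f"
    awk '{print $3}' "$f" | sort -u | wc -l
done
but that runs a separate pipeline per file, and I was hoping for a single command that handles all the files in one go.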
Upvotes: 2
Views: 487
Reputation: 203324
Using GNU awk for ENDFILE and length(array):
awk '{unq[$3]} ENDFILE{print FILENAME, length(unq); delete unq}' *.log
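Merely referencing unq[$3] is enough to create that array element, ENDFILE runs after each input file (so length(unq) is that file's unique count), and delete unq clears the array before the next file starts. For example, run against the snippet from your question saved as 20150607.log, it should print:
awk '{unq[$3]} ENDFILE{print FILENAME, length(unq); delete unq}' 20150607.log
20150607.log 2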
With other awks:
awk '
!seen[FILENAME,$3]++ { unq[FILENAME]++ }
END { for (i=1;i<ARGC;i++) print ARGV[i], unq[ARGV[i]]+0 }
' *.log
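Here seen[] is keyed by FILENAME and $3 together, so a UID that appears in several files is counted once per file; the END loop walks ARGV so the output order matches the command line, and the +0 forces a numeric 0 for any file that contained no records.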
Upvotes: 8