Reputation: 33
I am trying to count the occurrences of specific IP addresses found in the nginx access.log. The access.log format is as follows:
xxx.xxx.xxx.xxx - - [21/Dec/2021:12:59:30 +0100] "GET /<some/path/on/webserver>" 200 1028 "<referrer>" "Mozilla/5.0 (Linux; Android 11; SM-A202F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.104 Mobile Safari/537.36" "-"
The awk command I'm currently using is
awk '$7 ~ /^\/rest\/default\/V1\/products-render-info?/ {print $1, $5}' /var/log/nginx/access.log.1 | sort -u > test.txt
And the result saved in text file is, with only unique IP addresses,
127.0.0.1
/rest/default/V1/products-render-info?searchCriteria.... <snip>
However, I would also like to know the number of occurrences of each IP address, something like
127.0.0.1
<number of times this IP address has been found in the access.log>
/rest/default/V1/products-render-info?searchCriteria.... <snip>
Any help is highly appreciated!
Thanks
Upvotes: 0
Views: 1362
Reputation: 17551
grep "^[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+" test.txt | awk -F" " '{print $1}' | sort | uniq -c
??? Pardon me?
Well, let's start with the regular expression:
^ : beginning of line
[0-9]\+ : a list of digits (at least one)
\. : a dot
So, your line must match from the beginning of the line, then a run of at least one digit, a dot, ... and this four times (but without the dot after the fourth run).
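As a quick sketch, you can see the regex in action on a couple of made-up sample lines (the path line is a stand-in for the URL lines in your test.txt; note that \+ relies on GNU grep's BRE extension):

```shell
# The regex keeps only the lines that begin with an IPv4-looking address
printf '127.0.0.1 - - "GET /"\n/rest/default/V1/products\n' \
  | grep "^[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+"
# prints: 127.0.0.1 - - "GET /"
```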
Like this you have found your IP addresses.
Then you parse that, using a space as the field separator (awk -F" "), and you print the first column ({print $1}).
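That extraction step, sketched on one made-up log line:

```shell
# Split on spaces and keep only the first field (the IP address)
echo '127.0.0.1 - - [21/Dec/2021:12:59:30 +0100] "GET /"' \
  | awk -F" " '{print $1}'
# prints: 127.0.0.1
```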
Now you have shown a list of IP addresses, which you would want to count.
Therefore you first sort them (sort) and, once that's done, count the unique results (uniq -c).
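For what it's worth, awk alone can also do the counting with an associative array, skipping the grep/sort/uniq steps entirely. A minimal sketch, with made-up sample lines standing in for the log file:

```shell
# Count occurrences per IP: lines starting with an IPv4-looking address
# increment a counter keyed on the first field; END prints the totals
printf '1.2.3.4 - "GET /a"\n1.2.3.4 - "GET /b"\n5.6.7.8 - "GET /c"\n' \
  | awk '/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/ {count[$1]++}
         END {for (ip in count) print count[ip], ip}' \
  | sort -rn
# prints: 2 1.2.3.4
#         1 5.6.7.8
```

The trailing sort -rn just orders the result by count, busiest IP first.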
Easy, isn't it? :-)
Upvotes: 4