Reputation: 261
I want to print out the last update of a log file and nothing above it (old logs). Every 5 minutes the log is updated/appended to, and there is no option to overwrite instead of append. The number of lines per update doesn't vary now, but I don't want to have to change the script if and when new fields are added. Each appended block starts with "Date: ...."
This is my solution so far. I'm finding the line number of the last occurrence of "Date" and then trying to send that to "awk 'NR>line_num_here' filename" -
line=$(grep -n Date stats.log | tail -1 | cut --delimiter=':' --fields=1) | awk "NR>$line" file.log
However, I cannot update $line! It always holds the very first value from the very first time I ran the script. Is there a way to correctly update $line? Or are there any other ways to do this? Maybe a way to directly pipe into awk instead of making a variable?
Upvotes: 1
Views: 536
Reputation: 157992
The problem in your solution is that you need to replace the pipe in front of awk with a ;. These are two separate commands which would normally appear on two separate lines:
line=$(...)
awk "NR>$line" file
However, you can separate them with a ; if they should appear on the same line:
line=$(...); awk "NR>$line" file
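Filled in with the grep pipeline from the question, the corrected one-liner would look like this (a sketch using a made-up two-update sample log; the question mixes the names stats.log and file.log, so file.log is used throughout here):

```shell
# Build a tiny sample log with two appended updates (contents are illustrative).
printf '%s\n' 'Date: a' 'x' 'Date: b' 'y' > file.log

# Find the line number of the last "Date" line, then print everything after it.
line=$(grep -n Date file.log | tail -1 | cut --delimiter=':' --fields=1); awk "NR>$line" file.log
# → y
```

Note that cut --delimiter/--fields are GNU long options, as used in the question; on other systems use cut -d: -f1.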
But anyway, you can significantly simplify the command. Simply use awk twice, like this:
awk -v ln="$(awk '/Date/{l=NR}END{print l}' a.log)" 'NR>ln' a.log
I'm using
awk '/Date/{l=NR}END{print l}' a.log
to obtain the line number of the last occurrence of Date. This value gets passed via -v ln=... to the outer awk command.
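As a quick check, run against a small sample log (contents made up for illustration), the double-awk command prints only the lines after the last "Date" line:

```shell
# Sample log with two appended updates.
printf '%s\n' 'Date: 2024-01-01' 'hits: 10' 'Date: 2024-01-02' 'hits: 20' > a.log

# Inner awk finds the line number of the last "Date" line (here, 3);
# outer awk prints every line after it.
awk -v ln="$(awk '/Date/{l=NR}END{print l}' a.log)" 'NR>ln' a.log
# → hits: 20
```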
Upvotes: 1
Reputation: 74615
Here's a way you could do it, in one invocation of awk and only reading the file once:
awk '/Date/ { n = 1 } { a[n++] = $0 } END { for (i = 1; i < n; ++i) print a[i] }' file
This writes each line to an array a, resetting the counter n back to 1 every time the pattern /Date/ matches. It then loops through the array once the file has been read, printing all the most recently saved lines.
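Unlike the double-awk approach, this version also keeps the "Date" line itself, since the array reset happens on the matching line before it is stored. A quick demonstration on a made-up sample log:

```shell
# Sample log with two appended updates (field names are illustrative).
printf '%s\n' 'Date: 2024-01-01' 'hits: 10' 'Date: 2024-01-02' 'hits: 20' > file

# The counter n resets to 1 on each "Date" line, so only the last
# update (including its Date line) survives in the array at END.
awk '/Date/ { n = 1 } { a[n++] = $0 } END { for (i = 1; i < n; ++i) print a[i] }' file
# → Date: 2024-01-02
#   hits: 20
```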
Upvotes: 0