Reputation: 4544
I noticed that when I do a row count with the command below multiple times, a cached result seems to be shown. Any ideas why?
grep "xxx" "filename.log" | wc -l
This returns the count the first time it is run. If run again, it still gives the same count, even though the file now has more matching lines. What could be the reason?
PS - I am using Ubuntu 16.04 LTS.
Update - grep -c "xxx" filename.log
is returning the correct count. I am still wondering why the command above doesn't give an updated result.
How can I ensure the buffers are written to the file at regular intervals?
FYI - I am checking this on an nginx access log file, which is continuously being updated with request calls, at an average write speed of about 10 lines/sec.
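(For reference, on the nginx side the flush interval is configurable: by default nginx writes each access-log line immediately, but if a buffer= parameter is set on the access_log directive, the flush= parameter bounds how long lines can sit in the buffer. A minimal sketch, assuming a stock Ubuntu log path and an nginx version of 1.3.10 or later, which supports flush=:

# in /etc/nginx/nginx.conf, inside the http block:
# buffer up to 32k of log lines, but force a write to disk at least every 5 seconds
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
)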
Upvotes: 2
Views: 2252
Reputation: 3836
If your new xxx occurrences are on the same lines as the old ones, that is expected, because grep outputs whole lines by default, so the line count does not change. You can use grep -o to output each individual match on its own line. By the way, grep -c can be used to count matching lines directly (which is faster since it involves less writing); note that -c counts matching lines even when combined with -o, so to count individual matches you still need grep -o ... | wc -l.
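To make the lines-versus-matches distinction concrete, here is a small self-contained sketch (the sample file path and contents are only illustrative):

printf 'xxx xxx\nxxx\nyyy\n' > /tmp/sample.log   # two lines contain xxx; three matches total
grep -c "xxx" /tmp/sample.log                    # prints 2 (matching lines)
grep "xxx" /tmp/sample.log | wc -l               # prints 2 (same thing: whole lines)
grep -o "xxx" /tmp/sample.log | wc -l            # prints 3 (individual matches)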
However, if you don't see new lines in your file after you think they have been written (which can be checked continuously with tail -f, or with less: press F to read new data and Ctrl-C to stop reading), the likely reason is buffering. (Regarding your comment about 24 hours: note that buffers are not flushed simply with the passage of time, but only when they overflow or are explicitly flushed.) You can try calling stdbuf -o0 program ... instead of program ...
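As a hedged illustration of that last point (the writer program name here is hypothetical, and stdbuf only affects programs that use the default C stdio buffering):

stdbuf -o0 ./my_writer >> filename.log   # unbuffered stdout: every write hits the file immediately
stdbuf -oL ./my_writer >> filename.log   # line-buffered: flushed at each newline, usually enough for logs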
Upvotes: 1