ado

Reputation: 1471

Perl: A script that groups files into a single file

I have many logs that are stored daily, under the following names in my var/log directory:

log20130601 log20130602 log20130603 ...

Each log has many lines. For example, if I open log20130529 I find:

    2013-05-29T15:55:05 [INFO] access_time:1369810505, item_id:1, start, 
    2013-05-29T15:55:05 [INFO] access_time:1369810505, item_id:2, start, 
    ....

What I want to do is to make a file that groups the last 7 files. For example, if today is 20130611, running the script should produce a temp file where the contents of log20130611, log20130610, log20130609, log20130608, log20130607, log20130606, and log20130605 are combined. So if every file had, say, 4 lines, the new temp file should have 28 lines.

So far, what I know is how to select the last 7 files with "glob":

    my @file_locations = reverse sort glob("/home/adrian/app/var/log/log*");                                                                               
    if ( @file_locations > 7 ) { $#file_locations = 6; }    

But I do not know how to group them into a single file. Any ideas?

Upvotes: 0

Views: 72

Answers (2)

Julian Fondren

Reputation: 5619

If I weren't already using Perl, and weren't adding this to an existing script, I'd just do something like this:

    cat $(ls /home/adrian/app/var/log/log* | tail -n 7) > /home/adrian/app/var/log/combined.log

Otherwise, ikegami's solution is fine. If strace reveals that perl is making too many syscalls for too little I/O, you can drop down to sysread/syswrite with a buffer size of your choosing.
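A minimal sketch of that sysread/syswrite approach; the 64 KiB buffer size and the combined.log output path are my assumptions, not part of the answer:

    # Hypothetical sketch: copy raw blocks instead of lines, so each
    # file takes a handful of syscalls rather than one read per line.
    open(my $fh_out, '>', '/home/adrian/app/var/log/combined.log') or die $!;
    for my $qfn_in (@file_locations) {
        open(my $fh_in, '<', $qfn_in) or die $!;
        my $buf;
        while (sysread($fh_in, $buf, 64 * 1024)) {   # 64 KiB per read; tune to taste
            syswrite($fh_out, $buf) or die $!;
        }
    }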

Upvotes: 0

ikegami

Reputation: 385819

    open(my $fh_out, '>', 'combined') or die $!;   # open the output file first
    for my $qfn_in (@file_locations) {
        open(my $fh_in, '<', $qfn_in) or die $!;
        print($fh_out $_) while <$fh_in>;
    }
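Note that because @file_locations is reverse sorted, the newest log's lines come first in the combined output; drop the reverse from the sort if you want chronological order.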

As a one-liner (the -p switch wraps the program in a loop that reads and prints every line of the files in @ARGV, so pruning @ARGV in the BEGIN block limits which files are copied):

    perl -pe'BEGIN {
       @ARGV = reverse sort @ARGV;
       splice(@ARGV, 7);
    }' /home/adrian/app/var/log/log* > combined

Upvotes: 3
