pkr13

Reputation: 107

Any Perl Module for repetitive search and extract of a file's content?

I need to parse some log files where the data repeats in a particular pattern. I need to search for particular 'keywords' in the data and then extract data from the following lines, and continue this for the whole file. I know this can be done with basic Perl scripting, but is there a Perl module that simplifies this kind of task?

Upvotes: 1

Views: 177

Answers (3)

pkr13

Reputation: 107

Thanks for suggesting other options. I actually found that using the 'flip-flop' operator with 'if' solves my problem very aptly. Only after using it did I realize that asking for a 'module' for a trivial task like this was too much on my part :).
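
For reference, a minimal sketch of that flip-flop approach, assuming hypothetical 'START' and 'END' marker patterns in place of the real keywords:

use strict;
use warnings;

while (my $line = <>) {
    # The flip-flop operator (..) is false until the start pattern
    # matches, then stays true until the end pattern matches.
    if ($line =~ /^START/ .. $line =~ /^END/) {
        print $line;
    }
}

Run it as perl extract.pl mylog; the two patterns mark where each block of interesting data begins and ends.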

Upvotes: 0

RET

Reputation: 9188

You could have a look at cgrep, which is an example of exactly this type of processing. It can be used in a pipeline, e.g.

cat mylog | cgrep -w0:1 'regexp' | grep -v 'regexp' | sed 's/.../.../'

In other words, grep for regexp, outputting zero lines before the match and one after, then remove the original matches, and format the result. You may not want to use sed for the last step; it's just an example.
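
If cgrep isn't at hand, the same idea is easy to write directly in Perl; a minimal sketch, where /regexp/ stands in for the real keyword pattern:

use strict;
use warnings;

while (my $line = <>) {
    if ($line =~ /regexp/) {            # the keyword line itself
        my $next = <>;                  # the line that follows it
        print $next if defined $next;   # keep only the following line
    }
}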

cgrep appears in the earliest editions of Programming Perl (the Camel book). It's pretty easy to find.

Upvotes: 0

tuomassalo

Reputation: 9111

There's probably no such module, because the code is quite trivial and, on the other hand, the details are quite problem-specific.

I've had a similar problem many times. The input has been something like:

Date: 2011-11-10
<an interesting line>
<another interesting line>
Date: 2011-11-11
<more interesting lines>

And I've needed to extract all the "interesting lines" while knowing the date for each. One-liners or short throwaway scripts have worked very well for this purpose. With one-liners, it's good to be familiar with useful switches like -l and -a; perl -wlane '...' is something I've written a thousand times.
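
A one-liner in that spirit for the sample input above (the /interesting/ pattern is just a placeholder for whatever identifies the real lines, and mylog is a hypothetical file name):

perl -wlne '$date = $1 if /^Date: (\S+)/; print "$date: $_" if /interesting/' mylog

Here only -l is actually used (it chomps input lines and adds the newline back on print); -a would come into play if the interesting lines needed to be split into fields via @F.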

Upvotes: 1
