Reputation: 483
I have a file with several million lines (Apple's EPF). I need to filter it using a few search terms, and at the same time I need the number of matched lines, to put in the last row of the output file. I thought of course about two runs - one that filters and another that counts - but that doesn't look like an optimal solution, because a single filter pass can take a few minutes.
For now I am testing something like this:
grep -f filtrowanie application_price_old > appprice_temp
perl -i -pe 's/#recordsWritten:\d{7,8}/#recordsWritten:`grep -c -e "^\d" appprice_temp`/' appprice_temp
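The two runs could also be collapsed into one pipeline (a sketch, assuming the count only needs to be appended as the last row rather than substituted into an existing placeholder): `tee` writes the matching lines to the output file while `wc -l` counts them on the same stream:

```shell
# One pass: tee duplicates the matching lines into appprice_temp
# while wc -l counts them on the same stream.
cnt=$(grep -f filtrowanie application_price_old | tee appprice_temp | wc -l)
cnt=$((cnt))   # strip the padding some wc implementations print
# Append the count as the last row of the output file.
echo "#recordsWritten:$cnt" >> appprice_temp
```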
Upvotes: 0
Views: 117
Reputation: 203493
Based on the text and code you provided in your question, this is probably what you're looking for:
awk '
NR==FNR { regexps[$0]; next }
{
    for (regexp in regexps) {
        if ($0 ~ regexp) {
            print
            cnt++
            next
        }
    }
}
END {
    print "#recordsWritten:" cnt+0
}
' filtrowanie application_price_old > appprice_temp
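For example, with hypothetical sample files (`filtrowanie` holding one regexp per line), the script prints each line that matches any regexp exactly once and appends the count:

```shell
# Hypothetical sample input: one regexp per line in filtrowanie.
printf '^foo\nbar$\n' > filtrowanie
printf 'foo 1\nbaz\nmybar\n' > application_price_old

awk '
NR==FNR { regexps[$0]; next }
{
    for (regexp in regexps) {
        if ($0 ~ regexp) { print; cnt++; next }
    }
}
END { print "#recordsWritten:" cnt+0 }
' filtrowanie application_price_old > appprice_temp

cat appprice_temp
# foo 1
# mybar
# #recordsWritten:2
```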
If that's not what you need then edit your question to clarify your requirements and provide concise, testable sample input and expected output.
Upvotes: 1