ilephant

Reputation: 21

Read the n-th line of multiple files into a single output

I have some dump files called dump_mydump_0.cfg, dump_mydump_250.cfg, ..., all the way up to dump_mydump_40000.cfg. From each dump file, I'd like to extract the 16th line and collect all of those lines into one single output file.

I'm using sed, but I'm running into syntax errors. Here's what I have so far:

for lineNo in 16 ;
for fileNo in 0,40000 ; do
    sed -n  "${lineNo}{p;q;}" dump_mydump_file${lineNo}.cfg >> data.txt
done

Upvotes: 0

Views: 252

Answers (3)

potong

Reputation: 58578

This might work for you (GNU sed):

sed -ns '16wdata.txt' dump_mydump_{0..40000..250}.cfg
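As a sanity check on a smaller scale (three hypothetical 20-line sample files instead of the OP's 161 dumps): `-s` treats each input file separately, so the address 16 matches line 16 of every file, and the `w` command writes each matching line to data.txt:

```shell
# Create three small sample files (hypothetical names and contents).
for n in 0 250 500; do
    seq -f "file${n}_line%g" 20 > "dump_mydump_${n}.cfg"
done

# -n suppresses automatic printing; -s (GNU sed) restarts line
# numbering for each input file; 16w writes line 16 to data.txt.
sed -ns '16wdata.txt' dump_mydump_{0..500..250}.cfg

cat data.txt
# file0_line16
# file250_line16
# file500_line16
```

({0..500..250} is bash brace expansion, expanding to 0 250 500.)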

Upvotes: 0

rici

Reputation: 241951

Alternatively, with (GNU) awk:

awk "FNR==16{print;nextfile}" dump_mydump_{0..40000..250}.cfg > data.txt

(I used the filenames shown in the OP rather than the ones the bash for loop would have generated had it been corrected to work. But you can edit as needed.)

The advantage is that you don't need the for loop, and you don't need to spawn 161 separate sed processes (one per file). But it's not a huge advantage.
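On the same kind of hypothetical sample files, the mechanics are: `FNR` is the per-file line counter (it resets for each input, unlike the global `NR`), so `FNR==16` fires once per file, and `nextfile` (a GNU awk extension) skips straight to the next input without reading the rest of the current one:

```shell
# Three hypothetical 20-line sample files.
for n in 0 250 500; do
    seq -f "file${n}_line%g" 20 > "dump_mydump_${n}.cfg"
done

# FNR==16: the 16th line of the current file (FNR resets per file);
# nextfile: stop reading this file and move on to the next one.
awk 'FNR==16{print;nextfile}' dump_mydump_{0..500..250}.cfg > data.txt
```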

Upvotes: 1

Rubens

Reputation: 14778

Since your files are named in steps of 250, you should get it working with:

for lineNo in 16; do
    for fileNo in {0..40000..250}; do
        sed -n "${lineNo}{p;q;}" dump_mydump_file${fileNo}.cfg >> data.txt
    done
done

Note both the bash syntax corrections (do, done, and {0..40000..250}) and the input file name, which should depend on ${fileNo} rather than ${lineNo}.
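In the corrected loop, `${lineNo}{p;q;}` means: at line 16, `p` prints the line and `q` quits, so sed never reads past line 16 of any file. A runnable small-scale sketch (hypothetical sample files; note the loop appends with `>>`, so data.txt should start empty):

```shell
# Three hypothetical 20-line sample files, named as in the loop below.
for n in 0 250 500; do
    seq -f "file${n}_line%g" 20 > "dump_mydump_file${n}.cfg"
done

: > data.txt    # truncate first, because the loop appends with >>
for lineNo in 16; do
    for fileNo in 0 250 500; do
        # -n: no auto-print; at line $lineNo, p prints it, then q quits.
        sed -n "${lineNo}{p;q;}" "dump_mydump_file${fileNo}.cfg" >> data.txt
    done
done
```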

Upvotes: 1
