Reputation: 66214
I have 100 files, each with 1000 lines:
$ cat 1.txt
line1.1
line1.2
...
line1.1000
$ cat 2.txt
line2.1
line2.2
...
line2.1000
...
$ cat 100.txt
line100.1
line100.2
...
line100.1000
What's the easiest way to interleave them so that I end up with 1000 files, each with 100 lines, such that the first file contains all first lines from the 100 files, the second file contains all second lines from the 100 files, and so on:
$ cat 1.txt
line1.1
line2.1
...
line100.1
$ cat 2.txt
line1.2
line2.2
...
line100.2
...
$ cat 1000.txt
line1.1000
line2.1000
...
line100.1000
I could write a Python script, but I was wondering if there is a clever solution that uses UNIX tools.
Upvotes: 0
Views: 610
Reputation: 274768
The following paste and split combo should work:
paste -d '\n' {1..100}.txt | split -l 100 -a 4 -d - out
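One detail worth noting: GNU split names its pieces out0000, out0001, … rather than 1.txt, 2.txt, …. A minimal sketch of the whole pipeline plus a rename pass, scaled down to three 2-line inputs in an assumed scratch directory demo/ (all file names here are illustrative):

```shell
# Build 3 tiny inputs: demo/1.txt .. demo/3.txt, 2 lines each.
mkdir -p demo
for i in 1 2 3; do printf 'line%d.1\nline%d.2\n' "$i" "$i" > "demo/$i.txt"; done

# paste -d '\n' emits line1.1, line2.1, line3.1, line1.2, ... one per line;
# split (GNU split, for -d) cuts that stream every 3 lines, one chunk per
# output file.
paste -d '\n' demo/{1..3}.txt | split -l 3 -a 4 -d - demo/out

# Rename out0000, out0001, ... to 1.out, 2.out, ... -- a different
# extension, so the .txt inputs are never clobbered.
n=1
for f in demo/out*; do mv "$f" "demo/$n.out"; n=$((n + 1)); done

cat demo/1.out   # line1.1 / line2.1 / line3.1
```

For the real data, `-l 100` (as in the answer) and 100 inputs give 1000 chunks to rename the same way.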
Upvotes: 1
Reputation: 67301
awk '{if(count==1000){count=0}count++;print > (count ".txt")}' *.txt
(Parenthesizing the target of the redirection avoids ambiguous parsing in some awks. Also note that with inputs literally named 1.txt through 100.txt, the output names collide with the input names, so rename the inputs or write into another directory first; that is why the test below uses a .txte extension.)
tested successfully with two files of two lines each:
> cat 1.txte
1
2
> cat 2.txte
1
2
> awk '{if(count==2){count=0}count++;print > (count ".txt")}' *.txte
> cat 1.txt
1
1
> cat 2.txt
2
2
>
so you just have to change count==2 to count==1000, since your files have 1000 lines.
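The manual counter can also be dropped entirely: awk's built-in FNR is the per-file line number and resets at the start of each input file, so line N of every file lands in N.txt on its own. A sketch on two tiny inputs (file names a.in and b.in are assumptions):

```shell
# Two 2-line demo inputs.
printf '1\n2\n' > a.in
printf '1\n2\n' > b.in

# FNR resets to 1 for each new input file, so no count/reset logic is
# needed. With 1000 distinct output files you may hit the per-process
# open-file limit in some awks; close() each file as you go if so.
awk '{ print > (FNR ".txt") }' a.in b.in

cat 1.txt   # 1 / 1
```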
Upvotes: 0
Reputation: 45670
How about using paste to line things up?
$ paste 1.txt 2.txt 3.txt
line1.1 line2.1 line3.1
line1.2 line2.2 line3.2
line1.3 line2.3 line3.3
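Each row of that output holds the Nth line of every input, so row N can be turned into output file N by converting its tabs back into newlines (this assumes the data itself contains no tabs; row1.out is a hypothetical name):

```shell
# Two 2-line demo inputs (names are assumptions).
printf 'line1.1\nline1.2\n' > x.in
printf 'line2.1\nline2.2\n' > y.in

# Row N of the paste output is output file N; extract row 1 and
# convert the tab separators back into newlines.
paste x.in y.in | sed -n '1p' | tr '\t' '\n' > row1.out

cat row1.out   # line1.1 / line2.1
```

Looping this over all 1000 rows re-reads the paste output once per row, so it is only a sketch of the idea, not an efficient solution.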
Upvotes: 0
Reputation: 468
Making the assumption that your output file names don't clash with your input file names, I'd use the following. If you do have name clashes, modify it to accumulate the output files in a temporary directory.
#!/bin/bash
for infilenum in {1..100}
do
    outfilenum=1
    while IFS= read -r line
    do
        echo "$line" >> "$outfilenum.out"
        let outfilenum=outfilenum+1
    done < "$infilenum.txt"
done
Upvotes: 0