Reputation: 77
For example, I have two files:
file1.log:
123
456
789
and
file2.log:
123
456
789
How can I quickly sum them together, line by line?
(My actual files each have over a million lines, but both files have the same number of lines.)
The output would be:
246
912
1578
Upvotes: 3
Views: 238
Reputation: 3451
Here's a Perl solution:
perl -le 'open $in1, shift; open $in2, shift; while (defined($n1 = <$in1>) and defined($n2 = <$in2>)) { print $n1 + $n2 }' file1.log file2.log
A more readable (but slightly slower) version:
use strict;
use warnings;

open my $in1, "<", shift or die "Cannot open first file: $!";
open my $in2, "<", shift or die "Cannot open second file: $!";

while ( defined( my $n1 = <$in1> ) and defined( my $n2 = <$in2> ) ) {
    print $n1 + $n2, "\n";
}

close($in1);
close($in2);
Just for kicks, I benchmarked the above vs all other solutions listed here.
They all produce the same output.
The input files are lists of the integers from 1 to 999999, generated with:
seq 999999 > file1.log ; seq 999999 > file2.log
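A minimal harness for producing rates like these with Perl's core Benchmark module might look as follows (the candidate names and command strings are illustrative, not necessarily the exact ones timed):

use Benchmark qw(cmpthese);

# Run each candidate as a shell command 10 times and compare rates.
# Output is discarded so only the summing work is measured.
cmpthese(10, {
    'paste+bc' => sub { system(q{paste -d + file1.log file2.log | bc > /dev/null}) },
    'awk'      => sub { system(q{awk '{getline t < "file2.log"; print $0+t}' file1.log > /dev/null}) },
    'perl'     => sub { system(q{perl -le 'open $in1, shift; open $in2, shift; while (defined($n1 = <$in1>) and defined($n2 = <$in2>)) { print $n1 + $n2 }' file1.log file2.log > /dev/null}) },
});

cmpthese prints a comparison table of rates in runs per second, which is the format of the results below.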
Benchmark results (10 runs each):
shell@thor 0.070/s
paste@cyrus 0.410/s
awk@anubhava 0.546/s
perl_5.6.1 1.06/s
paste@veerendra 1.32/s
perl_5.20 (readable version) 1.37/s
awk@karakfa (awk 3.1.5) 1.44/s
perl_5.20 1.75/s
Upvotes: 0
Reputation: 4472
This could be another option:
paste file1.log file2.log | awk '{print $1 + $2;}'
#246
#912
#1578
Upvotes: 1
Reputation: 47099
I think paste and bc are the best solution here. Just for fun, here is one way to do it in pure Bash (adapted from this post on unix.sx):
while read n1 <&3 && read n2 <&4; do
echo $((n1 + n2))
done 3<file1.log 4<file2.log
Or use the -u fd option with read (thanks rici):
while read -u3 n1 && read -u4 n2; do
echo $((n1 + n2))
done 3<file1.log 4<file2.log
Output:
246
912
1578
Upvotes: 1
Reputation: 67467
awk to the rescue
awk '{getline t < "file2.log"; print $0+t}' file1.log
Upvotes: 3
Reputation: 88583
Try this:
paste -d + file1.log file2.log | bc
Output:
246
912
1578
Upvotes: 10
Reputation: 784988
Using awk you can do:
awk 'FNR==NR{a[FNR]=$1; next} {print $1 + a[FNR]}' file1 file2
246
912
1578
Upvotes: 2