Reputation: 171
I have a text file which contains:
ABC 50
DEF 70
XYZ 20
DEF 100
MNP 60
ABC 30
I want output that sums the values for each unique first-column name. For example, the total of all ABC values in the file is 50 + 30 = 80, and the DEF total is 70 + 100 = 170. So the output should sum up all unique first-column names as:
ABC 80
DEF 170
XYZ 20
MNP 60
Any help will be greatly appreciated.
Thanks
Upvotes: 17
Views: 14211
Reputation: 5858
perl -lane '$_{$F[0]}+=$F[1]}print"$_ $_{$_}"for keys%_;{' file
And a slightly less straightforward variant:
perl -ape '$_{$F[0]}+=$F[1]}map$\.="$_ $_{$_}\n",keys%_;{' file
Upvotes: 2
Reputation: 6724
awk '{sums[$1] += $2} END { for (i in sums) printf("%s %s\n", i, sums[i])}' input_file | sort
If you don't need the results sorted alphabetically, just drop the | sort part.
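A quick check of the above against the sample data from the question (input_file recreated locally):

```shell
# Recreate the sample input from the question
printf 'ABC 50\nDEF 70\nXYZ 20\nDEF 100\nMNP 60\nABC 30\n' > input_file

# sums[$1] accumulates column 2 per distinct first-column key;
# the END block prints one "key total" line per key
awk '{sums[$1] += $2} END { for (i in sums) printf("%s %s\n", i, sums[i])}' input_file | sort
# ABC 80
# DEF 170
# MNP 60
# XYZ 20
```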
Upvotes: 6
Reputation: 139711
$ perl -lane \
    '$sum{$F[0]} += $F[1]; END { print "$_ $sum{$_}" for sort grep length, keys %sum }' \
    input
ABC 80
DEF 170
MNP 60
XYZ 20
Upvotes: 5
Reputation: 343211
$ awk '{a[$1]+=$2}END{for(i in a) print i,a[i]}' file
ABC 80
XYZ 20
MNP 60
DEF 170
Upvotes: 19
Reputation: 30245
my %data;
while (<>) {
    if (my ($key, $value) = /^(\w+) \s* (\d+)$/x) {
        $data{$key} += $value;
    }
}
printf "%s %s\n", $_, $data{$_} for keys %data;
Upvotes: 1