Reputation: 855
I have an issue where I have to merge several different files.
Take these two files as an example. The columns are the same; only the values differ. Each file is tab-delimited. I want to join all columns by the ID information.
First file Test.txt
ID ID2 ID3 Name Telephone
1 A + John 011
1 B - Mike 012
2 C + Sam 013
3 A - Jena 014
4 B + Peter 015
Second file Test2.txt
ID ID2 ID3 Name Telephone
2 C + Henry 013
3 A - Ho 014
1 A + Jamy 011
1 B - Mark 012
4 B + Jung 015
Then Final result
ID ID2 ID3 Name Telephone Name Telephone
1 A + John 011 Jamy 011
1 B - Mike 012 Mark 012
2 C + Sam 013 Henry 013
3 A - Jena 014 Ho 014
4 B + Peter 015 Jung 015
So the combination depends on ID, ID2, and ID3.
I tried to use join, something like join -a1 -a2 -a3 Test1.txt Test2.txt > Test3.txt
but there are problems with performance and with joining multiple files, and I am not sure the join is correct.
Does anyone have a better idea?
Upvotes: 1
Views: 171
Reputation: 77085
Using awk, you can build a string for each unique key seen in your files, then pipe the output to column -t for pretty printing.
I used columns 1, 2, and 3 as the key, appending the remaining columns from every file to the original line.
awk --re-interval -F"\t" '
{ key = $1 SUBSEP $2 SUBSEP $3 }
{
if (line[key]) {
sub (/([^\t]+\t+){3}/,"");
line[key] = line[key] FS $0
}
else {
line[key] = $0
}
}
END {
for (key in line) print line[key]
}' file* | column -t | sort -r
ID ID2 ID3 Name Telephone Name Telephone
4 B + Peter 015 Jung 015
3 A - Jena 014 Ho 014
2 C + Sam 013 Henry 013
1 B - Mike 012 Mark 012
1 A + John 011 Jamy 011
Note: If you are using GNU awk v4 or later, or BSD awk, you need not specify --re-interval.
If you are open to perl, then you can do it in a single shot:
perl -F"\t" -lane '
$" = "\t";
$key = "@F[0..2]";
push @{ $line{$key} }, @F[3..$#F];
}{
print join "\t", $_, @{ $line{$_} } for grep { $_ =~ /ID/ } sort keys %line;
print join "\t", $_, @{ $line{$_} } for grep { not $_ =~ /ID/ } sort keys %line
' file*
Upvotes: 1
Reputation: 246754
awk -F"\t" -v OFS="\t" '
{key = $1 SUBSEP $2 SUBSEP $3}
FNR==NR {line[key]=$0; next}
key in line {print line[$1,$2,$3], $4, $5}
' Test.txt Test2.txt
ID ID2 ID3 Name Telephone Name Telephone
2 C + Sam 013 Henry 013
3 A - Jena 014 Ho 014
1 A + John 011 Jamy 011
1 B - Mike 012 Mark 012
4 B + Peter 015 Jung 015
If you want the output sorted, pipe the output into | { read header; echo "$header"; sort; }
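For example, the complete pipeline might look like this (a sketch; the printf lines merely recreate the sample files from the question):

```shell
# Recreate the tab-delimited sample files from the question
printf 'ID\tID2\tID3\tName\tTelephone\n1\tA\t+\tJohn\t011\n1\tB\t-\tMike\t012\n2\tC\t+\tSam\t013\n3\tA\t-\tJena\t014\n4\tB\t+\tPeter\t015\n' > Test.txt
printf 'ID\tID2\tID3\tName\tTelephone\n2\tC\t+\tHenry\t013\n3\tA\t-\tHo\t014\n1\tA\t+\tJamy\t011\n1\tB\t-\tMark\t012\n4\tB\t+\tJung\t015\n' > Test2.txt

# Join on the first three fields; keep the header line on top and sort the rest
awk -F"\t" -v OFS="\t" '
  {key = $1 SUBSEP $2 SUBSEP $3}
  FNR==NR {line[key]=$0; next}
  key in line {print line[$1,$2,$3], $4, $5}
' Test.txt Test2.txt | { read header; echo "$header"; sort; }
```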
With join, you can only join on one field, so you'd have to resort to something like
join -j1 -t$'\t' <(sed 's/\t/:/;s/\t/:/' Test.txt|sort) \
<(sed 's/\t/:/;s/\t/:/' Test2.txt|sort) |
sed 's/:/\t/;s/:/\t/'
and then the header is left at the bottom (which you can fix with | tac | { read header; echo "$header"; tac; })
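Put together, the whole join-based pipeline would be (a sketch; the printf lines only recreate the question's sample files):

```shell
# Recreate the tab-delimited sample files from the question
printf 'ID\tID2\tID3\tName\tTelephone\n1\tA\t+\tJohn\t011\n1\tB\t-\tMike\t012\n2\tC\t+\tSam\t013\n3\tA\t-\tJena\t014\n4\tB\t+\tPeter\t015\n' > Test.txt
printf 'ID\tID2\tID3\tName\tTelephone\n2\tC\t+\tHenry\t013\n3\tA\t-\tHo\t014\n1\tA\t+\tJamy\t011\n1\tB\t-\tMark\t012\n4\tB\t+\tJung\t015\n' > Test2.txt

# Collapse the three key fields into one with ':' so join sees a single key,
# join, split the key back into tabs, then rotate the header back to the top
join -j1 -t$'\t' <(sed 's/\t/:/;s/\t/:/' Test.txt | sort) \
                 <(sed 's/\t/:/;s/\t/:/' Test2.txt | sort) |
sed 's/:/\t/;s/:/\t/' |
tac | { read header; echo "$header"; tac; }
```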
Response to comment:
awk -F"\t" '
{key = $1 FS $2 FS $3}
NR == 1 {header = key}
!(key in result) {result[key] = $0; next}
{ for (i=4; i <= NF; i++) result[key] = result[key] FS $i }
END {
print result[header]
delete result[header]
PROCINFO["sorted_in"] = "@ind_str_asc" # if using GNU awk
for (key in result) print result[key]
}
' Test.txt Test2.txt # ... and other files
Upvotes: 2
Reputation: 88573
With GNU bash, GNU core utilities and GNU awk (this joins on field 5, the Telephone column, which happens to be unique per key in the sample data):
join -j 5 <(sort -n Test.txt) <(sort -n Test2.txt) | awk '{print $2,$3,$4,$5,$1,$9,$1}' | column -t
Output:
ID ID2 ID3 Name Telephone Name Telephone
1 A + John 011 Jamy 011
1 B - Mike 012 Mark 012
2 C + Sam 013 Henry 013
3 A - Jena 014 Ho 014
4 B + Peter 015 Jung 015
Upvotes: 1