Paul

Reputation: 1117

filter out duplicates from text file

I have a text file as follows:

a b 1.25 3.5
a c 1.25 3.4
b c 3.4  3.5
d e 3.4  3.4
f g 4.5  6.7
a b 1.3  4.6

I would like to remove every row whose first-column or second-column value appears more than once. All the posts I have seen so far retain the first instance of the duplicate, but I want to drop all instances. The output should look something like:

d e 3.4  3.4
f g 4.5  6.7

Upvotes: 0

Views: 42

Answers (1)

Ed Morton

Reputation: 203522

$ awk 'NR==FNR{cnt1[$1]++; cnt2[$2]++; next} (cnt1[$1]==1) && (cnt2[$2]==1)' file file
d e 3.4  3.4
f g 4.5  6.7
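
For readers unfamiliar with the two-pass idiom: the command reads the same file twice. On the first pass it counts how often each first-field and second-field value occurs; on the second pass it prints only the lines whose fields occurred exactly once. Below is a commented version of the same command (identical behavior, just spread over multiple lines), assuming the input is in a file named file:

$ awk '
    NR==FNR {        # first pass: FNR resets for each input file, so NR==FNR is true only for the first "file"
        cnt1[$1]++   # count occurrences of each first-field value
        cnt2[$2]++   # count occurrences of each second-field value
        next         # do not print anything on the first pass
    }
    # second pass: the condition below acts as a pattern; awk prints the line when it is true
    (cnt1[$1]==1) && (cnt2[$2]==1)
' file file
d e 3.4  3.4
f g 4.5  6.7

Because both fields must be unique across the whole file, the a/b/c lines are all dropped: a and b each appear twice in column 1, and b and c each appear twice in column 2.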

Upvotes: 3
