Reputation: 23
I have a file as below:
cat file
a 1
a 2
b 3
I want to delete the rows "a 1" and "a 2" because their first column is the same.
I tried cat file | uniq -f 1
and I'm getting the desired output, but I want the duplicates deleted from the file itself.
Upvotes: 0
Views: 34
Reputation: 195059
awk 'NR==FNR{a[$1]++;next}a[$1]==1{print}' file file
This one-liner works for your needs whether or not the file is sorted.
It processes the file twice. The first pass records, in a hash table (key: first column, value: occurrence count), how many times each first-column value appears. The second pass checks the hash table and prints a line only if its first column's count is 1, i.e., the line is unique with respect to column 1.
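Since awk does not portably edit a file in place, a common pattern (my addition, not part of the original answer) is to redirect the output to a temporary file and then move it over the original:

```shell
# work in a scratch directory and recreate the sample file
cd "$(mktemp -d)"
printf 'a 1\na 2\nb 3\n' > file

# first pass counts occurrences of column 1; second pass keeps only
# lines whose first column appeared exactly once, writing the result
# to a temporary file, then replace the original
awk 'NR==FNR{a[$1]++;next}a[$1]==1{print}' file file > file.tmp && mv file.tmp file

cat file   # now contains only: b 3
```

The && guard ensures the original file is only overwritten if awk exits successfully.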
Upvotes: 1