Reputation: 1687
How can I find the unique lines and remove all duplicates from a file? My input file is
1
1
2
3
5
5
7
7
I would like the result to be:
2
3
sort file | uniq
will not do the job, since it shows every value once.
Upvotes: 139
Views: 301382
Reputation: 311
You could also print the unique values in "file" using the cat
command, piping to sort
and uniq
cat file | sort | uniq -u
Upvotes: 31
Reputation: 2113
I find this easier.
sort -u input_filename > output_filename
-u
stands for unique.
Upvotes: 22
Reputation: 632
While sort
takes O(n log(n)) time, I prefer using
awk '!seen[$0]++'
awk '!seen[$0]++'
is an abbreviation for awk '!seen[$0]++ {print}'
: print the line (=$0) if seen[$0]
is zero.
It takes more space but only O(n) time.
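To see it concretely, here is a quick sketch on the question's sample input. Note that this de-duplicates (keeps the first copy of every line, preserving input order) rather than dropping every line that occurs more than once:

```shell
# Keeps the first occurrence of each line, in original order.
printf '1\n1\n2\n3\n5\n5\n7\n7\n' | awk '!seen[$0]++'
# prints 1, 2, 3, 5 and 7, one per line
```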
Upvotes: 22
Reputation: 798
you can use:
sort data.txt | uniq -u
This sorts the data and filters out duplicated values, keeping only the unique ones.
Upvotes: 14
Reputation: 31
sort -d "file name" | uniq -u
This worked for me for a similar problem. Use sort if the data is not already arranged; you can remove it if the data is already sorted.
Upvotes: 3
Reputation:
uniq
should do fine if your file is sorted or can be sorted; if you can't sort the file for some reason, you can use awk
:
awk '{a[$0]++}END{for(i in a)if(a[i]<2)print i}'
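For example, on the question's sample input this prints only the lines that occur exactly once. The output order is arbitrary, since `for (i in a)` does not guarantee any ordering:

```shell
# Counts every line, then prints the ones seen fewer than twice.
printf '1\n1\n2\n3\n5\n5\n7\n7\n' |
  awk '{a[$0]++}END{for(i in a)if(a[i]<2)print i}'
# prints 2 and 3 (in arbitrary order)
```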
Upvotes: 3
Reputation: 45
Instead of sorting and then using uniq
, you could also just use sort -u
. From sort --help
:
-u, --unique    with -c, check for strict ordering;
                without -c, output only the first of an equal run
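Note that sort -u answers a slightly different question than the one asked here: it keeps one copy of every line, while sort | uniq -u drops any line that occurs more than once. A quick comparison on the question's sample input:

```shell
printf '1\n1\n2\n3\n5\n5\n7\n7\n' | sort -u          # 1 2 3 5 7
printf '1\n1\n2\n3\n5\n5\n7\n7\n' | sort | uniq -u   # 2 3
```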
Upvotes: 0
Reputation: 327
uniq -u has been driving me crazy because it did not work.
So instead, if you have Python (most Linux distros and servers already have it):
#Python
#Assuming the file has one entry per line;
#otherwise adjust split() accordingly.
uniqueData = []
fileData = open('notUnique.txt').read().split('\n')
for i in fileData:
    if i.strip() != '' and i not in uniqueData:
        uniqueData.append(i)
print(uniqueData)
###Another option (fewer keystrokes):
set(open('notUnique.txt').read().split('\n'))
Just FYI, From the uniq Man page:
"Note: 'uniq' does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use 'sort -u' without 'uniq'. Also, comparisons honor the rules specified by 'LC_COLLATE'."
One of the correct ways to invoke it: # sort nonUnique.txt | uniq
$ cat x
3
1
2
2
2
3
1
3
$ uniq x
3
1
2
3
1
3
$ uniq -u x
3
1
3
1
3
$ sort x | uniq
1
2
3
Upvotes: 11
Reputation: 1687
This was the first thing I tried:
skilla:~# uniq -u all.sorted
76679787
76679787
76794979
76794979
76869286
76869286
......
After running cat -e all.sorted:
skilla:~# cat -e all.sorted
$
76679787$
76679787 $
76701427$
76701427$
76794979$
76794979 $
76869286$
76869286 $
Every second line has a trailing space :( After removing all trailing spaces, it worked!
Thank you!
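For anyone hitting the same problem: the trailing spaces can be stripped in the pipeline itself, e.g. with sed (a sketch; all.sorted is the file name from the question, substitute your own):

```shell
# Remove trailing whitespace from each line, then keep only the
# lines that occur exactly once.
sed 's/[[:space:]]*$//' all.sorted | sort | uniq -u
```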
Upvotes: 0
Reputation: 65781
uniq
has the option you need:
-u, --unique
only print unique lines
$ cat file.txt
1
1
2
3
5
5
7
7
$ uniq -u file.txt
2
3
Upvotes: 125