Reputation: 10882
I have a utility script in Python:
#!/usr/bin/env python
import sys

unique_lines = []
duplicate_lines = []

for line in sys.stdin:
    if line in unique_lines:
        duplicate_lines.append(line)
    else:
        unique_lines.append(line)
        sys.stdout.write(line)

# optionally do something with duplicate_lines
This simple functionality (uniq without needing to sort first, with stable ordering) must be available as a simple UNIX utility, mustn't it? Maybe a combination of filters in a pipe?
Reason for asking: I need this functionality on a system on which I cannot execute Python from anywhere.
Upvotes: 167
Views: 89873
Reputation: 34314
The UNIX Bash Scripting blog suggests:
awk '!x[$0]++'
This command tells awk which lines to print. The variable $0 holds the entire contents of a line, and square brackets denote array access. So, for each line of the file, the node of the array x is incremented, and the line is printed if the content of that node was not (!) previously set.
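For example, feeding a small stream through this filter keeps only the first occurrence of each line, in the original order:

```shell
printf 'a\nb\na\nc\nb\n' | awk '!x[$0]++'
# prints:
# a
# b
# c
```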
Upvotes: 403
Reputation: 3083
uq

uq is a small tool written in Rust. It performs uniqueness filtering without having to sort the input first, so it can be applied to a continuous stream.
There are two advantages of this tool over the top-voted awk solution and other shell-based solutions:
- uq remembers the occurrence of lines using their hash values, so it doesn't use as much memory when the lines are long.
- uq can keep memory usage constant by setting a limit on the number of entries to store (when the limit is reached, a flag controls whether to override or to die), while the awk solution could run into OOM when there are too many lines.
Upvotes: 7
Reputation: 15986
A late answer - I just ran into a duplicate of this - but perhaps worth adding...
The principle behind @1_CR's answer can be written more concisely, using cat -n instead of awk to add line numbers:
cat -n file_name | sort -uk2 | sort -n | cut -f2-
- cat -n prepends line numbers
- sort -u removes duplicate data (-k2 says 'start at field 2 for sort key')
- sort -n sorts by the prepended number
- cut removes the line numbering (-f2- says 'select field 2 till end')
Upvotes: 110
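Walking a small example through the pipeline (this relies on GNU sort keeping the first of equal-keyed lines when -u is given):

```shell
printf 'a\nb\na\nc\n' | cat -n | sort -uk2 | sort -n | cut -f2-
# prints:
# a
# b
# c
```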
Reputation: 1777
The uniq command even works in an alias: http://man7.org/linux/man-pages/man1/uniq.1.html
Upvotes: -3
Reputation: 6279
To remove duplicates from two files:
awk '!a[$0]++' file1.csv file2.csv
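A quick sketch: with file1.csv containing x and y, and file2.csv containing y and z, the combined output keeps the first occurrence of each line across both files:

```shell
printf 'x\ny\n' > file1.csv
printf 'y\nz\n' > file2.csv
awk '!a[$0]++' file1.csv file2.csv
# prints:
# x
# y
# z
```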
Upvotes: 10
Reputation: 1
I just wanted to remove duplicates on consecutive lines, not everywhere in the file. So I used:
awk '{
    if ($0 != PREVLINE) print $0;
    PREVLINE = $0;
}'
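This is the behaviour of plain uniq; only adjacent repeats are dropped, so a line that recurs later is printed again:

```shell
printf 'a\na\nb\na\n' | awk '{
    if ($0 != PREVLINE) print $0;
    PREVLINE = $0;
}'
# prints:
# a
# b
# a
```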
Upvotes: -1
Reputation: 21
Thanks 1_CR! I needed a "uniq -u" (remove duplicates entirely) rather than uniq (leave one copy of duplicates). The awk and perl solutions can't really be modified to do this, but yours can! I may have also needed the lower memory use, since I will be uniq'ing some 100,000,000 lines 8-). Just in case anyone else needs it, I just put a "-u" in the uniq portion of the command:
awk '{print(NR"\t"$0)}' file_name | sort -t$'\t' -k2,2 | uniq -u --skip-fields 1 | sort -k1,1 -t$'\t' | cut -f2 -d$'\t'
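For instance, when a repeats in the input, only the lines that occur exactly once survive (the $'\t' syntax needs bash):

```shell
printf 'a\nb\na\nc\n' | awk '{print(NR"\t"$0)}' | sort -t$'\t' -k2,2 \
    | uniq -u --skip-fields 1 | sort -k1,1 -t$'\t' | cut -f2 -d$'\t'
# prints:
# b
# c
```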
Upvotes: 2
Reputation: 23374
Michael Hoffman's solution above is short and sweet. For larger files, a Schwartzian-transform approach - adding an index field with awk, followed by multiple rounds of sort and uniq - involves less memory overhead. The following snippet works in bash:
awk '{print(NR"\t"$0)}' file_name | sort -t$'\t' -k2,2 | uniq --skip-fields 1 | sort -k1,1 -t$'\t' | cut -f2 -d$'\t'
Upvotes: 5
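For example, a stream with a repeated line keeps one copy of each line in the original order (the $'\t' syntax needs bash; uniq keeps the earliest copy because sort's last-resort comparison puts the lower line number first):

```shell
printf 'b\na\nb\nc\n' | awk '{print(NR"\t"$0)}' | sort -t$'\t' -k2,2 \
    | uniq --skip-fields 1 | sort -k1,1 -t$'\t' | cut -f2 -d$'\t'
# prints:
# b
# a
# c
```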