Reputation: 7421
I've got a busybox system which doesn't have uniq, and I'd like to generate a unique list of the duplicated lines.
A plain uniq emulated in awk would be:
sort <filename> | awk '!($0 in a){a[$0]; print}'
How can I use awk (or sed for that matter, not perl) to accomplish:
sort <filename> | uniq -d
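For example, with hypothetical sorted input a, b, b, c, c, c (one item per line), the command should print just:
b
c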
Upvotes: 3
Views: 1873
Reputation: 58401
This might work for you:
# make some test data
seq 25 >/tmp/a
seq 3 3 25 >>/tmp/a
seq 5 5 25 >>/tmp/a
# run old command
sort -n /tmp/a | uniq -d
3
5
6
9
10
12
15
18
20
21
24
25
# run sed command
sort -n /tmp/a |
sed ':a;$bb;N;/^\([^\n]*\)\(\n\1\)*$/ba;:b;/^\([^\n]*\)\(\n\1\)\+/{s//\1/;P};D'
3
5
6
9
10
12
15
18
20
21
24
25
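If the one-liner is hard to follow, here is the same script spread out with comments (GNU sed syntax assumed):
sed '
  :a
  # on the last input line, jump straight to the output step
  $bb
  # otherwise append the next line to the pattern space
  N
  # still a run of identical lines? keep collecting
  /^\([^\n]*\)\(\n\1\)*$/ba
  :b
  # does the run contain at least one duplicate?
  /^\([^\n]*\)\(\n\1\)\+/{
    # squeeze the run down to a single copy and print it
    s//\1/
    P
  }
  # drop the first line of the pattern space and restart
  D
' /tmp/a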
Upvotes: 0
Reputation: 360065
On a busybox system, you might need to save bytes. ;-)
awk ++a[\$0]==2
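The backslash only protects $0 from the shell because the program is unquoted; quoted, the same one-liner reads:
awk '++a[$0]==2' <filename>
The pattern becomes true exactly once per duplicated line (on its second occurrence), and with no action awk just prints the line, so the input doesn't even need to be sorted.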
Upvotes: 6
Reputation: 79185
You could do this (no need to sort it):
awk '{++a[$0]; if(a[$0] == 2) print}'
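For example, on unsorted, hypothetical sample data it prints each duplicated line once, at its second occurrence:
printf '%s\n' red blue red green blue red | awk '{++a[$0]; if(a[$0] == 2) print}'
red
blue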
Upvotes: 3