Deep

Reputation: 588

Finding duplicates in a dataframe and returning count of each duplicate record

I have a dataframe like

col1 col2 col3
A    B    C
A    B    C
A    B    B
A    B    B
A    B    C
B    C    A

I want to get an output in the below format:

col1 col2 col3 Count
A    B    C    3 Duplicates
A    B    B    2 Duplicates

I don't want to use any specific column in the function to find the duplicates.

That is the reason for not using add_count from dplyr.

Using duplicated() gives:

    col1 col2 col3 count
2   A    B    C    3
3   A    B    B    2
5   A    B    C    3

So not the desired output.
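For anyone testing answers, the example data frame can be rebuilt from the table in the question (a minimal sketch; `df1` is an assumed name, matching the one used in the answers below):

```r
# Rebuild the example data frame from the question's table.
df1 <- data.frame(
  col1 = c("A", "A", "A", "A", "A", "B"),
  col2 = c("B", "B", "B", "B", "B", "C"),
  col3 = c("C", "C", "B", "B", "C", "A"),
  stringsAsFactors = TRUE  # assumption: factor columns, as in the dplyr answer's printed output
)

# duplicated() flags repeat occurrences only (rows 2, 4 and 5 here),
# which is why it alone cannot produce the per-group counts asked for.
sum(duplicated(df1))
```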

Upvotes: 6

Views: 174

Answers (2)

akrun

Reputation: 886938

We can use data.table

library(data.table)
setDT(df1)[, .(n =.N), names(df1)][n > 1]
#   col1 col2 col3 n
#1:    A    B    C 3
#2:    A    B    B 2

Or with base R

subset(aggregate(n ~ ., transform(df1, n = 1), FUN = sum), n > 1)
#  col1 col2 col3 n
#2    A    B    B 2
#3    A    B    C 3

Upvotes: 2

Ronak Shah

Reputation: 388807

We can use group_by_all to group by all columns at once, then keep only the duplicated combinations by selecting rows with count > 1.

library(dplyr)

df %>%
  group_by_all() %>%
  count() %>%
  filter(n > 1)

#  col1  col2  col3      n
# <fct> <fct> <fct>   <int>
#1 A     B     B         2
#2 A     B     C         3
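Note that in later dplyr versions (1.0+) group_by_all() is superseded; an equivalent using count() with across() might look like this (a sketch, assuming dplyr >= 1.0 and the same df as above):

```r
library(dplyr)

df <- data.frame(
  col1 = c("A", "A", "A", "A", "A", "B"),
  col2 = c("B", "B", "B", "B", "B", "C"),
  col3 = c("C", "C", "B", "B", "C", "A")
)

# across(everything()) lets count() group by every column without naming any,
# matching the requirement of not hard-coding specific columns.
df %>%
  count(across(everything())) %>%
  filter(n > 1)
```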

Upvotes: 4
