
Reputation: 10431

R Removing duplicate entries in dataframe and keeping rows with fewer NAs and zeroes

I would like to deduplicate a data.frame generated in another part of my codebase, where I cannot know the order of the columns or rows. The data.frame has some columns I want to compare for duplication, here A and B, but among the duplicated rows I would like to keep the ones with fewer NAs and zeros in the other columns, here C, D, and E.

tc <- 'Id  B   A   C  D  E
        1  62  12  0  NA  NA
        2  12  62  1  1   1
        3  2   62  1  1   1
        4  62  12  1  1   1
        5  55  23  0  0   0'

df <- read.table(textConnection(tc), header = TRUE)

I can use duplicated, but since I cannot control the order of the columns and rows in the incoming data.frame, I need a way to keep the unique rows that have the fewest NAs and zeros.

This will work in the example, but won't if the incoming data.frame has a different order:

df[!duplicated(data.frame(A = df$A, B = df$B), fromLast = TRUE), ]
  Id  B  A C D E
2  2 12 62 1 1 1
3  3  2 62 1 1 1
4  4 62 12 1 1 1
5  5 55 23 0 0 0
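To see why this is fragile, here is a minimal sketch (rebuilding the example data above) in which simply reversing the row order makes `fromLast = TRUE` keep the wrong row:

```r
# Rebuild the example data from the question
tc <- 'Id  B   A   C  D  E
        1  62  12  0  NA  NA
        2  12  62  1  1   1
        3  2   62  1  1   1
        4  62  12  1  1   1
        5  55  23  0  0   0'
df <- read.table(textConnection(tc), header = TRUE)

# Reverse the row order to simulate the data arriving differently
df_rev <- df[nrow(df):1, ]
kept <- df_rev[!duplicated(df_rev[c("A", "B")], fromLast = TRUE), ]
kept$Id
# [1] 5 3 2 1  -- the NA-laden row (Id 1) now survives instead of Id 4
```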

Any ideas?

Upvotes: 3

Views: 2231

Answers (1)

Sven Hohenstein

Reputation: 81693

Here's an approach based on counting valid values and reordering the data frame.

First, count the NAs and 0s in columns C, D, and E:

rs <- rowSums(is.na(df[c("C", "D", "E")]) | !df[c("C", "D", "E")])
# [1] 3 0 0 0 3
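The expression `is.na(x) | !x` flags both cases at once: `is.na` catches the NAs, `!x` is `TRUE` exactly for zeros, and for an NA element the left side is already `TRUE`, so the `NA` produced by `!x` is absorbed. A quick check on a toy vector:

```r
x <- c(1, 0, NA)
is.na(x) | !x
# [1] FALSE  TRUE  TRUE
```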

Second, order the data frame by A, B, and the new variable:

df_ordered <- df[order(df$A, df$B, rs), ]
#   Id  B  A C  D  E
# 4  4 62 12 1  1  1
# 1  1 62 12 0 NA NA
# 5  5 55 23 0  0  0
# 3  3  2 62 1  1  1
# 2  2 12 62 1  1  1

Now you can remove the duplicated rows. Because the row with the most valid values sorts first within each (A, B) group, `duplicated` keeps the best row of each group:

df_ordered[!duplicated(df_ordered[c("A", "B")]), ]
#   Id  B  A C D E
# 2  2 12 62 1 1 1
# 3  3  2 62 1 1 1
# 4  4 62 12 1 1 1
# 5  5 55 23 0 0 0
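If you need this in more than one place, the three steps can be wrapped into a small helper. This is just a sketch; the function name `dedup_keep_best` and the arguments `keys`/`vals` are made up here, not part of any package:

```r
# Rebuild the example data from the question
tc <- 'Id  B   A   C  D  E
        1  62  12  0  NA  NA
        2  12  62  1  1   1
        3  2   62  1  1   1
        4  62  12  1  1   1
        5  55  23  0  0   0'
df <- read.table(textConnection(tc), header = TRUE)

dedup_keep_best <- function(df, keys, vals) {
  # Count NAs and zeros per row in the value columns
  bad <- rowSums(is.na(df[vals]) | !df[vals])
  # Order by the key columns, then by badness (fewest NAs/zeros first)
  ord <- df[do.call(order, c(df[keys], list(bad))), ]
  # For each key combination, keep the first (i.e. best) row
  ord[!duplicated(ord[keys]), ]
}

dedup_keep_best(df, keys = c("A", "B"), vals = c("C", "D", "E"))
# rows with Id 4, 5, 3, 2 remain; the NA row (Id 1) is gone
```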

Upvotes: 4
