Mark Miller

Reputation: 13103

R: selecting rows of a data frame based on several criteria

I have created a data frame my.df and wish to select rows (or delete rows) based on several criteria. With this example data frame I want to keep rows 1, 2, 4, 7 and 8. Specifically, I want to:

  1. keep any row containing a number in columns 3, 4 or 5
  2. keep any row in which columns 3-5 are all missing, provided columns 1 and 2 are not blank and do not contain junk

I can do this, but my solution seems overly complex and I am hoping someone may suggest a more efficient approach.

my.df <- data.frame(C1 = c("group1", "group1",     "",     "", "junk", "junk", "group2",       ""),
                    C2 = c(     "A",      "B",     "",     "",     "", "junk",      "B",      "C"),
                    C3 = c(     100,       NA,     NA,     10,     NA,     NA,       NA,       NA),
                    C4 = c(     200,       NA,     NA,     20,     NA,     NA,      100,       NA),
                    C5 = c(     100,       NA,     NA,     30,     NA,     NA,       NA,        5))

my.df

# keep a row when the number of missing observations in columns 3-5 is < 3, or
# when all of columns 3-5 are missing but neither column 1 nor column 2 is blank or 'junk'

df.2 <- my.df[ (rowSums(is.na(my.df[,3:5]))  < (ncol(my.df)-2)) | 
               (rowSums(is.na(my.df[,3:5])) == (ncol(my.df)-2) & my.df[,1] != 'junk' & my.df[,2] != 'junk'  & my.df[,1] != '' & my.df[,2] != '') , ]
df.2

With my actual data, what qualifies as junk can be complex, so here I generalize junk to junk1 and junk2; I still want to keep rows 1, 2, 4, 7 and 8. The code below works.

my.df <- data.frame(C1 = c("group1", "group1",     "",     "", "junk2", "junk1", "group2",       ""),
                    C2 = c(     "A",      "B",     "",     "",      "", "junk1",      "B",      "C"),
                    C3 = c(     100,       NA,     NA,     10,      NA,      NA,       NA,       NA),
                    C4 = c(     200,       NA,     NA,     20,      NA,      NA,      100,       NA),
                    C5 = c(     100,       NA,     NA,     30,      NA,      NA,       NA,        5))

my.df

df.3 <- my.df[ (rowSums(is.na(my.df[,3:5]))  < (ncol(my.df)-2)) | 
               (rowSums(is.na(my.df[,3:5])) == (ncol(my.df)-2)  & 
                my.df[,1] != 'junk1' & my.df[,2] != 'junk1'     & 
                my.df[,1] != 'junk2' & my.df[,2] != 'junk2'     &
                my.df[,1] != '' & my.df[,2] != '') , ]
df.3

Because the strings that qualify as junk become quite varied and complex, here I try to simplify the code a little by using %in% to group the junk values, but I obtain an error.

all.junk <- c("", "junk1", "junk2")

my.df.1 <- my.df[,1]
my.df.2 <- my.df[,2]

my.df.1 <- as.character(my.df.1)
my.df.2 <- as.character(my.df.2)

df.4 <- my.df[ (rowSums(is.na(my.df[,3:5]))  < (ncol(my.df)-2)) | 
               (rowSums(is.na(my.df[,3:5])) == (ncol(my.df)-2) & 
                my.df.1[!(my.df.1%in%all.junk)] & my.df.2[!(my.df.2%in%all.junk)]) , ]
df.4

I could proceed with the working code I have, adding a new condition to df.3 for each character string that qualifies as junk, but I suspect there is a much more efficient solution.

I have found similar questions on Stack Overflow, but none seem to deal with selection criteria as numerous or as complicated as those in this example.

Thank you for any suggestions, particularly regarding the error in df.4.

Upvotes: 3

Views: 11476

Answers (2)

Superbest

Reputation: 26592

If your concern is readability, you could refactor this code:

df.3 <- my.df[ (rowSums(is.na(my.df[,3:5]))  < (ncol(my.df)-2)) | 
           (rowSums(is.na(my.df[,3:5])) == (ncol(my.df)-2)  & 
            my.df[,1] != 'junk1' & my.df[,2] != 'junk1'     & 
            my.df[,1] != 'junk2' & my.df[,2] != 'junk2'     &
            my.df[,1] != '' & my.df[,2] != '') 
    , ]

Into:

# Rows I want
good.rows = (rowSums(is.na(my.df[,3:5]))  < (ncol(my.df)-2)) | 
           (rowSums(is.na(my.df[,3:5])) == (ncol(my.df)-2)  & 
            my.df[,1] != 'junk1' & my.df[,2] != 'junk1'     & 
            my.df[,1] != 'junk2' & my.df[,2] != 'junk2'     &
            my.df[,1] != '' & my.df[,2] != '') 

df.3 <- my.df[good.rows,]

And further:

sums.are.fine = rowSums(is.na(my.df[,3:5]))  < (ncol(my.df)-2)
all.missing   = rowSums(is.na(my.df[,3:5])) == (ncol(my.df)-2)

no.junk = my.df[,1] != 'junk1' & my.df[,2] != 'junk1' &
          my.df[,1] != 'junk2' & my.df[,2] != 'junk2' &
          my.df[,1] != ''      & my.df[,2] != ''

good.rows = sums.are.fine | (all.missing & no.junk)

df.3 <- my.df[good.rows,]

You could also write separate functions, such as a check.if.sums.are.fine(table.to.check) that returns a logical vector, and a check.everything(table.to.check) function that calls all of these and gives the final result.
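
A rough sketch of that idea follows; the two names mentioned above come from the suggestion itself, while the bodies and the extra helpers (check.if.all.missing, check.if.labels.are.clean) are only illustrative:

check.if.sums.are.fine <- function(table.to.check) {
  # TRUE where at least one of columns 3-5 holds a number
  rowSums(is.na(table.to.check[, 3:5])) < (ncol(table.to.check) - 2)
}

check.if.all.missing <- function(table.to.check) {
  # TRUE where columns 3-5 are all missing
  rowSums(is.na(table.to.check[, 3:5])) == (ncol(table.to.check) - 2)
}

check.if.labels.are.clean <- function(table.to.check) {
  # TRUE where columns 1 and 2 are neither blank nor junk
  table.to.check[, 1] != 'junk1' & table.to.check[, 2] != 'junk1' &
  table.to.check[, 1] != 'junk2' & table.to.check[, 2] != 'junk2' &
  table.to.check[, 1] != ''      & table.to.check[, 2] != ''
}

check.everything <- function(table.to.check) {
  # keep rows with some data, or all-missing rows whose labels are clean
  check.if.sums.are.fine(table.to.check) |
    (check.if.all.missing(table.to.check) & check.if.labels.are.clean(table.to.check))
}

df.3 <- my.df[check.everything(my.df), ]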

If the typing part bothers you, one thing I noticed is that for conditions like "can't be junk1, junk2 or blank" you can create a bad.values = character() vector, populate it with every value you don't want, and then simply check which values are not contained in bad.values (this may take some work to add to your solution unless you use a for loop).
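
For reference, a vectorised sketch of that bad.values idea (no for loop needed), applied to the question's second example. Note that the membership test has to stay a logical vector; the df.4 attempt fails because it uses that test to subset the character columns, and the resulting character values cannot be combined with &.

bad.values <- c("", "junk1", "junk2")

keep <- (rowSums(is.na(my.df[, 3:5]))  < (ncol(my.df) - 2)) |
        (rowSums(is.na(my.df[, 3:5])) == (ncol(my.df) - 2) &
         !(my.df[, 1] %in% bad.values) &
         !(my.df[, 2] %in% bad.values))

df.4 <- my.df[keep, ]
df.4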

Upvotes: 1

Alex Brown

Reputation: 42872

This is pretty compact: keep every row that isn't all junk/NAs:

all.junk=c("junk","")
subset(my.df,!(C1%in%all.junk &
               C2%in%all.junk & 
               is.na(C3) & is.na(C4) & is.na(C5)))

outputs

      C1 C2  C3  C4  C5
1 group1  A 100 200 100
2 group1  B  NA  NA  NA
4            10  20  30
7 group2  B  NA 100  NA
8         C  NA  NA   5
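
If I am reading the second example in the question correctly, the same call also handles the junk1/junk2 variant once all.junk is widened to the vector the question already defines:

all.junk <- c("", "junk1", "junk2")
subset(my.df, !(C1 %in% all.junk &
                C2 %in% all.junk &
                is.na(C3) & is.na(C4) & is.na(C5)))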

Upvotes: 3
