Uri Laserson

Reputation: 2451

How do I take subsets of a data frame according to a grouping in R?

I have an aggregation problem which I cannot figure out how to perform efficiently in R.

Say I have the following data:

group1 <- c("a","b","a","a","b","c","c","c","c",
            "c","a","a","a","b","b","b","b")
group2 <- c(1,2,3,4,1,3,5,6,5,4,1,2,3,4,3,2,1)
value  <- c("apple","pear","orange","apple",
            "banana","durian","lemon","lime",
            "raspberry","durian","peach","nectarine",
            "banana","lemon","guava","blackberry","grape")
df <- data.frame(group1,group2,value)

I am interested in sampling from the data frame df such that I randomly pick only a single row from each combination of factors group1 and group2.

As you can see, the results of table(df$group1,df$group2)

  1 2 3 4 5 6
a 2 1 2 1 0 0
b 2 2 1 1 0 0
c 0 0 1 1 2 1

show that some combinations are seen more than once, while others are never seen. For those that are seen more than once (e.g., group1="a" and group2=3), I want to randomly pick only one of the corresponding rows and return a new data frame that has only that subset of rows. That way, each possible combination of the grouping factors is represented by only a single row in the data frame.
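To make the goal concrete, here is a rough base-R sketch of the operation I am after (surely not the efficient way, but it shows the intended result, using the df defined above):

```r
# Split df into one piece per observed (group1, group2) combination,
# then keep a single randomly chosen row from each piece.
set.seed(42)
pieces <- split(df, list(df$group1, df$group2), drop = TRUE)  # drop empty combos
picked <- do.call(rbind, lapply(pieces, function(d) d[sample(nrow(d), 1), ]))
nrow(picked)  # 12 rows: one per observed combination
```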

One important aspect here is that my actual data sets can contain anywhere from 500,000 rows to >2,000,000 rows, so it is important to be mindful of performance.

I am relatively new at R, so I have been having trouble figuring out how to generate this structure correctly. One attempt looked like this (using the plyr package):

library(plyr)

choice <- function(x, label) {
    cbind(x[sample(1:nrow(x), 1), ], data.frame(state = label))
}

df <- ddply(df[, c("group1", "group2", "value")],
            .(group1, group2),
            choice,
            label = "test")

Note that in this case, I am also adding an extra column to the data frame called "state", whose value is supplied through the extra label argument to ddply. However, I killed this run after about 20 minutes.

In other cases, I have tried using aggregate or by or tapply, but I never know exactly what the specified function is getting, what it should return, or what to do with the result (especially for by).

I am trying to switch from Python to R for exploratory data analysis, but this type of aggregation is crucial for me. In Python, I can perform these operations very rapidly, but it is inconvenient, as I have to write a separate script/data structure for each different type of aggregation I want to perform.

I want to love R, so please help! Thanks!

Uri

Upvotes: 3

Views: 575

Answers (2)

IRTFM

Reputation: 263481

One more way:

with(df, tapply(value, list( group1,  group2), length))
   1  2 3 4  5  6
a  2  1 2 1 NA NA
b  2  2 1 1 NA NA
c NA NA 1 1  2  1
# Now use tapply to sample within groups.
# The `resample` fn is from the `sample` help page: it avoids sample()'s
# surprising behavior when a group holds a single numeric value
# (sample(5) samples from 1:5, not from c(5)).
resample <- function(x, ...) x[sample.int(length(x), ...)]
#Create a row index
df$idx <- 1:NROW(df)
rowidxs <- with(df,  unique( c(    # the `c` function will make a matrix into a vector
              tapply(idx, list( group1,  group2),
                            function (x) resample(x, 1) ))))
rowidxs
# [1]  1  5 NA 12 16 NA  3 15  6  4 14 10 NA NA  7 NA NA  8
df[rowidxs[!is.na(rowidxs)] , ]

Upvotes: -1

Ramnath

Reputation: 55735

Here is the plyr solution

set.seed(1234)
ddply(df, .(group1, group2), summarize, 
     value = value[sample(length(value), 1)])

This gives us

   group1 group2      value
1       a      1      apple
2       a      2  nectarine
3       a      3     banana
4       a      4      apple
5       b      1      grape
6       b      2 blackberry
7       b      3      guava
8       b      4      lemon
9       c      3     durian
10      c      4     durian
11      c      5  raspberry
12      c      6       lime

EDIT. With a data frame that big, you are better off using data.table

library(data.table)
dt = data.table(df)
dt[,list(value = value[sample(length(value), 1)]),'group1, group2']
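As an aside, the same grouped sampling can also be written with the `.SD` idiom (a sketch assuming a reasonably current data.table version), which keeps every column of the sampled row rather than just value:

```r
library(data.table)

dt <- data.table(df)
set.seed(1234)
# .SD is the per-group subset of dt; .N is that subset's row count,
# so .SD[sample(.N, 1)] keeps one randomly chosen row per group.
dt[, .SD[sample(.N, 1)], by = list(group1, group2)]
```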

EDIT 2: Performance comparison: data.table is ~15x faster

group1 = sample(letters, 1000000, replace = T)
group2 = sample(LETTERS, 1000000, replace = T)
value  = runif(1000000, 0, 1)
df     = data.frame(group1, group2, value)
dt     = data.table(df)

f1_dtab = function() {
    dt[, list(value = value[sample(length(value), 1)]), 'group1, group2']
}

f2_plyr = function() {
    ddply(df, .(group1, group2), summarize,
          value = value[sample(length(value), 1)])
}

f3_by = function() {
    do.call(rbind, by(df, list(grp1 = df$group1, grp2 = df$group2),
                      FUN = function(x) x[sample(nrow(x), 1), ]))
}


library(rbenchmark)
benchmark(f1_dtab(), f2_plyr(), f3_by(), replications = 10)

      test  replications elapsed relative
  f1_dtab()           10   4.764  1.00000    
  f2_plyr()           10  68.261 14.32851    
    f3_by()           10  67.369 14.14127 

Upvotes: 7
