Mark

Reputation: 10964

How do I subsample data by group efficiently?

I have a problem similar to the one explained in this question. As in that question, I have a data frame with 3 columns (id, group, value), and I want to take n samples with replacement from each group to produce a smaller data frame with n samples per group.

However, I am running hundreds of subsamples inside simulation code, and the ddply-based solution is too slow to use there. I tried rewriting it as a simple loop to see if I could get better performance, but it is still slow (no better than the ddply solution, if not worse). Below is my code. Can it be improved for performance?

# Producing an example data frame
dfsize <- 10
groupsize <- 7
test.frame.1 <- data.frame(
  id = 1:dfsize,
  group = rep(1:groupsize, each = ceiling(dfsize / groupsize))[1:dfsize],
  junkdata = sample(1:10000, size = dfsize)
)


# Main function for subsampling: for each group, sample row indices and
# copy the sampled rows into a preallocated copy of the input
sample.from.group <- function(df, dfgroup, size, replace) {
  outputsize <- 1
  newdf <- df  # preallocation assumes the sample is not larger than the original;
               # if it is (as in the example call below), newdf grows row by row
  uniquegroups <- unique(dfgroup)
  for (uniquegroup in uniquegroups) {
    dataforgroup <- which(dfgroup == uniquegroup)
    mysubsample <- df[sample(dataforgroup, size, replace), ]
    sizeofsample <- nrow(mysubsample)
    newdf[outputsize:(outputsize + sizeofsample - 1), ] <- mysubsample
    outputsize <- outputsize + sizeofsample
  }
  return(newdf[1:(outputsize - 1), ])
}

# Using the function
sample.from.group(test.frame.1, test.frame.1$group, 100, replace = TRUE)
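
To measure this, a minimal timing sketch with system.time (repeating the call on the example objects above, roughly as the simulation would) could look like:

# Rough timing: repeat the subsample call many times, as in the simulation
system.time(
  for (i in 1:100) {
    sample.from.group(test.frame.1, test.frame.1$group, 100, replace = TRUE)
  }
)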

Upvotes: 2

Views: 645

Answers (2)

hadley

Reputation: 103898

Here are two plyr-based solutions:

library(plyr)

dfsize <- 1e4
groupsize <- 7
testdf <- data.frame(
  id = seq_len(dfsize),
  group = rep(1:groupsize, length = dfsize),
  junkdata = sample(1:10000, size = dfsize))

sample_by_group_1 <- function(df, dfgroup, size, replace) {
  ddply(df, dfgroup, function(x) {
    # sample within the group subset x, not the full data frame
    x[sample(nrow(x), size = size, replace = replace), , drop = FALSE]
  })
}

sample_by_group_2 <- function(df, dfgroup, size, replace) {
  idx <- split_indices(df[[dfgroup]])
  subs <- lapply(idx, sample, size = size, replace = replace)

  df[unlist(subs, use.names = FALSE), , drop = FALSE]
}

library(microbenchmark)
microbenchmark(
  ddply = sample_by_group_1(testdf, "group", 100, replace = TRUE),
  plyr = sample_by_group_2(testdf, "group", 100, replace = TRUE)
)

# Unit: microseconds
#   expr  min   lq median   uq   max neval
#  ddply 4488 4723   5059 5360 36606   100
#   plyr  443  487    507  536 31343   100

The second approach is much faster because it does the subsetting in a single step: if you can figure out how to do the subsetting in one step, that's usually an easy way to get better performance.
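
For comparison, the same single-step idea can be written in base R without plyr. This is just a sketch of the principle, using base split in place of plyr::split_indices:

sample_by_group_base <- function(df, dfgroup, size, replace) {
  # Split row indices by group, sample within each group, then subset once
  idx  <- split(seq_len(nrow(df)), df[[dfgroup]])
  subs <- lapply(idx, sample, size = size, replace = replace)
  df[unlist(subs, use.names = FALSE), , drop = FALSE]
}

sample_by_group_base(testdf, "group", 100, replace = TRUE)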

Upvotes: 3

Thomas

Reputation: 44527

I think this is cleaner and possibly faster:

z <- sapply(unique(test.frame.1$group), FUN = function(x) {
  sample(which(test.frame.1$group == x), 100, TRUE)
})
out <- test.frame.1[z, ]
out
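
One detail worth noting (my gloss, not part of the original answer): sapply simplifies the seven per-group index vectors into a 100 x 7 matrix, and indexing a data frame's rows with a numeric matrix treats it as a plain vector, so out contains 100 rows per group:

dim(z)     # 100 7: one column of sampled row indices per group
nrow(out)  # 700: the matrix z is used as a length-700 row index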

Upvotes: 3
