unknown

Reputation: 883

Why is the pipe so slow in this case?

I've got a model in R that runs very slowly, and I've traced the problem to a handful of piped dplyr lines sitting inside an ODE solver, so they are executed on every solver step. Does anyone know why the pipe slows things down so much? Is there a faster way to do this?

Example code:

library(dplyr)

IDS <- c(1, 1, 2)
output <- runif(3, 0, 100)
ID_df1 <- data.frame(IDS, output)
ID_df <- ID_df1 %>% 
  group_by(IDS) %>% 
  summarise(totals = sum(output))
portions <- ID_df1 %>% 
  left_join(ID_df, by = "IDS") %>% 
  mutate(portion = output/totals) %>% 
  select(IDS, portion)
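For context, part of the cost is the pipe itself: magrittr's `%>%` adds a small fixed overhead to every call, which is negligible once but adds up inside a solver that evaluates the derivative function thousands of times. A minimal sketch of measuring that per-call overhead (assuming magrittr and microbenchmark are installed):

```r
library(magrittr)
library(microbenchmark)

x <- runif(100)
# compare a direct call with the same call routed through the pipe;
# the timing difference is pure pipe overhead, paid on every call
microbenchmark(
  direct = sum(x),
  piped  = x %>% sum(),
  times = 1000L
)
```

The grouped summarise/join also has a fixed setup cost per invocation, which is why the answers below replace it with a single grouped computation.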

Upvotes: 1

Views: 307

Answers (2)

chinsoon12

Reputation: 25225

Suggestion: use data.table. Some timing comparisons below:

library(data.table)
library(dplyr)
library(microbenchmark)

set.seed(30L)
N <- 1e6
IDS <- sample(LETTERS, N, replace=TRUE)
output <- runif(N, 0, 100)
ID_df1 <- data.frame(IDS, output)

microbenchmark::microbenchmark(
    #as suggested by @Tino
    mtd_dplyr=ID_df1 %>% group_by(IDS) %>% mutate(portion = output / sum(output)),
    mtd_data.table=setDT(ID_df1)[, .(portion = output / sum(output)), by=.(IDS)],
    mtd_base=do.call(rbind, by(ID_df1, ID_df1$IDS, function(x) data.frame(x$IDS, portion=x$output/sum(x$output)))),
    times=10L
)

# Unit: milliseconds
#            expr       min        lq       mean     median        uq       max neval
#       mtd_dplyr   96.9382   99.0871  117.64655  114.47100  133.0421  144.6009    10
#  mtd_data.table   15.7899   21.1913   30.93596   21.37835   25.5830   81.5951    10
#        mtd_base 1191.5829 1245.0176 1392.00927 1369.00735 1450.2118 1859.3708    10
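One caveat worth knowing: `setDT()` converts `ID_df1` to a data.table in place, and the `.(portion = ...)` form builds a new table with only `IDS` and `portion`. If you want to keep the original columns and just add the new one, `:=` assigns by reference within each group (a sketch on toy data):

```r
library(data.table)

DT <- data.table(IDS = c("a", "a", "b"), output = c(10, 30, 5))
# add the portion column by reference, keeping the existing columns;
# sum(output) is computed per group because of the by clause
DT[, portion := output / sum(output), by = IDS]
DT
```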

Upvotes: 1

Benjamin

Reputation: 17369

base R's tapply will run faster.

ID_df1$portion <- unlist(tapply(ID_df1$output, ID_df1$IDS, function(x) x / sum(x)))

For comparison (I'm using a data frame of 10,000 rows and 100 distinct IDS):

set.seed(pi)
IDS <- sample(1:100, size = 10000, replace = TRUE)
output <- runif(10000, 0, 10000)
ID_df1 <- data.frame(IDS, output)

library(data.table)
library(dplyr)
library(microbenchmark)
microbenchmark(
  orig = {
    ID_df <- ID_df1 %>% 
      group_by(IDS) %>% 
      summarise(totals = sum(output))
    portions <- ID_df1 %>% 
      left_join(ID_df, by = "IDS") %>% 
      mutate(portion = output/totals) %>% 
      select(IDS, portion)
  },
  tino = {
    ID_df1 %>% group_by(IDS) %>% mutate(portion = output / sum(output))
  },
  data.table = {
    setDT(ID_df1)[, .(portion = output / sum(output)), by=.(IDS)]
  },
  base = {
    ID_df1$portion <- unlist(tapply(ID_df1$output, ID_df1$IDS, function(x) x / sum(x)))
  }
)

# Unit: microseconds
#        expr       min        lq       mean     median         uq       max neval cld
#        orig 11936.111 12101.798 12705.9227 12310.2980 12914.0975 22949.662   100   c
#        tino  5224.230  5370.854  5636.8930  5558.3875  5734.9235  8466.684   100  b 
#  data.table   569.490   594.856   768.6615   724.4725   777.2565  3279.110   100 a  
#        base   497.937   524.623   606.8760   602.9200   650.2800  1933.098   100 a  
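Note that `unlist(tapply(...))` returns the values in group order, so the assignment above only lines up with the original rows when the data frame is already sorted by `IDS`. Base R's `ave()` computes the same within-group ratios while preserving the original row order, at a similar speed (a sketch on toy data):

```r
# deliberately unsorted IDS to show that row order is preserved
ID_df1 <- data.frame(IDS = c(2, 1, 2, 1), output = c(5, 10, 15, 30))
# ave() applies the function within each group but returns the
# results aligned to the original rows
ID_df1$portion <- ave(ID_df1$output, ID_df1$IDS, FUN = function(x) x / sum(x))
ID_df1$portion  # 0.25 0.25 0.75 0.75
```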

Upvotes: 2
