tjebo

Reputation: 23767

dplyr::case_when equivalent on one column by group

I have hardly ever used data.table before, but judging from many threads it seems to be the tool of choice for big data frames.

In a new column, I'd like to assign values depending on the presence of certain strings in any row by group (!), in a given order:

First look for string 1, then string 2, then string 3, etc., and assign values depending on the result.

See the desired output with dplyr::case_when below (I am only showing the first 6 rows per group). It works for smaller data frames, but my code freezes on my 2.5 million row data frame with several thousand groups.

I was surprised not to find any existing thread on this, and I feel my question must be a dupe.

sample data

library(data.table)
set.seed(1)
mydat <- data.table(group = rep(letters[1:26], each = 3), value = sample(c('find','this','string'), 78, replace = TRUE))

desired result

library(dplyr)

des_dat <- mydat %>% 
             group_by(group) %>% 
               mutate(found = case_when(any(value == 'string') ~ 'A', 
                                        any(value == 'this') ~ 'B', 
                                        TRUE ~ 'C')) %>% ungroup

des_dat %>% group_by(found) %>% slice(1:6) #just to show the first 6 results by group
#> # A tibble: 18 x 3
#> # Groups:   found [3]
#>    group value  found
#>    <chr> <chr>  <chr>
#>  1 a     find   A    
#>  2 a     string A    
#>  3 a     find   A    
#>  4 b     this   A    
#>  5 b     find   A    
#>  6 b     string A    
#>  7 e     find   B    
#>  8 e     find   B    
#>  9 e     this   B    
#> 10 f     this   B    
#> 11 f     this   B    
#> 12 f     this   B    
#> 13 h     find   C    
#> 14 h     find   C    
#> 15 h     find   C    
#> 16 x     find   C    
#> 17 x     find   C    
#> 18 x     find   C

Created on 2019-11-08 by the reprex package (v0.3.0)

Upvotes: 0

Views: 351

Answers (3)

Henrik

Reputation: 67778

You may use match and max to index a vector of 'found' values:

# match() maps find/this/string to 1/2/3; max() picks the highest-priority
# string present in the group; LETTERS[3:1] maps 3 -> "A", 2 -> "B", 1 -> "C"
mydat[ , found := LETTERS[3:1][max(match(value, c("find", "this", "string")))],
       by = group]

all.equal(des_dat$found, mydat$found)
# TRUE

When using the larger mydat from @chinsoon12's answer, speed was similar to their setkey alternative.
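For reference, a minimal timing sketch (assuming the larger mydat as built in that answer; DT2 is just a working copy so the original is not modified):

DT2 <- copy(mydat)  # 'mydat' as defined in chinsoon12's answer below
system.time(
  DT2[, found := LETTERS[3:1][max(match(value, c("find", "this", "string")))],
      by = group]
)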


Also note that fcase, a case_when-like function for data.table, is a work in progress.
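For illustration, a sketch of how the grouped logic might look with fcase once it lands (hypothetical at the time of writing; it mirrors the nested fifelse approach below):

# hypothetical sketch: fcase is not yet released at the time of writing
mydat[, found := fcase(any(value == "string"), "A",
                       any(value == "this"),   "B",
                       default = "C"),
      by = group]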

Upvotes: 1

chinsoon12

Reputation: 25225

This should be quite fast: setkey sorts using radix sort, the subsequent joins use binary search to locate the strings, and found is updated by reference:

library(data.table)
set.seed(1)
nr <- 2.5e6
ng <- 1e5
mydat <- data.table(group = sample(ng, nr, TRUE), 
    value = sample(c('find','this','string'), nr, TRUE))

system.time({

    # key on 'value' (radix sort); initialise everything to "C"
    setkey(mydat, value)[, found := "C"]

    # binary-search joins on the key find the groups containing "this",
    # then "string"; 'found' is overwritten by reference in priority order
    mydat[group %in% mydat[.("this"), unique(group)], found := "B"][
        group %in% mydat[.("string"), unique(group)], found := "A"]
})
#   user  system elapsed 
#   0.86    0.03    0.52 

Please use %chin% instead of %in% if your group column is of character class.
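For example, a minimal sketch assuming a character group column:

# %chin% is data.table's fast match for character vectors
mydat[group %chin% mydat[.("this"), unique(group)], found := "B"]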


Edit: actually, a faster method is to use data.table::fifelse (available from version 1.12.4 onwards):

# DT1 is a copy of mydat (defined in the timing code below)
DT1[, found := fifelse(any(value=="string"), "A", 
        fifelse(any(value=="this"), "B", "C")), group] 

timing code:

library(data.table)
set.seed(1)
nr <- 2.5e6
ng <- 1e5
mydat <- data.table(group = sample(ng, nr, TRUE), 
    value = sample(c('find','this','string'), nr, TRUE))
DT0 <- copy(mydat)
DT1 <- copy(mydat)

mtd0 <- function() {
    setkey(DT0, value)[, found := "C"]
    DT0[group %in% DT0[.("this"), unique(group)], found := "B"][
        group %in% DT0[.("string"), unique(group)], found := "A"]
}

mtd1 <- function() {
   DT1[, found := fifelse(any(value=="string"), "A", 
            fifelse(any(value=="this"), "B", "C")), group] 
}

bench::mark(mtd0(), mtd1(), check=FALSE)
identical(setorder(mtd0(), group, value), setorder(mtd1(), group, value))
# [1] TRUE

Timings when ng <- 1e5:

# A tibble: 2 x 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time result     memory   time  gc     
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm> <list>     <list>   <lis> <list> 
1 mtd0()        1.54s    1.54s     0.648   329.8MB     2.59     1     4      1.54s <df[,3] [~ <df[,3]~ <bch~ <tibbl~
2 mtd1()      361.1ms 362.72ms     2.76     33.5MB     0        2     0   725.44ms <df[,3] [~ <df[,3]~ <bch~ <tibbl~

And timings when ng <- 1e2:

# A tibble: 2 x 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time result     memory   time  gc     
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm> <list>     <list>   <lis> <list> 
1 mtd0()        1.51s    1.51s     0.662   321.6MB     2.65     1     4      1.51s <df[,3] [~ <df[,3]~ <bch~ <tibbl~
2 mtd1()     120.89ms 466.93ms     2.14     48.1MB     1.07     2     1   933.86ms <df[,3] [~ <df[,3]~ <bch~ <tibbl~

And timings when ng <- 2.5e6:

# A tibble: 2 x 13
  expression    min  median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time result      memory     time  gc     
  <bch:expr> <bch:> <bch:t>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm> <list>      <list>     <lis> <list> 
1 mtd0()      2.78s   2.78s     0.360   282.2MB    1.08      1     3      2.78s <df[,3] [2~ <df[,3] [~ <bch~ <tibbl~
2 mtd1()      5.12s   5.12s     0.195    64.8MB    0.782     1     4      5.12s <df[,3] [2~ <df[,3] [~ <bch~ <tibbl~

Upvotes: 3

bouncyball

Reputation: 10771

I wonder if it would be more efficient to calculate your summary table first, using data.table, and then merge it back to your original data?

# helper: "A" if any row in the group is 'string', else "B" if any is
# 'this', else "C"
fun1 <- function(x) ifelse(any(x == 'string'), 'A', ifelse(any(x == 'this'), 'B', 'C'))
# one row per group with its 'found' label
mydat_summary <- mydat[, .(found = fun1(value)), by = group]
# join the labels back onto the original rows
newdat <- merge(mydat, mydat_summary, by = 'group')
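A quick sanity check against the dplyr result from the question might look like this (a sketch; both tables are sorted first, since merge changes the row order):

# sketch: compare with des_dat from the question
setDT(des_dat)
all.equal(setorder(newdat, group, value)$found,
          setorder(des_dat, group, value)$found)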

Upvotes: 1
