Reputation: 2575
How can I run comboGeneral with all possible m using data.table to get all possible variable combinations? Then, how can I calculate the distinct count in all the dataframes subsetted using these variable combinations?
Here is a purrr and dplyr version. I need nms and counts using data.table.
library(data.table); library(dplyr); library(magrittr); library(RcppAlgos); library(purrr)
num_m <- seq_len(ncol(mtcars))
nam_list <- names(mtcars)
nms <- map(num_m, ~comboGeneral(nam_list, m = .x, FUN = c)) %>% unlist(recursive = FALSE)
counts <- map_dbl(nms, ~(mtcars %>% select(.x) %>% n_distinct()))
Upvotes: 2
Views: 165
Reputation: 34763
It's not clear what you're hoping to accomplish by using data.table specifically for the first part. comboGeneral is from RcppAlgos, so I assume it's optimized pretty heavily; combn in base R is the alternative (this is not really something data.table would have any implementation for):
nms = unlist(lapply(num_m, combn, x = nam_list, simplify = FALSE), recursive = FALSE)
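For reference, nms is just a flat list of character vectors, one per non-empty subset of the column names; with mtcars's 11 columns that's 2^11 - 1 = 2047 entries. A quick base-R sanity check:

```r
nam_list <- names(datasets::mtcars)
num_m <- seq_along(nam_list)

# One character vector per non-empty subset of the 11 column names
nms <- unlist(lapply(num_m, combn, x = nam_list, simplify = FALSE),
              recursive = FALSE)

length(nms)   # 2^11 - 1 = 2047
nms[[1]]      # "mpg"
```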
With that in hand, there are a few ways in data.table:
mtcars = as.data.table(mtcars)
counts = sapply(nms, uniqueN, x = mtcars)
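This works through R's argument matching: uniqueN()'s first two formals are x and by, so with x = mtcars supplied by name, each element of nms lands on by. Spelled out for one subset (the column pair here is just an illustration):

```r
library(data.table)
DT <- as.data.table(datasets::mtcars)

# sapply(nms, uniqueN, x = DT) calls, for each nm in nms:
uniqueN(x = DT, by = c("cyl", "gear"))  # number of distinct (cyl, gear) pairs
```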
Or
sapply(nms, function(nm) nrow(mtcars[ , TRUE, keyby = nm]))
Or
sapply(nms, function(nm) nrow(unique(mtcars, by = nm)))
It seems that the first option is not only the most concise but also the most efficient:
library(microbenchmark)
microbenchmark(times = 100L,
map_dbl(nms, ~(mtcars %>% select(.x) %>% n_distinct())),
sapply(nms, uniqueN, x = mtcars),
sapply(nms, function(nm) nrow(mtcars[ , TRUE, keyby = nm])),
sapply(nms, function(nm) nrow(unique(mtcars, by = nm))))
# Unit: milliseconds
#                                                        expr        min         lq       mean     median         uq       max neval
#    map_dbl(nms, ~(mtcars %>% select(.x) %>% n_distinct())) 2246.10862 2365.33801 2469.50648 2448.44821 2544.00350 3530.6513   100
#                            sapply(nms, uniqueN, x = mtcars)   66.16144   68.95391   73.28518   71.54861   75.85161  118.5919   100
#  sapply(nms, function(nm) nrow(mtcars[, TRUE, keyby = nm])) 1659.20425 1701.79188 1796.30372 1766.59618 1825.97374 2881.2376   100
#     sapply(nms, function(nm) nrow(unique(mtcars, by = nm)))  102.42203  106.87100  113.63032  111.28377  118.22441  174.2691   100
Regarding speeding up the first step, you can get about a 10% speed-up by dropping the sugar of map and going for raw lapply:
microbenchmark(times = 1000L,
lapply(num_m, combn, x = nam_list, simplify = FALSE),
map(num_m, ~comboGeneral(nam_list, m = .x, FUN = c)),
lapply(num_m, function(m) comboGeneral(nam_list, m, FUN = c)))
# Unit: microseconds
#                                                           expr      min        lq      mean   median        uq      max neval
#           lapply(num_m, combn, x = nam_list, simplify = FALSE) 1718.994 1847.3710 2088.7454 1921.884 2016.0275 7789.501  1000
#           map(num_m, ~comboGeneral(nam_list, m = .x, FUN = c))  564.076  629.5120  713.8342  661.045  709.4650 3800.253  1000
#  lapply(num_m, function(m) comboGeneral(nam_list, m, FUN = c))  473.135  525.2655  593.7732  550.246  583.7005 5190.982  1000
Note: We cannot use lapply(num_m, comboGeneral, v = nam_list, FUN = c) because FUN will be interpreted as the argument to lapply, not to comboGeneral.
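To see that matching behaviour in isolation, here's a minimal sketch with a stand-in generator (gen is a made-up function for illustration, not part of any package):

```r
gen <- function(n, FUN = identity) FUN(seq_len(n))

# The anonymous wrapper keeps FUN bound to gen:
lapply(2:3, function(n) gen(n, FUN = rev))   # list(2:1, 3:1)

# Passed directly, FUN = rev is exact-matched to lapply's own FUN argument,
# and gen is forwarded via ... as an extra argument to rev() -- an error:
try(lapply(2:3, gen, FUN = rev))
```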
Upvotes: 3