Reputation: 362
I have a rollapply function that does something very simple, but over a million data points this simple function is quite slow. I would like to know if it is possible to tell rollapply how to update the result from one window to the next, rather than recomputing the function from scratch on every window.
Concretely, I am performing a rolling window for a basic statistical anomaly detection.
Roll apply function:
minmax <- function(x) { max(x) - min(x) }
invoked by:
mclapply(data[,eval(vars),with=F],
function(x) rollapply(x,width=winSize,FUN=minmax,fill=NA),
mc.cores=8)
Where data is an 8-column data.table and winSize is 300.
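For a reproducible stand-in (the real column names and values differ), something like:
library(data.table)
library(parallel)
library(zoo)

set.seed(1)
# hypothetical stand-in: 8 numeric columns of 1e6 points each
data <- as.data.table(replicate(8, runif(1e6), simplify = FALSE))
vars <- names(data)
winSize <- 300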
This call takes about 2 minutes on 8 cores and is one of the major bottlenecks in the overall computation. However, I can imagine keeping the window contents sorted (by value and index) so that each slide only needs O(log n) comparisons; a rough sketch of that idea follows.
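To illustrate the incremental update I have in mind, here is a sketch using monotonic deques (amortized O(1) per slide; note it is right-aligned rather than centered like rollapply, and a pure-R loop like this is for illustration only, not for speed):
roll_range <- function(x, w) {
  n <- length(x)
  out <- rep(NA_real_, n)
  maxq <- integer(0)  # indices whose x values are kept in decreasing order
  minq <- integer(0)  # indices whose x values are kept in increasing order
  for (i in seq_len(n)) {
    # evict indices that have left the window [i-w+1, i]
    maxq <- maxq[maxq > i - w]
    minq <- minq[minq > i - w]
    # pop dominated entries from the back before pushing i
    while (length(maxq) && x[maxq[length(maxq)]] <= x[i]) maxq <- maxq[-length(maxq)]
    while (length(minq) && x[minq[length(minq)]] >= x[i]) minq <- minq[-length(minq)]
    maxq <- c(maxq, i)
    minq <- c(minq, i)
    # the fronts hold the window max and min once the window is full
    if (i >= w) out[i] <- x[maxq[1]] - x[minq[1]]
  }
  out
}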
Still, I often see posts suggesting moving away from for loops toward lapply. What is the next logical step to optimize further?
Upvotes: 2
Views: 1260
Reputation: 9705
If you really want to edge out as much performance as you can, use Rcpp. Custom loops are a great use case for C++, especially when your function is pretty simple.
First the results, then the code:
microbenchmark::microbenchmark(
minmax = zoo::rollapply(aa, width=100, FUN=minmax, fill=NA),
dblmax = zoo::rollmax(aa, k=100, fill=NA) + zoo::rollmax(-aa, k=100, fill=NA),
cminmax = crollapply(aa, width=100), times = 10
)
Unit: milliseconds
    expr       min         lq       mean    median         uq        max neval cld
  minmax 154.04630 162.728871 188.198416 173.13427 200.928005 298.568673    10   c
  dblmax  37.38127  38.541603  44.818505  41.42796  50.001888  61.024250    10   b
 cminmax   2.31766   2.363676   2.406835   2.39237   2.438109   2.512162    10   a
C++/Rcpp code:
#include <Rcpp.h>
#include <algorithm>
using namespace Rcpp;
// [[Rcpp::export]]
std::vector<double> crollapply(std::vector<double> aa, int width) {
  int n = aa.size();
  if (width > n) throw exception("width too large :(");
  // offsets reproduce zoo's centered alignment for both odd and even widths
  int start_offset = (width - 1) / 2;
  int back_offset = width / 2;
  std::vector<double> results(n);
  int i = 0;
  // leading positions with an incomplete window get NA (fill=NA)
  for (; i < start_offset; i++) {
    results[i] = NA_REAL;
  }
  // full windows: scan each once with the standard library
  for (; i < n - back_offset; i++) {
    double min = *std::min_element(aa.begin() + i - start_offset, aa.begin() + i + back_offset + 1);
    double max = *std::max_element(aa.begin() + i - start_offset, aa.begin() + i + back_offset + 1);
    results[i] = max - min;
  }
  // trailing positions with an incomplete window get NA as well
  for (; i < n; i++) {
    results[i] = NA_REAL;
  }
  return results;
}
R code:
library(dplyr)
library(zoo)
library(microbenchmark)
library(Rcpp)
sourceCpp("~/Desktop/temp.cpp")
minmax <- function(x) max(x) - min(x)
aa <- runif(1e4)
width <- 100
x1 <- zoo::rollapply(aa, width=width, FUN=minmax, fill=NA)
x3 <- crollapply(aa, width=width)
identical(x1,x3)
width <- 101
x1 <- zoo::rollapply(aa, width=width, FUN=minmax, fill=NA)
x3 <- crollapply(aa, width=width)
identical(x1,x3)
microbenchmark::microbenchmark(
minmax = zoo::rollapply(aa, width=100, FUN=minmax, fill=NA),
dblmax = zoo::rollmax(aa, k=100, fill=NA) + zoo::rollmax(-aa, k=100, fill=NA),
cminmax = crollapply(aa, width=100), times = 10
)
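To tie this back to the question, the compiled function drops straight into the mclapply call in place of the rollapply wrapper (a sketch, assuming the data, vars, and winSize objects from the question):
mclapply(data[,eval(vars),with=F],
         function(x) crollapply(x, width=winSize),
         mc.cores=8)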
Upvotes: 4
Reputation: 160447
Not sure if/how this would apply in the mclapply environment, but you can gain a little speedup by employing zoo's optimized rollmax function. There is no complementary rollmin, but since min(x) == -max(-x), you can get the rolling range as rollmax(aa) + rollmax(-aa):
minmax <- function(x) max(x) - min(x)
aa <- runif(1e4)
identical(
zoo::rollapply(aa, width=100, FUN=minmax, fill=NA),
zoo::rollmax(aa, k=100, fill=NA) + zoo::rollmax(-aa, k=100, fill=NA)
)
# [1] TRUE
microbenchmark::microbenchmark(
minmax = zoo::rollapply(aa, width=100, FUN=minmax, fill=NA),
dblmax = zoo::rollmax(aa, k=100, fill=NA) + zoo::rollmax(-aa, k=100, fill=NA)
)
# Unit: milliseconds
#    expr     min      lq     mean   median      uq      max neval
#  minmax 70.7426 76.0469 84.81481 77.99565 81.8047 148.8431   100
#  dblmax 15.6755 17.4501 19.09820 17.93665 18.8650  52.4849   100
(The improvement will depend on the window size, so your results might vary, but I think an optimized function like zoo::rollmax will almost always out-perform calling a UDF on every window.)
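If you want to see how that gap scales on your own data, a quick sweep over window sizes is easy to run (times will vary by machine):
for (k in c(10, 100, 1000)) {
  print(microbenchmark::microbenchmark(
    minmax = zoo::rollapply(aa, width=k, FUN=minmax, fill=NA),
    dblmax = zoo::rollmax(aa, k=k, fill=NA) + zoo::rollmax(-aa, k=k, fill=NA),
    times = 5
  ))
}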
Upvotes: 3