Alvin

Reputation: 351

How to do fast percentile calculation in C++/Rcpp

I have a large vector of doubles. Given a vector of percentiles, such as percentile_vec = c(0.90, 0.91, 0.92, 0.93, 0.94, 0.95), I currently sort the large vector with Rcpp's sort function and then look up the corresponding percentile values. Here is the main code:

// [[Rcpp::export]]
NumericVector sort_rcpp(Rcpp::NumericVector& x)
{
  std::vector<double> tmp = Rcpp::as<std::vector<double>> (x);    // or NumericVector tmp = clone(x);
  std::sort(tmp.begin(), tmp.end());
  return wrap(tmp);
}

// [[Rcpp::export]]
NumericVector percentile_rcpp(Rcpp::NumericVector& x, Rcpp::NumericVector& percentile)
{
  NumericVector tmp_sort = sort_rcpp(x);
  int size_per = percentile.size();
  NumericVector percentile_vec = no_init(size_per);
  for (int ii = 0; ii < size_per; ii++)
  {
    double rank = tmp_sort.size() * percentile[ii];   // 1-based fractional rank
    double rank_round;
    if (rank < 1.0)
    {
      rank_round = 1.0;
    }
    else
    {
      rank_round = std::round(rank);
    }
    percentile_vec[ii] = tmp_sort[static_cast<int>(rank_round) - 1];  // rank_round is at most tmp_sort.size(), so the index stays in bounds
  }
  return percentile_vec;
}
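The index arithmetic inside the loop reduces to this standalone helper (plain C++, no Rcpp; the function name is mine):

```cpp
#include <cmath>

// 1-based rank = round(N * p), clamped to at least 1 (sketch only).
int percentile_rank(int n, double p)
{
    double r = n * p;
    return (r < 1.0) ? 1 : static_cast<int>(std::round(r));
}
// e.g. percentile_rank(1000000, 0.90) -> 900000, so tmp_sort[899999] is read
```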

I also tried calling the R function quantile(x, c(.90, .91, .92, .93, .94, .95)) from Rcpp by using:

sub_percentile <- function (x)
{
  return (quantile(x, c(.90, .91, .92, .93, .94, .95)));
}  

source('C:/Users/~Call_R_function.R')

The test results for x = runif(1E6) are listed below:

microbenchmark(sub_percentile(x)->aa, percentile_rcpp(x, c(.90, .91, .92, .93, .94, .95))->bb)
#Unit: milliseconds
              expr      min       lq     mean   median       uq       max   neval
  sub_percentile(x) 99.00029 99.24160 99.35339 99.32162 99.41869 100.57160   100
 percentile_rcpp(~) 87.13393 87.30904 87.44847 87.40826 87.51547  88.41893   100

I expected a fast percentile calculation, yet I assume std::sort(tmp.begin(), tmp.end()) slows things down. Is there any better way to get a fast result using C++, Rcpp/RcppArmadillo? Thanks.

Upvotes: 8

Views: 7497

Answers (2)

E. Odj

Reputation: 91

Depending on how many percentiles you have to calculate and how large your vectors are, you can do much better (O(N)) than sorting the whole vector (which is O(N log N) at best).

I had to calculate a single percentile of vectors with >= 160K elements, so I did the following:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

typedef std::size_t dim_t;

// Calculates the "percent" percentile into range[1] and the minimum into
// range[0]. Linear interpolation inspired by prctile.m from MATLAB.
void prctile_stl(double* in, const dim_t &len, const double &percent,
                 std::vector<double> &range) {
    double r = (percent / 100.) * len;

    double lower = 0;
    double upper = 0;
    double* min_ptr = NULL;
    dim_t k = 0;

    if (r >= len / 2.) {    // Second half is smaller
        dim_t idx_lo = std::max(r - 1, (double) 0.);
        std::nth_element(in, in + idx_lo, in + len);        // Complexity O(N)
        lower = in[idx_lo];
        if (idx_lo < len - 1) {
            min_ptr = std::min_element(&(in[idx_lo + 1]), in + len);
            upper = *min_ptr;
        }
        else
            upper = lower;
    }
    else {                  // First half is smaller
        double* max_ptr;
        dim_t idx_up = std::ceil(std::max(r - 1, (double) 0.));
        std::nth_element(in, in + idx_up, in + len);        // Complexity O(N)
        upper = in[idx_up];
        if (idx_up > 0) {
            max_ptr = std::max_element(in, in + idx_up);
            lower = *max_ptr;
        }
        else
            lower = upper;
    }

    // Linear interpolation
    k = r + 0.5;        // Implicit floor
    r = r - k;
    range[1] = (0.5 - r) * lower + (0.5 + r) * upper;

    min_ptr = std::min_element(in, in + len);
    range[0] = *min_ptr;
}
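For completeness, the core idea can be exercised without Rcpp; this sketch (my naming, no interpolation) picks the element whose 1-based rank is percent/100 * N via std::nth_element, which matches reading the same slot of a fully sorted copy but in linear rather than O(N log N) time:

```cpp
#include <algorithm>  // std::nth_element
#include <cstddef>
#include <vector>

// Sketch only: take v by value so the caller's data is not reordered.
double percentile_nth(std::vector<double> v, double percent)
{
    std::size_t n = v.size();
    std::size_t k = static_cast<std::size_t>(percent / 100.0 * n); // 1-based rank
    if (k > 0) --k;                       // convert rank to 0-based index
    std::nth_element(v.begin(), v.begin() + k, v.end());
    return v[k];
}
```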

Another alternative is the IQAgent algorithm from Numerical Recipes, 3rd ed. It was originally intended for data streams, but you can adapt it by splitting your large data vector into smaller chunks (e.g. 10K elements) and calculating percentiles for each block (each 10K chunk is sorted). If you process the blocks one at a time, each successive block nudges the percentile estimates a bit, until you get a pretty good approximation at the end. The algorithm gave good results (accurate to the 3rd or 4th decimal) but was still slower than the nth_element implementation.

Upvotes: 2

renonsz

Reputation: 591

The branching in the loop can surely be optimized away; use std::min/std::max calls with integer indices.

I would compute the array index for a given percentile this way:

unsigned PerCentIndex( double pc, unsigned size )
{
    return 0.5 + ( double ) ( size - 1 ) * pc;   // round to nearest 0-based index
}

Then only this line is needed in the middle of the loop above:

percentile_vec[ii] 
 = tmp_sort[ PerCentIndex( percentile[ii], tmp_sort.size() ) ];
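To sanity-check the rounding, the helper can be compiled standalone (the sizes below are just examples for a sorted vector):

```cpp
// Standalone copy of the helper above: rounds (size - 1) * pc
// to the nearest 0-based index of a sorted array.
unsigned PerCentIndex(double pc, unsigned size)
{
    return 0.5 + (double)(size - 1) * pc;
}
// e.g. PerCentIndex(0.90, 1000000) -> 899999 (the 90th-percentile slot)
```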

Upvotes: 1
