Reputation: 1171
I rewrote a pure R function in Rcpp for speed.
Here is the function:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double es_loss_c(NumericVector b, NumericVector Y, NumericMatrix X, double alpha, double delta) {
  int n = X.nrow(); // Number of observations
  int k = X.ncol(); // Number of parameters
  NumericVector b_q = b[Range(0, k - 1)];     // Quantile parameters
  NumericVector b_e = b[Range(k, 2 * k - 1)]; // Expected shortfall parameters
  // Initialize required variables
  NumericVector Xi;
  double out, Yi, Xb_q, Xb_e, u, exp_e, H;
  for (int i = 0; i < n; i++) {
    Yi = Y(i);
    Xi = X(i, _);
    // Pre-store some values
    Xb_q = sum(Xi * b_q);
    Xb_e = sum(Xi * b_e);
    u = Yi - Xb_q;
    exp_e = exp(Xb_e);
    // Indicator function or its smooth approximation
    if (delta > 0) {
      H = 1 / (1 + exp(delta * u));
    } else {
      H = u <= 0;
    }
    // Shortfall loss
    out += (alpha - H) * u + exp_e / (1 + exp_e) * (-1 / alpha * H * u + Xb_e - Xb_q) - log(1 + exp_e);
  }
  return out / n;
}
You can test the function with:
set.seed(1)
X <- cbind(1, rnorm(1000))
Y <- X %*% c(0, 1) + rnorm(1000)
b0 <- rnorm(4)
es_loss_c(b0, Y, X, 0.01, 100)
[1] 64.22035
This works, and the result matches the pure R implementation up to about 1e-15.
But then something really strange happens:
es_loss_c(b0, Y, X, 0.01, 100) - es_loss_c(b0, Y, X, 0.01, 100)
[1] -64.22035
es_loss_c(b0, Y, X, 0.01, 100) + es_loss_c(b0, Y, X, 0.01, 100)
[1] 192.6611
es_loss_c(b0, Y, X, 0.01, 100) / es_loss_c(b0, Y, X, 0.01, 100)
[1] 0.5
Can you explain to me what is happening here?
Upvotes: 1
Views: 112
Reputation: 25638
You forgot to initialize the accumulator: add out = 0; before the for loop (or declare it as double out = 0;).
Reading an uninitialized local variable is undefined behavior in C++. In your case the stack slot used for out apparently still holds the previous call's result, so each call adds the new loss on top of the old one instead of starting from zero — which is why the second call in each of your expressions returns twice the value, giving f - f = -f, f + f = 3f, and f / f = 0.5.
With the initialization in place, every call to es_loss_c produces identical results.
Upvotes: 2