Reputation: 517
I want to compute the external studentized residuals of a dataset {x,y} of size n in R given the following constraints:
The R code must be fast because it will be run extensively (at least 10^9 times) on multiple data sets with n in [10^3, 10^6]. This question is part of a larger project for estimating a custom statistic that requires the studentized residuals. The most computationally demanding part is the one presented here, so solving it would dramatically improve overall efficiency.
To obtain the external studentized residuals, one typically runs lm() and then rstudent(). The R function uses an approach that avoids running n regressions to estimate the studentized residuals, which saves a lot of execution time. However, I prefer not to use lm() because I only need the residuals, without all the fancy additional stuff that comes with it (thus saving some more execution time).
When trying to decipher the R source code for the external residuals in lm(), I found it somewhat obscure, as it calls code from other source files (an example is the influence() function). Thus, at this time I have failed to collect enough information to replicate that code section from the source alone.
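For context, the analytical shortcut is a standard identity: the leave-one-out residual variance can be recovered from a single full fit, so no extra regressions are needed. Here is a minimal sketch of that identity (my illustration, not the actual source code of rstudent()):

set.seed(1)
x <- rnorm(100); y <- rnorm(100)
fit <- lm(x ~ y)
r  <- residuals(fit)
h  <- hatvalues(fit)                              # leverages h_ii
df <- df.residual(fit)                            # n - 2 here
s2 <- sum(r^2) / df                               # full-fit residual variance
s2_i  <- (df * s2 - r^2 / (1 - h)) / (df - 1)     # leave-one-out variances, no refits
t_ext <- r / sqrt(s2_i * (1 - h))                 # external studentized residuals
all.equal(unname(t_ext), unname(rstudent(fit)))   # expected TRUE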
The following related topic was found on Stack Overflow: How to compute Studentized Residuals in Python?
An R implementation of the Python procedure, including a minimal example, is given below (corrected by @Stéphane Laurent, see answers):
n = 10
set.seed(1)
x = rnorm(n)
y = rnorm(n)
m = 2                                    # number of model parameters
mean_y = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_y) %*% (y - mean_y)
beta_1 = ((y - mean_y) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_y
x_hat = beta_0 + c(beta_1) * y           # fitted values of the regression x ~ y
residuals = x - x_hat
h_ii = ((y - mean_y) ^ 2) / c(diff_mean_sqr) + (1 / n)   # leverages
# leave-one-out residual standard errors via n explicit regressions
var_e = sqrt(vapply(1:n, function(i){
  fit <- lm.fit(cbind(1, y[-i]), x[-i])
  sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
reg = rstudent(lm(x ~ y))
res = cbind(reg, studentized_residuals)
This produces the following differences:
index reg studentized_residuals
1 -0.595911898846465 -0.581348373714385
2 0.116208945967327 0.116097011762269
3 -2.04779452591111 -1.61939642040734
4 2.26350621688535 1.71995630000724
5 0.603322309518977 0.588222428131761
6 -1.5460639774285 -1.33486217871738
7 0.367900050364855 0.364393996552621
8 1.14745971090533 1.05271762293388
9 0.823888320713653 0.786630743176311
10 -0.449839343257121 -0.443475039943641
The following R attempt has been tested on arbitrary data sets, for illustration purposes only. It uses lm() / rstudent() and is far too slow for our practical application. The two parameters n1 and n2 correspond to the number of iterations and the size of the vector (denoted n above), respectively. To match our problem, we would typically pick n1 in [10^6, 10^9] and n2 in [10^3, 10^6]:
Stud = function(n1, n2){
  res = data.frame(matrix(vector(), n2, n1))
  for(i in 1 : n1){
    x = rnorm(n2)
    y = rnorm(n2)
    reg = lm(x ~ y)
    res[, i] = rstudent(reg)
  }
  res
}
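For illustration, the driver might be invoked as follows (a sketch with deliberately small parameters; the real use case would push n1 toward 10^9 and n2 toward 10^6):

res <- Stud(n1 = 10, n2 = 1000)  # 10 simulated data sets of size 1000
dim(res)                         # 1000 x 10: one column of rstudent values per data set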
Here is a complete benchmark in which various functions from Stack Overflow answers are tested against lm() with the objective of gathering the external studentized residuals. Gathering these residuals requires running n regressions. Results are given after the code, for 100 and 500 replications.
#Packages
install.packages("Rcpp")
library(Rcpp)
install.packages("RcppArmadillo")
library(RcppArmadillo)
install.packages("RcppEigen")
library(RcppEigen)
install.packages("stats")
library(stats)
install.packages("speedglm")
library(speedglm)
install.packages("Rfast")
library(Rfast)
install.packages("rbenchmark")
library(rbenchmark)
## start from SEXP, most conversions, longest code
src <- '
Rcpp::List fLmSEXP(SEXP Xs, SEXP ys) {
Rcpp::NumericMatrix Xr(Xs);
Rcpp::NumericVector yr(ys);
int n = Xr.nrow(), k = Xr.ncol();
arma::mat X(Xr.begin(), n, k, false);
arma::colvec y(yr.begin(), yr.size(), false);
int df = n - k;
// fit model y ~ X, extract residuals
arma::colvec coef = arma::solve(X, y);
arma::colvec res = y - X*coef;
double s2 = std::inner_product(res.begin(), res.end(),
res.begin(), 0.0)/df;
// std.errors of coefficients
arma::colvec sderr = arma::sqrt(s2 *
arma::diagvec(arma::pinv(arma::trans(X)*X)));
return Rcpp::List::create(Rcpp::Named("coefficients")=coef,
Rcpp::Named("stderr") =sderr,
Rcpp::Named("df") =df,
Rcpp::Named("residuals") =res);
}
'
cppFunction(code=src, depends="RcppArmadillo")
## start from Rcpp types as early RcppArmadillo examples did
src <- '
Rcpp::List fLmTwoCasts(Rcpp::NumericMatrix Xr, Rcpp::NumericVector yr) {
int n = Xr.nrow(), k = Xr.ncol();
arma::mat X(Xr.begin(), n, k, false);
arma::colvec y(yr.begin(), yr.size(), false);
int df = n - k;
// fit model y ~ X, extract residuals
arma::colvec coef = arma::solve(X, y);
arma::colvec res = y - X*coef;
double s2 = std::inner_product(res.begin(), res.end(),
res.begin(), 0.0)/df;
// std.errors of coefficients
arma::colvec sderr = arma::sqrt(s2 *
arma::diagvec(arma::pinv(arma::trans(X)*X)));
return Rcpp::List::create(Rcpp::Named("coefficients")=coef,
Rcpp::Named("stderr") =sderr,
Rcpp::Named("df") =df,
Rcpp::Named("residuals") =res);
}
'
cppFunction(code=src, depends="RcppArmadillo")
## start from Armadillo types
src <- '
Rcpp::List fLmOneCast(arma::mat X, arma::colvec y) {
int df = X.n_rows - X.n_cols;
// fit model y ~ X, extract residuals
arma::colvec coef = arma::solve(X, y);
arma::colvec res = y - X*coef;
double s2 = std::inner_product(res.begin(), res.end(),
res.begin(), 0.0)/df;
// std.errors of coefficients
arma::colvec sderr = arma::sqrt(s2 *
arma::diagvec(arma::pinv(arma::trans(X)*X)));
return Rcpp::List::create(Rcpp::Named("coefficients")=coef,
Rcpp::Named("stderr") =sderr,
Rcpp::Named("df") =df,
Rcpp::Named("residuals") =res);
}
'
cppFunction(code=src, depends="RcppArmadillo")
## start from Armadillo types passed as constant references
src <- '
Rcpp::List fLmConstRef(const arma::mat & X, const arma::colvec & y) {
int df = X.n_rows - X.n_cols;
// fit model y ~ X, extract residuals
arma::colvec coef = arma::solve(X, y);
arma::colvec res = y - X*coef;
double s2 = std::inner_product(res.begin(), res.end(),
res.begin(), 0.0)/df;
// std.errors of coefficients
arma::colvec sderr = arma::sqrt(s2 *
arma::diagvec(arma::pinv(arma::trans(X)*X)));
return Rcpp::List::create(Rcpp::Named("coefficients")=coef,
Rcpp::Named("stderr") =sderr,
Rcpp::Named("df") =df,
Rcpp::Named("residuals") =res);
}
'
cppFunction(code=src, depends="RcppArmadillo")
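Before benchmarking, a quick sanity check that the four compiled variants agree with lm.fit() (my addition, not part of the original benchmark):

set.seed(42)
X  <- cbind(1, rnorm(20))
yv <- rnorm(20)
all.equal(c(fLmSEXP(X, yv)$residuals), unname(lm.fit(X, yv)$residuals))    # expected TRUE
all.equal(c(fLmOneCast(X, yv)$residuals), c(fLmConstRef(X, yv)$residuals)) # expected TRUE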
#Benchmark
data = benchmark("OneCast" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- fLmOneCast(cbind(1, y[-i]), x[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"TwoCast" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- fLmTwoCasts(cbind(1, y[-i]), x[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Const" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- fLmConstRef(cbind(1, y[-i]), x[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Sexp" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- fLmSEXP(cbind(1, y[-i]), x[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Fast" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- fastLm(x[-i] ~ y[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Speed" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- speedlm(x[-i] ~ y[-i], fitted = TRUE)
sum((x[-i] - fit$fitted.values)^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
".Fit" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- lm.fit(cbind(1, y[-i]), x[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Fit" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- lmfit(cbind(1, y[-i]), x[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Lm" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
m=2
mean_data = mean(y)
mean_x = mean(x)
diff_mean_sqr = (y - mean_data) %*% (y - mean_data)
beta_1 = ((y - mean_data) %*% (x - mean_x)) / diff_mean_sqr
beta_0 = mean_x - c(beta_1) * mean_data
x_hat = beta_0 + c(beta_1) * y
residuals = x - x_hat
h_ii = ((y - mean_data) ^ 2) / c(diff_mean_sqr) + (1 / n)
var_e = sqrt(vapply(1:n, function(i){
fit <- lm(x[-i] ~ y[-i])
sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
SE_regression = var_e * (sqrt(1 - h_ii))
studentized_residuals = residuals / SE_regression
},
"Basic" = {
n = 15
set.seed(1)
y = rnorm(n)
x <- rnorm(n)
reg <- lm(x ~ y)
reg_stud <- rstudent(reg)
},
replications = 500,
columns = c("test", "elapsed", "replications"))
Results:
On this single benchmark, rstudent(lm()) is much faster than everything else:
test elapsed replications
7 .Fit 13.84 100
10 Basic 0.25 100
3 Const 7.37 100
5 Fast 99.84 100
8 Fit 7.06 100
9 Lm 105.25 100
1 OneCast 7.61 100
4 Sexp 7.66 100
6 Speed 184.76 100
2 TwoCast 7.17 100
7 .Fit 63.63 500
10 Basic 0.93 500
3 Const 34.44 500
5 Fast 438.95 500
8 Fit 31.11 500
9 Lm 471.37 500
1 OneCast 34.29 500
4 Sexp 33.48 500
6 Speed 794.73 500
2 TwoCast 33.51 500
Interpretation
It seems that R uses an analytical alternative that avoids running n regressions, resulting in a much faster computation.
Thus, the question remains: how can one be competitive with rstudent(lm()), and how can one reverse-engineer the original source code (which is difficult to gather)?
We compared the solutions of @Onyambu, @tester and @Stéphane Laurent. We found the solution of @Onyambu to be the fastest for the various vector sizes, while producing results exactly equal to those of rstudent().
Upvotes: 4
Views: 1651
Reputation: 79318
Edit: this is to indicate that a faster rstudent function than the one previously given was found:
fast_rstudent <- function(X, y, intercept = TRUE, tol = 1e-7){
  mqr <- .Call(stats:::C_Cdqrls, cbind(intercept, X), y, tol, FALSE)
  res <- .Call(stats:::C_influence, mqr, mqr$residuals, 1e-12)
  mqr$residuals/(res$sigma*sqrt(1-res$hat))
}
So far this function is very fast.
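A quick check that it matches the built-in (my addition, not part of the original answer):

set.seed(1)
x <- rnorm(1000); y <- rnorm(1000)
all.equal(unname(fast_rstudent(x, y)), unname(rstudent(lm(y ~ x))))  # expected TRUE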
Since you are using R, you can use a QR decomposition to solve this. Your aim is to write an rstudent function that is faster than the built-in function by getting rid of the overhead of wrapper function calls, etc. That means you should only use the necessary internal functions. Below is a quick way to do this:
my_rstudent <- function (X, y, intercept = TRUE) {
  X <- cbind(intercept, X)
  u <- .Call(stats:::C_Cdqrls, X, y, 1e-7, FALSE)   # QR least-squares fit
  d <- dim(X)
  n <- as.integer(d[1L])
  k <- as.integer(d[2L])
  df_res <- n - k
  z <- .Internal(diag(1, n, k))
  v <- .Fortran(.F_dqrqy, as.double(u$qr), n, k, as.double(u$qraux),
                z, k, qy = z)$qy                    # first k columns of Q
  h_ii <- .Internal(rowSums(v^2, n, k, FALSE))      # leverages
  rstand <- u$residuals/sqrt(sum(u$residuals^2)/df_res)/sqrt(1-h_ii)  # internal
  rstand * sqrt((df_res - 1)/(df_res - rstand^2))   # convert internal to external
}
In a way, this function misuses R by stripping away the overhead functions almost entirely. It assumes that the input given to the function is correct.
Results:
n = 10
set.seed(1)
x = rnorm(n)
y = rnorm(n)
cbind(mine=my_rstudent(x, y), from_R=rstudent(lm(y~x)))
mine from_R
1 0.92113157 0.92113157
2 0.15753536 0.15753536
3 -1.69587949 -1.69587949
4 -3.59182456 -3.59182456
5 0.98274664 0.98274664
6 -0.85765961 -0.85765961
7 -0.07768369 -0.07768369
8 1.05874766 1.05874766
9 0.80181623 0.80181623
10 0.11418833 0.11418833
Benchmark:
microbenchmark::microbenchmark(my_rstudent(x, y),rstudent(lm(y~x)),unit="relative", times = 100)
Unit: relative
expr min lq mean median uq max neval
my_rstudent(x, y) 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 100
rstudent(lm(y ~ x)) 45.36667 37.20755 26.89753 24.29545 22.39587 11.31733 100
With a small dataset, the overhead functions quite noticeably slow down the computation of rstudent.
Relatively large dataset:
n = 1000
set.seed(1)
x = rnorm(n)
y = rnorm(n)
microbenchmark::microbenchmark(my_rstudent(x, y),rstudent(lm(y~x)),unit="relative", times = 100)
Unit: relative
expr min lq mean median uq max neval
my_rstudent(x, y) 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 100
rstudent(lm(y ~ x)) 8.530228 8.059269 7.700426 7.848123 7.616909 3.877305 100
Huge dataset:
n = 1000000
set.seed(1)
x = rnorm(n)
y = rnorm(n)
microbenchmark::microbenchmark(my_rstudent(x, y),rstudent(lm(y~x)),unit="relative", times = 10)
Unit: relative
expr min lq mean median uq max neval
my_rstudent(x, y) 1.000000 1.000000 1.000000 1.000000 1.000000 1.00000 10
rstudent(lm(y ~ x)) 1.510198 1.560989 1.486083 1.666609 1.603455 1.01154 10
Very large dataset:
n = 10000000
set.seed(1)
x = rnorm(n)
y = rnorm(n)
microbenchmark::microbenchmark(my_rstudent(x, y),rstudent(lm(y~x)),unit="relative", times = 10)
Unit: relative
expr min lq mean median uq max neval
my_rstudent(x, y) 1.000000 1.000000 1.000000 1.00000 1.000000 1.000000 10
rstudent(lm(y ~ x)) 1.603652 1.603881 1.534455 1.58802 1.560724 1.305315 10
microbenchmark::microbenchmark(my_rstudent(x, y),rstudent(lm(y~x)), times = 10)
Unit: seconds
expr min lq mean median uq max neval
my_rstudent(x, y) 1.584408 1.619822 1.727310 1.658917 1.757311 2.213203 10
rstudent(lm(y ~ x)) 2.458445 2.619609 2.705212 2.696705 2.776588 2.949799 10
Upvotes: 1
Reputation: 1692
I think the solution to your problem will be to first drop all unnecessary overhead from the functions; if that is not fast enough, try converting the code to C++ and running it with Rcpp. It is very likely that you'll be able to improve on my results if you compute the residuals from .lm.fit using your own implementation, instead of using lm.fit as I did.
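For reference, a minimal sketch of the .lm.fit call alluded to above (my addition, not wired into the benchmark below): .lm.fit skips even more input handling than lm.fit and returns the residuals directly.

tmp <- .lm.fit(cbind(1, rnorm(100)), rnorm(100))  # bare-bones QR fit
head(tmp$residuals)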
I also checked whether there's a difference in the studentized residuals depending on the function used (lm, lm.fit, .lm.fit), and it turns out that this is the case. However, the residuals from my function here are equal to those produced by MASS::studres for a regression of y ~ x with x having only one column.
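The discrepancy visible in the comparison below stems from the model specification rather than from the studentizing itself: lm(x ~ y) adds an intercept and regresses x on y, whereas lm.fit(x, y) fits exactly the columns it is given (no intercept) with y as the response. A sketch reconciling the two (my addition, not from the original answer):

set.seed(1)
xv <- rnorm(500)
yv <- rnorm(500)
a <- MASS::studres(lm(yv ~ xv))               # yv on xv, intercept added by the formula
b <- MASS::studres(lm.fit(cbind(1, xv), yv))  # same model, design matrix built by hand
all.equal(unname(a), unname(b))               # expected TRUE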
Here's my code and a benchmark versus the fastest version from above called "Basic":
library(rbenchmark)
library(microbenchmark)
library(MASS)
set.seed(1)
x <- matrix(rnorm(500), ncol = 1)
y <- matrix(rnorm(500), ncol = 1)
myFunc <- function(x, y, n = 500){
  # tmp <- .lm.fit(x, y) # linear model fit
  object <- lm.fit(x = x, y = y)
  resid <- object$residuals
  hat <- lm.influence(object, do.coef = FALSE)$hat
  # hat <- hat[hat > 0] # remove checks
  # ok <- !(is.na(resid)) # remove checks
  # n.miss <- sum(!ok) # remove checks
  # resid <- resid[ok] # remove checks
  # n <- length(resid)
  # p <- object$rank # equal to one
  p <- 1
  rdf <- n - 1
  stddev <- sqrt(sum(resid^2)/rdf)
  sr <- resid/(sqrt(1 - hat) * stddev)              # internally studentized
  studres <- sr/sqrt((n - p - sr^2)/(n - p - 1))    # externally studentized
  studres <- naresid(object$na.action, studres)
  return(studres)
}
test1 <- stats::rstudent(lm(x ~ y)) # rstudent doesn't work with lm.fit
test2 <- MASS::studres(lm(x ~ y))
test3 <- MASS::studres(lm.fit(x, y))
test4 <- myFunc(x, y, n = 500)
> head(cbind(test1, test2, test3, test4))
test1 test2 test3 test4
1 -0.6368094 -0.6368094 0.04696790 0.04696790
2 0.1493050 0.1493050 -0.27286396 -0.27286396
3 -0.8941217 -0.8941217 -1.15505676 -1.15505676
4 1.5598965 1.5598965 0.07729179 0.07729179
5 0.3440252 0.3440252 0.95155123 0.95155123
6 -0.7714317 -0.7714317 1.47600416 1.47600416
####################################
mbm <- microbenchmark("lm" = {rstudent(lm(y~x)) },
"MASS_lm" = {
MASS::studres(lm(y~x))
},
"MASS_lm.fit" = {
MASS::studres(lm.fit(x = x , y = y))
},
"myFunc" = {myFunc(x, y, n = 500)},
times = 100
)
> mbm
Unit: microseconds
expr min lq mean median uq max neval
lm 767.001 869.1510 1188.023 977.1505 1185.5010 8279.801 100
MASS_lm 704.601 909.2000 1085.261 997.3515 1168.8505 2052.202 100
MASS_lm.fit 168.001 195.0510 282.166 212.9510 254.1015 2912.201 100
myFunc 147.901 168.8015 234.261 190.0010 249.7515 1193.701 100
Please note that you'll have to specify n according to the length of the vectors x and y.
Upvotes: 1
Reputation: 84639
One gets the same results by replacing your var_e with:
var_e = vapply(1:n, function(i){
  sigma(lm(x[-i] ~ y[-i]))
}, numeric(1))
To get that efficiently, do not use lm but lm.fit:
var_e = sqrt(vapply(1:n, function(i){
  fit <- lm.fit(cbind(1, y[-i]), x[-i])
  sum(fit$residuals^2)
}, numeric(1)) / (n-m-1))
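To see why the two versions agree (a sketch under the question's setup): each leave-one-out fit has n-1 observations and 2 coefficients, hence n-m-1 residual degrees of freedom with m = 2.

i <- 1
fit <- lm.fit(cbind(1, y[-i]), x[-i])
sqrt(sum(fit$residuals^2) / (n - m - 1))  # same value as sigma(lm(x[-i] ~ y[-i]))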
Upvotes: 3