Reputation: 1
The code below runs for a long time. It operates on a data frame of 400k rows and fills in 5 target columns per row. Can it be optimized to reduce the processing time?
maxn <- function(n) function(x) order(x, decreasing = TRUE)[n]
for (i in 1:nrow(x)) {
  for (j in 1:(5 - x$count[i])) {
    if (x$count[i] < 5) {
      x[, j + 17][i] <- colnames(x[, 2:6])[maxn(j)(x[i, 12:16])]
    } # else {x[, j + 17][i] <- 0}
  }
}
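To clarify what the helper does: `maxn(n)` builds a function that returns the index of the n-th largest element of a vector.

```r
maxn <- function(n) function(x) order(x, decreasing = TRUE)[n]

maxn(1)(c(3, 1, 4))  # 3 -- index of the largest value (4)
maxn(2)(c(3, 1, 4))  # 1 -- index of the 2nd largest value (3)
```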
Upvotes: 0
Views: 53
Reputation:
library(microbenchmark)
big <- 100
x.orig <- matrix(sample(1:10, big * 22, replace = TRUE), nrow = big, ncol = 22)
x.orig <- as.data.frame(x.orig)
x.orig$count <- sample(1:5, big, replace = TRUE)
x.orig[, 17:22] <- NA
colnames(x.orig)[2:6] <- letters[1:5]
This is what my sample data looks like:
head(x.orig)
V1 a b c d e V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18 V19 V20 V21 V22 count
1 4 2 4 10 2 8 3 7 3 7 2 3 4 4 5 1 NA NA NA NA NA NA 1
2 6 4 2 5 7 6 1 6 6 8 1 6 6 10 4 6 NA NA NA NA NA NA 5
3 5 9 8 9 6 2 6 6 10 5 10 9 9 7 5 6 NA NA NA NA NA NA 5
4 3 1 1 4 4 1 8 7 1 3 9 4 6 9 5 5 NA NA NA NA NA NA 3
5 2 10 6 10 9 1 3 7 8 8 7 2 2 10 6 8 NA NA NA NA NA NA 3
6 8 2 4 3 3 2 10 6 7 3 2 2 3 5 10 7 NA NA NA NA NA NA 2
Let's test your code:
maxn <- function(n) function(x) order(x, decreasing = TRUE)[n]
microbenchmark({
  x <- x.orig
  for (i in 1:nrow(x)) {
    for (j in 1:(5 - x$count[i])) {
      if (x$count[i] < 5) {
        x[, j + 17][i] <- colnames(x[, 2:6])[maxn(j)(x[i, 12:16])]
      } # else {x[, j + 17][i] <- 0}
    }
  }
}, times = 10)
# min lq mean median uq max neval
# 134.2846 142.5086 163.6631 144.2383 159.6705 326.5948 10
So what's happening here?
head(x[,12:23])
V12 V13 V14 V15 V16 V17 V18 V19 V20 V21 V22 count
1 3 4 4 5 1 NA d b c a NA 1
2 6 6 10 4 6 NA <NA> <NA> <NA> <NA> NA 5
3 9 9 7 5 6 NA <NA> <NA> <NA> <NA> NA 5
4 4 6 9 5 5 NA c b <NA> <NA> NA 3
5 2 2 10 6 8 NA c e <NA> <NA> NA 3
6 2 3 5 10 7 NA d e c <NA> NA 2
I get it: for each row you're reporting the column letters (a–e) of the 5 - count largest values in columns 12:16.
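Checking row 1 by hand confirms it: count = 1, so 4 letters get written, picked by ranking V12:V16.

```r
vals <- c(3, 4, 4, 5, 1)                         # V12:V16 of row 1 above
letters[1:5][order(vals, decreasing = TRUE)][1:4]
# "d" "b" "c" "a" -- matches V18:V21 of row 1
```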
microbenchmark({
  x1 <- x.orig
  output <- apply(x1[, c('count', paste0('V', 12:16))], 1, function(y) {
    ct <- y[1]
    if (ct >= 5) return(rep(NA, 5))
    res <- order(y[2:6], decreasing = TRUE)  # column indices, largest value first
    res[(6 - ct):5] <- NA                    # keep only the first 5 - ct indices
    res
  })
  output <- t(output)                        # apply returns one column per input row
  output[] <- colnames(x1)[2:6][output]      # translate indices into the letters a-e
  x1[, 18:22] <- output
}, times = 10)
# min lq mean median uq max neval
# 3.244582 3.438222 3.695123 3.616348 4.015643 4.282772 10
About 40× faster (median 144 ms vs. 3.6 ms).
head(x1[,12:23])
V12 V13 V14 V15 V16 V17 V18 V19 V20 V21 V22 count
1 3 4 4 5 1 NA d b c a <NA> 1
2 6 6 10 4 6 NA <NA> <NA> <NA> <NA> <NA> 5
3 9 9 7 5 6 NA <NA> <NA> <NA> <NA> <NA> 5
4 4 6 9 5 5 NA c b <NA> <NA> <NA> 3
5 2 2 10 6 8 NA c e <NA> <NA> <NA> 3
6 2 3 5 10 7 NA d e c <NA> <NA> 2
Looks the same. I checked this with 10,000 rows and it still runs in about a tenth of a second.
What's the trick? Mostly avoiding element-by-element assignment into a data frame: each `x[, j + 17][i] <- ...` goes through data-frame replacement methods and copying, while `apply` works on a plain matrix and the result is written back in a single block assignment.
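A minimal illustration of that overhead (my own sketch, not part of the answer above): the same one-element-at-a-time writes are much slower into a data frame than into a matrix.

```r
library(microbenchmark)

n <- 1000
df <- as.data.frame(matrix(0, nrow = n, ncol = 5))
m  <- matrix(0, nrow = n, ncol = 5)

microbenchmark(
  data.frame = for (i in 1:n) df[i, 1] <- i,  # replacement method + copying per write
  matrix     = for (i in 1:n) m[i, 1]  <- i,  # plain vector write
  times = 10
)
```

Both loops produce identical values; only the container differs.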
Upvotes: 2