Boog

Reputation: 223

Get first value that matches condition (loop too slow)

I've got a lot of matrices similar to this one, but with thousands of rows:

r <- 10
c <- 2
set.seed(333)

m1 <- matrix(runif(r*c)+1, r, c)

> m1
          [,1]     [,2]
 [1,] 1.467001 1.393902
 [2,] 1.084598 1.474218
 [3,] 1.973485 1.891222
 [4,] 1.571306 1.665011
 [5,] 1.020119 1.736832
 [6,] 1.723557 1.911469
 [7,] 1.609394 1.637850
 [8,] 1.306719 1.864651
 [9,] 1.063510 1.287575
[10,] 1.305353 1.129959

I've got a loop that tells me, for each value in the first column, the index of the first value in the second column that is more than 10% higher, like so:

result <- 1:nrow(m1)   # preallocate the result vector

for (i in 1:nrow(m1)){
    # index of the first value in column 2 exceeding 1.1 * the i-th value of column 1
    result[i] <- which(m1[,2]>(1.1*m1[,1][i]))[1]
}
> result
 [1]  3  1 NA  3  1  6  3  2  1  2

I've got so many matrices that it's taking hours, and after profiling my code, the most time-consuming part by far is this loop. What is, in your opinion, the fastest way to do it?

For example, with r = 30000:

start_time <- Sys.time()

for (i in 1:nrow(m1)){
    result[i] <- which(m1[,2]>(1.1*m1[,1][i]))[1]
}

end_time <- Sys.time()
a <- end_time - start_time

> a
Time difference of 11.25815 secs

Thanks for your help!

Upvotes: 19

Views: 1332

Answers (5)

lroha

Reputation: 34291

I don't imagine this is the fastest way, but it should be somewhat faster than the current for loop approach.

plus.10 <- m1[, 1] * 1.1
m2 <- m1[, 2]

# which.min() on a logical vector returns the position of the first FALSE,
# i.e. the first index where m2 is not below the threshold
result <- sapply(plus.10, function(x) which.min(m2 < x))

# which.min() returns 1 when every element is TRUE (no match), so flag those cases as NA
result[plus.10 > max(m2)] <- NA

result
[1]  3  1 NA  3  1  6  3  2  1  2

Edit: As requested by Ronak, microbenchmark results of the solutions proposed so far on 10000 rows:

Unit: milliseconds
   expr        min        lq       mean      median          uq         max neval   cld
     h1 335.342689 337.35915 361.320461  341.804840  347.856556  516.230972    25  b   
 sindri 672.587291 688.78673 758.445467  713.240778  811.298608 1049.109844    25    d 
     op 865.567412 884.99514 993.066179 1006.694036 1026.434344 1424.755409    25     e
   loco 675.809092 682.98591 731.256313  693.672064  807.007358  821.893865    25    d 
 dmitry 420.869493 427.56492 454.439806  433.656519  438.367480  607.030825    25   c  
    jad   4.369628   4.41044   4.735393    4.503657    4.556527    7.488471    25 a  
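For anyone wanting to reproduce a comparison like this, a minimal sketch is below. The wrapper names (op_loop, h1_whichmin) are mine and simply package the OP's loop and the code above into functions; the other solutions would be wrapped the same way.

library(microbenchmark)

# Hypothetical wrapper around the OP's original loop
op_loop <- function(m1) {
  res <- rep(NA_integer_, nrow(m1))
  for (i in 1:nrow(m1)) res[i] <- which(m1[, 2] > 1.1 * m1[i, 1])[1]
  res
}

# Hypothetical wrapper around the which.min() approach above
h1_whichmin <- function(m1) {
  plus.10 <- m1[, 1] * 1.1
  m2 <- m1[, 2]
  res <- sapply(plus.10, function(x) which.min(m2 < x))
  res[plus.10 > max(m2)] <- NA
  res
}

microbenchmark(op = op_loop(m1), h1 = h1_whichmin(m1), times = 25)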

Upvotes: 9

JAD

Reputation: 2170

There are some shortcuts you can take here. You are looking for the first value in column 2 that is higher than some other value. This means that it is never worth looking at values that are lower than what we have previously seen in column 2.

In your example with 10 rows, that would be as follows:

> cummax(m1[, 2])
 [1] 1.393902 1.474218 1.891222 1.891222 1.891222 1.911469 1.911469 1.911469 1.911469 1.911469
> which(cummax(m1[, 2]) == m1[, 2])
[1] 1 2 3 6

And as you can see, these are the only index values that appear in your result vector.

A second optimisation is to order the first column. If you start with the lowest value and work your way up, you don't have to scan the second column from the start each time. You only need to step to the next row of the second column once there are no more matches for the current value from the first column.

This does bear the cost of sorting the matrix, but afterwards the result can be found using a single pass through both columns.

dostuff <- function(m1){
  # indices of column 1, from the lowest value to the highest
  orderColumn1 <- order(m1[, 1])

  plus.10 <- m1[, 1] * 1.1

  results <- rep(NA, length(plus.10))

  IndexColumn1 <- 1
  IndexColumn2 <- 1
  row2CurrentMax <- 0
  while(IndexColumn2 <= nrow(m1)){
    row2Current <- m1[IndexColumn2, 2]
    # only rows that set a new running maximum of column 2 can ever be a first match
    if(row2Current > row2CurrentMax){
      row2CurrentMax <- row2Current
      # assign this row to every remaining column-1 value (in sorted order) it now covers;
      # the bound on IndexColumn1 avoids an out-of-range index once all values are matched
      while(IndexColumn1 <= length(plus.10)){
        row1Current <- plus.10[orderColumn1[IndexColumn1]]
        if(row1Current <= row2CurrentMax){
          results[orderColumn1[IndexColumn1]] <- IndexColumn2
          IndexColumn1 <- IndexColumn1 + 1
        } else {
          break
        }
      }
    }
    IndexColumn2 <- IndexColumn2 + 1
  }
  results
}

With 30000 rows:

> start_time <- Sys.time()
> result <- dostuff(m1)
> end_time <- Sys.time()
> a <- end_time - start_time
> a
Time difference of 0.0600059 secs
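
As a quick sanity check (my addition, not part of the original answer), the output can be compared against the naive loop on the same matrix:

# Reference result from the original loop
loop_result <- rep(NA_integer_, nrow(m1))
for (i in 1:nrow(m1)) loop_result[i] <- which(m1[, 2] > 1.1 * m1[i, 1])[1]

fast_result <- dostuff(m1)

# Same indices and the same NA positions?
all(loop_result == fast_result, na.rm = TRUE)
identical(which(is.na(loop_result)), which(is.na(fast_result)))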

Upvotes: 11

Dmitriy

Reputation: 939

The best way to optimize your code is to use the data.table package.

This code gives you a more than 2x speed-up.

library(data.table);

setDTthreads(0);

r <- 30000;
c <- 2;
set.seed(333);

m1 <- matrix(runif(r*c)+1, r, c);
result1 <- rep(NA, nrow(m1));

start_time <- Sys.time();

for (i in 1:nrow(m1))
{
    result1[i] <- which(m1[,2]>(1.1*m1[,1][i]))[1];
}

#result1

end_time <- Sys.time()
a <- end_time - start_time
a


start_time <- Sys.time()

tstDT <- data.table(m1);
#result2 <- tstDT[, sapply(V1, function(elem) { which(V2 > 1.1*elem)[1] })]
result2 <- tstDT[, sapply(V1, function(x) match(TRUE, V2 > 1.1*x) )]

#result2

end_time <- Sys.time()
a <- end_time - start_time
a

A small note: I use data.table compiled with gcc using -march=native and -O3. With -O2 and -march=core (as in the standard installed package) the speed-up will probably be smaller, but it should still be there.

Result:

> library(data.table);
> 
> setDTthreads(0);
> 
> r <- 30000;
> c <- 2;
> set.seed(333);
> 
> m1 <- matrix(runif(r*c)+1, r, c);
> result1 <- rep(NA, nrow(m1));
> 
> start_time <- Sys.time();
> 
> for (i in 1:nrow(m1))
+ {
+     result1[i] <- which(m1[,2]>(1.1*m1[,1][i]))[1];
+ }
> 
> #result1
> 
> end_time <- Sys.time()
> a <- end_time - start_time
> a
Time difference of 8.738938 secs
> 
> 
> start_time <- Sys.time()
> 
> tstDT <- data.table(m1);
> #result2 <- tstDT[, sapply(V1, function(elem) { which(V2 > 1.1*elem)[1] })]
> result2 <- tstDT[, sapply(V1, function(x) match(TRUE, V2 > 1.1*x) )]
> 
> #result2
> 
> end_time <- Sys.time()
> a <- end_time - start_time
> a
Time difference of 3.582921 secs
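As a hedged sanity check (my addition, not from the original answer), result2 from the data.table/match() approach should agree with result1 from the loop, including where the NAs fall:

# Same values where both are defined, and the same NA positions
all(result1 == result2, na.rm = TRUE)
identical(which(is.na(result1)), which(is.na(result2)))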

Upvotes: 2

s_baldur

Reputation: 33498

Here is an attempt using match() which reduces the time compared to the r = 30000 example in the original post by about 25%.

sapply(m1[, 1] * 1.1, function(x) match(TRUE, m1[, 2] > x))

[1]  3  1 NA  3  1  6  3  2  1  2
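
A small variation (my addition, not from the answer): vapply() with an explicit return type documents that an integer index (or NA_integer_) is expected; any speed difference versus sapply() is usually minor.

# vapply() with an integer(1) template; match() returns NA_integer_ when
# no element of column 2 exceeds the threshold
vapply(m1[, 1] * 1.1, function(x) match(TRUE, m1[, 2] > x), integer(1))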

Upvotes: 3

LocoGris

Reputation: 4480

I propose these two variants:

r <- 30000
c <- 2
set.seed(333)

m1 <- matrix(runif(r*c)+1, r, c)
x2 <- m1[, 2]

start_time <- Sys.time()

# Variant 1: subset column 2 inside the anonymous function on every iteration
result <- lapply(m1[, 1], function(x) {
  min(which(m1[, 2] > (1.1*x)))
})
end_time <- Sys.time()
a <- end_time - start_time
a

start_time <- Sys.time()

# Variant 2: reuse the pre-extracted column x2, avoiding repeated subsetting
result <- lapply(m1[, 1], function(x) {
  min(which(x2 > (1.1*x)))
})
end_time <- Sys.time()
a <- end_time - start_time
a

First one: 8.6 s. Second one: 6.4 s.
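
One caveat (my addition, not from the answer): when no value in column 2 qualifies, min(which(...)) returns Inf and emits a warning, whereas the OP's loop returns NA. A small NA-preserving variant, reusing x2 from above:

# Returns NA instead of Inf when there is no qualifying value in column 2
result <- lapply(m1[, 1], function(x) {
  idx <- which(x2 > (1.1 * x))
  if (length(idx) == 0) NA_integer_ else idx[1]
})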

Upvotes: 1
