J.Z.

Reputation: 435

Why is the computation faster with a loop?

Update:

As pointed out by @Dave2e, moving the start_time statement in code 2 out of the for loop makes the running time comparable to that of code 1. Namely:

start_time <- Sys.time()
for (j in 1:1) {
   n <- 100
   result <- 1
   for (i in 1:n) {
     result <- result * i
   }
   result
   ## [1] 9.332622e+157
   end_time <- Sys.time()
}
end_time - start_time

Does the for loop really improve performance, or is the speedup illusory?


Original Post:

I have two pieces of code as follows:

Code 1:

start_time <- Sys.time()
n <- 100
result <- 1
for (i in 1:n) {
   result <- result * i
}
result
## [1] 9.332622e+157
end_time <- Sys.time()
end_time - start_time

Code 2:

for (j in 1:1) {
   start_time <- Sys.time()
   n <- 100
   result <- 1
   for (i in 1:n) {
      result <- result * i
   }
   result
   ## [1] 9.332622e+157
   end_time <- Sys.time()
}
end_time - start_time

I was expecting these two pieces of code to run similarly, but code 2 consistently runs significantly faster than code 1. On my computer, code 1 takes about 10^-2 seconds, while code 2 takes about 5*10^-6 seconds. Any insight into how this could happen? If simply wrapping the whole thing in a for loop can decrease the running time, I will use it in all my code in the future.

Upvotes: 0

Views: 131

Answers (1)

Ben Bolker

Reputation: 226182

I don't think your comparison is very robust. It's very hard to tell anything about the relative timing of very fast code without running it many times and averaging; too many uncontrollable factors can change the running time slightly.

The conclusion I would draw from the benchmarks below is that encapsulating a fairly trivial computation in a redundant for loop doesn't hurt very much, but that any apparent advantage is trivial and probably just an effect of noise.
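
As a quick sketch of that noise (my illustration, not part of the benchmarks below): timing the same 100-step loop a handful of times with Sys.time() already shows run-to-run variation on the order of the quantity being measured.

## sketch: repeat the questioner's single Sys.time() measurement a few times
one_run <- function() {
   t0 <- Sys.time()
   result <- 1
   for (i in 1:100) result <- result * i
   as.numeric(Sys.time() - t0, units = "secs")
}
replicate(5, one_run())   ## the five numbers typically bounce around noticeably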

I encapsulated each of your code chunks in a function (with_loop and without_loop) by putting function() { ... } around each of them. (Note that this means I'm not basing the timings on your Sys.time() comparisons, but on the built-in timing in the microbenchmark package.)
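
Roughly, something like this (a sketch of the two wrappers; the Sys.time() bookkeeping is dropped, since microbenchmark does the timing itself):

## with_loop keeps the redundant outer for (j in 1:1); without_loop does not
with_loop <- function() {
   for (j in 1:1) {
      n <- 100
      result <- 1
      for (i in 1:n) {
         result <- result * i
      }
   }
   result
}

without_loop <- function() {
   n <- 100
   result <- 1
   for (i in 1:n) {
      result <- result * i
   }
   result
}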

The microbenchmark package is better suited to benchmarking, especially for very short computations. From ?microbenchmark::microbenchmark:

‘microbenchmark’ serves as a more accurate replacement of the often seen ‘system.time(replicate(1000, expr))’ expression. It tries hard to accurately measure only the time it takes to evaluate ‘expr’. To achieve this, the sub-millisecond (supposedly nanosecond) accurate timing functions most modern operating systems provide are used. Additionally all evaluations of the expressions are done in C code to minimize any overhead.

library(microbenchmark)
m1 <- microbenchmark(with_loop, without_loop)
library(ggplot2)
autoplot(m1) + scale_y_log10()

The quantiles (lq, median, uq) are practically identical.

Unit: nanoseconds
         expr min lq   mean median uq   max neval cld
    with_loop  36 38  48.56     39 40   972   100   a
 without_loop  36 39 177.81     40 41 13363   100   a

(plot: autoplot of the benchmark timings for with_loop and without_loop on a log10 scale)

The code without the loop is indeed slower on average (i.e. it has a larger mean), but this is almost entirely driven by a couple of outliers.
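
One way to see this (a sketch; the microbenchmark result is just a data frame with expr and time columns, time in nanoseconds):

## median, 99th percentile and maximum per expression; any difference in the
## means should show up only in the extreme tail
aggregate(time ~ expr, data = m1,
          FUN = function(x) quantile(x, c(0.5, 0.99, 1)))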

Now focus just on values less than 50 nanoseconds:

autoplot(m1) + scale_y_log10(limits = c(NA, 50))

(plot: the same autoplot, zoomed to timings under 50 nanoseconds)

If we do this again with times=1e6 (a million iterations), we get almost identical results: the mean with the loop is 3 nanoseconds faster (again probably almost entirely driven by small fluctuations in the upper tail).
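
The call is the same apart from the times argument, something like this (note that a million evaluations takes a while to run):

## re-run the benchmark with a million evaluations per expression
m2 <- microbenchmark(with_loop, without_loop, times = 1e6)
m2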

Unit: nanoseconds
         expr min lq     mean median uq     max neval cld
    with_loop  32 39 86.44248     41 61 2474675 1e+06   a
 without_loop  35 39 89.86294     41 61 2915836 1e+06   a

If you need to run your loop a billion times, this will correspond to a 3-second difference in run time. Probably not worth worrying about.
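
That's just the per-iteration difference scaled up, as a back-of-the-envelope check:

## 3 ns per iteration difference, times a billion iterations
3e-9 * 1e9   ## about 3 seconds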

Upvotes: 3
