yvanpapa

Reputation: 11

R: Speed up a for loop on a very large data frame?

I have a huge set of coordinates with associated Z-values. Some of the pairs of coordinates are repeated several times with different Z values. I want to obtain the mean of all Z-values for each unique pair of coordinates.

I wrote a small piece of code that works perfectly fine on a small data frame. The problem is that my actual data frame has more than 2 million rows, and the computation takes more than 10 hours to complete. I was wondering if there is a way to make it more efficient and reduce the computation time.

Here is what my df looks like:

> df
           x        y         Z                                 xy
1  -54.60417 4.845833 0.3272980 -54.6041666666667/4.84583333333333
2  -54.59583 4.845833 0.4401644 -54.5958333333333/4.84583333333333
3  -54.58750 4.845833 0.5788663          -54.5875/4.84583333333333
4  -54.57917 4.845833 0.6611844 -54.5791666666667/4.84583333333333
5  -54.57083 4.845833 0.7830828 -54.5708333333333/4.84583333333333
6  -54.56250 4.845833 0.8340629          -54.5625/4.84583333333333
7  -54.55417 4.845833 0.8373666 -54.5541666666667/4.84583333333333
8  -54.54583 4.845833 0.8290986 -54.5458333333333/4.84583333333333
9  -54.57917 4.845833 0.9535526 -54.5791666666667/4.84583333333333
10 -54.59583 4.837500 0.0000000           -54.5958333333333/4.8375
11 -54.58750 4.845833 0.8582580          -54.5875/4.84583333333333
12 -54.58750 4.845833 0.3857006          -54.5875/4.84583333333333

You can see that some xy coordinates are the same (e.g. rows 3, 11 and 12, or rows 4 and 9), and I want the mean Z value for each set of identical coordinates. So here is my script:

# pre-allocate the result vector ("means" rather than "mean", to avoid shadowing base::mean)
means <- vector(mode = "numeric", length = nrow(df))

# for each row, average Z over all rows sharing the same xy key
for (i in 1:nrow(df)) {
  means[i] <- mean(df$Z[df$xy == df$xy[i]])
}
df$mean <- means
df <- df[, -(3:4)]  # drop the Z and xy columns
df <- unique(df)    # keep one row per coordinate pair

And I get something like this:

> df
           x        y      mean
1  -54.60417 4.845833 0.3272980
2  -54.59583 4.845833 0.4401644
3  -54.58750 4.845833 0.6076083
4  -54.57917 4.845833 0.8073685
5  -54.57083 4.845833 0.7830828
6  -54.56250 4.845833 0.8340629
7  -54.55417 4.845833 0.8373666
8  -54.54583 4.845833 0.8290986
10 -54.59583 4.837500 0.0000000

That does the job, but each iteration compares df$xy[i] against the entire xy column, so the loop is effectively O(n²) in the number of rows. Surely there is a way to speed this up (probably without the for loop) for a data frame with a much larger number of rows?

Upvotes: 1

Views: 476

Answers (2)

Hugh

Reputation: 16090

Welcome! In future, it would be best to offer a quick way for us to copy and paste some code that generates the essential features of your dataset. Here is an example, I think:

DF <- data.frame(x = sample(c(-54.1, -54.2), size = 10, replace = TRUE),
                 y = sample(c(4.8, 4.4), size = 10, replace = TRUE),
                 z = runif(10))

This looks to be a standard split-apply-combine problem:

set.seed(1)
df <- data.frame(x = sample(c(-54.1, -54.2), size = 10, replace = TRUE),
                 y = sample(c(4.8, 4.4), size = 10, replace = TRUE),
                 z = runif(10))

library(data.table)
DT <- as.data.table(df)
DT[, .(mean_z = mean(z)), keyby = c("x", "y")]
#>        x   y    mean_z
#> 1: -54.2 4.4 0.3491507
#> 2: -54.2 4.8 0.4604533
#> 3: -54.1 4.4 0.3037848
#> 4: -54.1 4.8 0.5734239

library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:data.table':
#> 
#>     between, first, last
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union
df %>%
  group_by(x, y) %>%
  summarise(mean_z = mean(z))
#> # A tibble: 4 x 3
#> # Groups:   x [?]
#>       x     y mean_z
#>   <dbl> <dbl>  <dbl>
#> 1 -54.2   4.4  0.349
#> 2 -54.2   4.8  0.460
#> 3 -54.1   4.4  0.304
#> 4 -54.1   4.8  0.573

Created on 2018-09-21 by the reprex package (v0.2.1)
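As an aside (an addition, not part of the original answer): as.data.table() makes a full copy of its input, which matters for a 2-million-row frame. A sketch, reusing the example data above, that converts in place with setDT() instead:

```r
library(data.table)

set.seed(1)
df <- data.frame(x = sample(c(-54.1, -54.2), size = 10, replace = TRUE),
                 y = sample(c(4.8, 4.4), size = 10, replace = TRUE),
                 z = runif(10))

setDT(df)  # convert to data.table in place: no copy of the (potentially huge) frame
result <- df[, .(mean_z = mean(z)), keyby = .(x, y)]
```

Note that setDT() modifies df by reference, so the original data.frame is gone afterwards; stick with as.data.table() if you need to keep both.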

Upvotes: 2

neilfws

Reputation: 33772

You could try dplyr::summarise.

library(dplyr)
df %>%
  group_by(x, y) %>%
  summarise(meanZ = mean(Z))

I'd guess this would take less than a minute, depending on your machine.

Someone else might provide a data.table answer, which may be even quicker.
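For completeness (an addition, not from the original answer): base R's aggregate() handles the same grouping without any extra packages, though it is typically slower than dplyr or data.table at this scale. A sketch assuming the column names from the question (uppercase Z), on a tiny made-up frame:

```r
# a small frame with one repeated coordinate pair
df <- data.frame(x = c(-54.58750, -54.58750, -54.57917),
                 y = c(4.845833, 4.845833, 4.845833),
                 Z = c(0.5788663, 0.8582580, 0.6611844))

# group by the coordinate pair and average Z; returns a plain data.frame
df_means <- aggregate(Z ~ x + y, data = df, FUN = mean)
```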

Upvotes: 1
