dww

Reputation: 31452

fastest way to conduct arithmetic on large rasters

I have a large number of large rasters (global extent, 250 m resolution; around 1e10 floating-point cells); the filenames are in a vector deltaX.files. I want to add each of these to another raster, with filename X.tif. Since this operation could take days to complete, I am wondering what the fastest way is to add these rasters.

I can think of a few methods, but I am unsure which is the most efficient option, or whether there is another better option than any of these.

So, my question is whether there is a way to optimize or significantly speed up arithmetic on large rasters. Note that I have a CUDA-enabled NVIDIA GPU, so solutions that can parallelize this on a GPU are very welcome. Note also that I am on a Linux system.


Some example methods:

Note that the following code block should be run before each of them, to set the default output file compression and memory allocation, and to start a parallel cluster:

library(raster)
rasterOptions(chunksize = 1e10, maxmemory = 4e10)
f.opt = '-co COMPRESS=ZSTD -co PREDICTOR=2'
f.type = 'FLT4S'
beginCluster()

Option (1)

for (f in deltaX.files) {
  s = stack('X.tif', f)
  calc(s, sum, filename = paste0('new_', f), datatype = f.type, options = f.opt)
}
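
As far as I understand from the raster documentation, calc() does not automatically use the cluster started by beginCluster(); it has to be dispatched to the workers via clusterR(). A sketch of Option (1) in that style (same f.type and f.opt as above):

# Variant of Option (1): clusterR() sends calc() to the workers started
# by beginCluster(); extra arguments are passed on to writeRaster()
for (f in deltaX.files) {
  s = stack('X.tif', f)
  clusterR(s, calc, args = list(fun = sum),
           filename = paste0('new_', f), datatype = f.type, options = f.opt)
}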

Option (2)

X = raster('X.tif')
for (f in deltaX.files) {
  dX = raster(f)
  overlay(X, dX, fun=sum, filename = paste0('new_', f), datatype = f.type, options = f.opt)
}

Option (3)

X = raster('X.tif')
for (f in deltaX.files) {
  dX = raster(f)
  Y = X + dX
  writeRaster(Y, filename = paste0('new_', f), datatype = f.type, options = f.opt)
}
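
(A caveat with Option (3): if X + dX does not fit in memory, raster writes the intermediate result to a temporary file, and writeRaster then copies it again, so this option may incur two full passes of disk I/O.)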

Option (4): Use gdal_calc.py instead of R

for (f in deltaX.files) {
  cmd = paste0("gdal_calc.py -A X.tif ", "-B ", f, " --outfile=", 'temp.tif', ' --calc="A+B"')
  system(cmd)
  system(paste('gdal_translate -ot Float32', f.opt, 'temp.tif', paste0('new_', f)))
  system('rm temp.tif')
}

Note that I've had trouble getting this last version to produce fully compressed output files, so the additional step of running gdal_translate on each file to compress it is also required. However, on a few test runs it seemed to produce corrupted values, so I am really most interested in an R solution rather than one using gdal_calc.py.
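
For what it's worth, gdal_calc.py also accepts creation options directly via --co, which should make the separate gdal_translate step unnecessary; this variant is untested on files of this size:

# Untested variant of Option (4): pass creation options straight to gdal_calc.py
for (f in deltaX.files) {
  cmd = paste0('gdal_calc.py -A X.tif -B ', f,
               ' --outfile=', paste0('new_', f),
               ' --calc="A+B" --type=Float32',
               ' --co="COMPRESS=ZSTD" --co="PREDICTOR=2"')
  system(cmd)
}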


Some dummy data to make this reproducible

X = raster(vals = rnorm(65000 * 160000), ncol = 160000, nrow = 65000)
writeRaster(X, 'X.tif',  datatype = f.type, options = f.opt)
for (i in 1:10) {
  dX = raster(vals = rnorm(65000 * 160000), ncol = 160000, nrow = 65000)
  writeRaster(dX, paste0('dX', i, '.tif'),  datatype = f.type, options = f.opt)
}
deltaX.files = paste0('dX', 1:10, '.tif')
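
(Caveat: rnorm(65000 * 160000) would need roughly 80 GB of RAM in one go, so on most machines the dummy files have to be written block by block; here is a sketch using a hypothetical helper make_dummy() built on raster's writeStart/writeValues/writeStop:)

# Hypothetical helper: fill a dummy raster block by block to avoid
# allocating ~1e10 values in memory at once
make_dummy = function(filename) {
  x = raster(ncol = 160000, nrow = 65000)
  bs = blockSize(x)
  x = writeStart(x, filename, datatype = f.type, options = f.opt)
  for (i in 1:bs$n) {
    x = writeValues(x, rnorm(bs$nrows[i] * ncol(x)), bs$row[i])
  }
  writeStop(x)
}
make_dummy('X.tif')
for (i in 1:10) make_dummy(paste0('dX', i, '.tif'))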

Upvotes: 4

Views: 998

Answers (1)

Robert Hijmans

Reputation: 47436

I would suggest using terra, a new package intended to replace raster (it is simpler and faster). It is now available from CRAN, but for the cutting edge you can install the development version from GitHub.

Probably the best approach is

library(terra)
r <- rast('X.tif')
for (f in deltaX.files) {
    s <- rast(f)
    x <- c(r, s)
    y <- app(x, sum, filename=paste0('new_', f), datatype="FLT4S",
           wopt=list(gdal="COMPRESS=LZW"))
}
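
The wopt list can also carry the compression settings from the question, e.g. wopt=list(gdal=c("COMPRESS=ZSTD", "PREDICTOR=2")), assuming your GDAL build includes ZSTD support.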

Perhaps the below is a bit faster, but the catch is that arithmetic like r + s has no filename argument. You can work around that:

library(terra)
r <- rast('X.tif')
for (f in deltaX.files) {
    s <- rast(f)
    x <- r + s
    tempfile <- sources(x)$source[1]
    file.rename(tempfile, paste0('new_', f))
}
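
(One caveat with this workaround: r + s only writes to a temp file when the result does not fit in memory. That will be the case for rasters of this size, but on small test data sources() may return an empty string, in which case you would write the result out with writeRaster() instead.)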

Alternatively, do it in one step (that would create a single enormous file, probably not desired):

r <- rast('X.tif')
s <- rast(deltaX.files)
# combine them as separate sub-datasets
x <- sds(r, s)
y <- sum(x, filename="file.tif")

Or like this (fast, but it goes to a temp file that you can rename when done; you cannot set all the write options):

z <- r + s

There is no GPU support yet ...

Upvotes: 3
