Reputation: 23
Let's say we have an array modeldata
(data comes from a terrestrial model), whose dimensions are:
> dim(modeldata)
[1] 67420 518
The first dimension corresponds to the cells of the grid, the second to an annual time series from 1500 to 2017.
The unusual length of the first dimension is because only the terrestrial (land) cells are stored, to save space.
With the raster package I was dealing with it in the following way:
> coords
[,1] [,2]
[1,] -179.75 -16.25
[2,] -179.75 65.25
[3,] -179.75 65.75
[4,] -179.75 66.25
[...,] ... ...
[67420,] 179.75 71.25
> wgs84 <- sp::CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")
> modeldata_spdf <- sp::SpatialPixelsDataFrame(coords,
data = data.frame(modeldata),
proj4string = wgs84)
> modeldata_brick <- raster::brick(modeldata_spdf)
Please do not judge me for this approach;
I am mainly interested in a comparable (performant) approach using the terra package.
Another approach that would also be fine is to use a SpatRaster
mask layer instead of the coordinates.
Thanks :-)
Upvotes: 1
Views: 1502
Reputation: 47491
The approach below is for situations where you compute values for a subset of locations, the cells of interest, on a known regular raster. For other cases, see values<-, rast(x, type="xyz"), and rasterize.
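For reference, a minimal sketch of the xyz route mentioned above (the coordinates and the column names A and B are made up; the points must fall on a regular lon/lat grid for this to work):
library(terra)
# hypothetical table: lon, lat, and one value column per layer
xyz <- data.frame(x=c(-165, -155, -145), y=c(-25, -35, -45), A=1:3, B=3:1)
# build a two-layer SpatRaster directly from the table
r_xyz <- rast(xyz, type="xyz", crs="+proj=longlat +datum=WGS84")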
Example data
library(terra)
r <- rast(res=10)
set.seed(0)
cells <- sample(ncell(r), 5)
xy <- xyFromCell(r, cells)
xy
# x y
#[1,] -165 -25
#[2,] 25 55
#[3,] -135 -55
#[4,] -155 -45
#[5,] -75 5
In this case we have 5 locations, and the model data below, md, has 2 time steps (layers) of 5 values each.
md <- cbind(A=1:5, B=5:1)
# compute cell numbers (not needed here as we have them already)
cells <- cellFromXY(r, xy)
# create empty raster
x <- rast(r, nlyr=2)
# assign values
x[cells] <- md
In your case it may be more efficient to keep the cell numbers instead of the coordinates.
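Applied to your data, a sketch could look like this; it assumes coords and modeldata from your question are in the workspace and that the grid is a regular global 0.5-degree lon/lat grid (which your coordinates suggest):
# template with the full grid geometry (assumed global 0.5-degree lon/lat grid)
template <- rast(nrows=360, ncols=720, crs="+proj=longlat +datum=WGS84")
# empty raster with one layer per year
out <- rast(template, nlyr=ncol(modeldata))
names(out) <- as.character(1500:2017)
# map the land coordinates to cell numbers once, then assign all values
land_cells <- cellFromXY(template, coords)
out[land_cells] <- modeldata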
The above should also work with the raster package, with minor modifications.
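For example, a rough (untested) translation of the small example above, assuming the same cell-index replacement works on a RasterBrick:
library(raster)
# 10-degree global grid with 2 layers, matching the terra example
b <- brick(nrows=18, ncols=36, nl=2, crs="+proj=longlat +datum=WGS84")
cells <- cellFromXY(b, xy)
b[cells] <- md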
Upvotes: 0