Reputation: 5590
I'm working with some fairly large image files (aerial survey mosaics, generally > 1 billion pixels), such that loading an entire image into memory would be a problem on my system. I would like to bring them into R piece-by-piece, such that I can process them in "grid-wise" sections.
NOTE: I'm not tied to a particular image format, so tiff, png, bmp, etc. would all be fine as inputs.
I can do something along these lines with readJPEG, but that requires loading the entire file into memory first, so it doesn't really solve my problem; it does, however, hopefully show what I'm trying to achieve.
image.file <- "~/Desktop/penguins.jpg"
grid.size <- 100
v <- 3
h <- 1
library( jpeg )
image <- readJPEG( image.file )[ seq.int( (v-1)*grid.size+1, v*grid.size, 1 ),
                                 seq.int( (h-1)*grid.size+1, h*grid.size, 1 ), ]
The above extracts only one section of the image, designated by grid.size, v, and h, so it would be easy to build this into a loop to analyse an image in sections.
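To make the intent concrete, here's roughly the loop I have in mind (a sketch only: it still calls readJPEG on the whole file once, which is exactly the step I want to avoid, and mean() just stands in for the real analysis):

library(jpeg)
image.file <- "~/Desktop/penguins.jpg"
grid.size  <- 100

## the whole image is still read once here -- this is the step I want to avoid
image <- readJPEG(image.file)
n.v <- floor(dim(image)[1] / grid.size)   ## grid rows    (image height / grid.size)
n.h <- floor(dim(image)[2] / grid.size)   ## grid columns (image width  / grid.size)

for (v in seq_len(n.v)) {
  for (h in seq_len(n.h)) {
    section <- image[((v - 1) * grid.size + 1):(v * grid.size),
                     ((h - 1) * grid.size + 1):(h * grid.size), ]
    cat(v, h, mean(section), "\n")   ## placeholder for the real analysis
  }
}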
Is it possible to achieve this without loading the entire image into memory? Something like read.csv, making use of the skip and nrows parameters, would be reasonable (it would at least only load the vertical sections one at a time, so much less memory would be needed than with readJPEG).
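Purely to illustrate the kind of strip-wise access I mean: if the pixel values were dumped to a plain-text matrix (a hypothetical pixels.csv, one image row per line), read.csv could pull out one horizontal strip at a time:

## hypothetical pixels.csv: one image row per line, values only, no header
grid.size <- 100
v <- 3   ## which strip to read

strip <- as.matrix(read.csv("pixels.csv",
                            header = FALSE,
                            skip   = (v - 1) * grid.size,
                            nrows  = grid.size))
## only grid.size rows are ever held in memory at once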
Upvotes: 0
Views: 891
Reputation: 1545
You can easily achieve this entirely in R for almost any image format with the help of RBioFormats, which can be installed from GitHub:
devtools::install_github("aoles/RBioFormats")
The chunk size can be specified in the subset argument to read.image(). The following example illustrates how to process an image piece-wise without ever loading the whole file into memory.
library(RBioFormats)
filename <- system.file("images", "sample-color.png", package="EBImage")
## first, get image dimensions from metadata
meta <- coreMetadata(read.metadata(filename))
xdim <- meta$sizeX
ydim <- meta$sizeY
## set chunk size
chunksize <- 300
## iterate over image chunks row-wise
for (i in 1:ceiling(ydim/chunksize)) {
  for (j in 1:ceiling(xdim/chunksize)) {
    x1 <- (j-1) * chunksize + 1
    x2 <- min( j * chunksize, xdim )
    y1 <- (i-1) * chunksize + 1
    y2 <- min( i * chunksize, ydim )
    cat(sprintf("[%d:%d, %d:%d] ", x1, x2, y1, y2))
    img <- read.image(filename, subset = list(X=x1:x2, Y=y1:y2))
    ## perform the actual image processing
    ## here we just print the min and max pixel intensities
    cat(range(img), "\n")
  }
}
You might also want to check out EBImage, an image processing toolbox for R. It provides functionality to view images and to apply various transformations and filters.
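A minimal sketch of that, assuming EBImage is installed from Bioconductor (gblur is just one example filter):

library(EBImage)
img <- readImage(system.file("images", "sample-color.png", package = "EBImage"))
display(img)                      ## view the image
blurred <- gblur(img, sigma = 2)  ## Gaussian blur, one example filter
display(blurred)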
Upvotes: 4
Reputation: 13118
If you have command-line image tools such as jpegtran (part of libjpeg) or ImageMagick installed, you can crop the image before reading it into R. An example using this image: http://www.worldatlas.com/worldmaps/worldpoliticallarge.jpg
To create the cropped image:
x <- 800 ## x and y are offsets
y <- 400
w <- 200 ## width and height of cropped image
h <- 100
filename <- "worldpoliticallarge.jpg"
outname <- "crop.jpg"
cmd <- sprintf("jpegtran -crop %dx%d+%d+%d -copy none %s > %s", w, h, x, y, filename, outname)
system(cmd)
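If the input isn't a JPEG, ImageMagick's convert accepts the same WxH+X+Y geometry, though it re-encodes the pixels rather than cropping losslessly. A sketch using the same offsets (the output name crop_magick.png is just a placeholder):

cmd <- sprintf("convert %s -crop %dx%d+%d+%d +repage %s",
               filename, w, h, x, y, "crop_magick.png")
system(cmd)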
Check to see if the new image contains the region we want:
library(jpeg)
original <- readJPEG(filename)
cropped <- readJPEG(outname)
all.equal(original[(y+1):(y+h), (x+1):(x+w), ], cropped)
# [1] TRUE
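To process a large image grid-wise with this approach, the crop can go inside a loop. A sketch, assuming ImageMagick's identify is available to read the dimensions without loading the pixels, and using mean intensity as a stand-in for the real analysis:

library(jpeg)
filename <- "worldpoliticallarge.jpg"

## image dimensions without loading the pixels (ImageMagick's identify)
dims <- strsplit(system(sprintf('identify -format "%%w %%h" %s', filename),
                        intern = TRUE), " ")[[1]]
xdim <- as.integer(dims[1])
ydim <- as.integer(dims[2])

tile <- 400  ## multiple of 16, so jpegtran's lossless crop starts exactly here
for (y in seq(0, ydim - 1, by = tile)) {
  for (x in seq(0, xdim - 1, by = tile)) {
    w <- min(tile, xdim - x)
    h <- min(tile, ydim - y)
    system(sprintf("jpegtran -crop %dx%d+%d+%d -copy none %s > tile.jpg",
                   w, h, x, y, filename))
    img <- readJPEG("tile.jpg")
    cat(sprintf("[%d,%d] mean intensity: %.3f\n", x, y, mean(img)))
  }
}
unlink("tile.jpg")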
Upvotes: 3