Lawrence Wagerfield

Reputation: 6621

How can I prevent OOMs when resizing JPGs with ImageMagick (without increasing memory)?

It seems the -limit option is ignored when resizing JPGs with ImageMagick.

The following command uses docker to simulate running ImageMagick on a machine with limited free memory (100MB): the ImageMagick command is passed a 4958 × 6198 JPEG (original file size: 5.5MB) to resize, with -limit memory 5MiB.

The result is that ImageMagick gets killed by the container's OOM killer, implying it attempts to allocate well over 100MB rather than staying within the ~5MB limit it was given. If you increase (or remove) the --memory 100m flag on docker run, the command succeeds:

# Download a 5.5MB JPEG (4958 × 6198)
curl 'https://images.unsplash.com/photo-1534970028765-38ce47ef7d8d?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&dl=trail-5yOnGsKUNGw-unsplash.jpg' \
     -o 5500kb-image.jpg

docker run --memory 100m \
           -v $(pwd):/imgs \
           dpokidov/imagemagick \
           /imgs/5500kb-image.jpg \
           -debug All -limit memory 5MiB -limit map 10MiB -resize 400 \
           /imgs/5500kb-image-resized.jpg
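
To confirm the kill really comes from the container's OOM killer (a sketch; the container name im-oom is just a placeholder introduced here), you can check the exit status and Docker's OOMKilled flag:

# Name the container so its state can be inspected after it dies
docker run --name im-oom --memory 100m \
           -v $(pwd):/imgs \
           dpokidov/imagemagick \
           /imgs/5500kb-image.jpg \
           -limit memory 5MiB -limit map 10MiB -resize 400 \
           /imgs/5500kb-image-resized.jpg

# Exit status 137 means the process was SIGKILLed; OOMKilled should report "true"
echo $?
docker inspect --format '{{.State.OOMKilled}}' im-oom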

How can I memory-bound ImageMagick when resizing JPEGs?

Upvotes: 1

Views: 471

Answers (2)

jcupitt

Reputation: 11220

I think you have the arguments in the wrong order.

If you're on Linux, /usr/bin/time is very convenient for measuring peak memory use. You are running this command:

$ /usr/bin/time -f %M:%e \
    convert \
        5500kb-image.jpg \
        -limit memory 5MiB -limit map 10MiB \
        -resize 400 5500kb-image-resized.jpg
346060:2.37

i.e. about 350MB of peak memory and 2.37s of elapsed time.

If you put the limit first, you see:

$ /usr/bin/time -f %M:%e \
    convert \
        -limit memory 5MiB -limit map 10MiB \
        5500kb-image.jpg -resize 400 5500kb-image-resized.jpg
105468:2.64

Now it's just 105MB peak, and about the same runtime. You need to set the limits before you load the image.
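
If you'd rather not worry about argument order, ImageMagick also reads resource limits from the environment; as far as I know, MAGICK_MEMORY_LIMIT and MAGICK_MAP_LIMIT correspond to -limit memory and -limit map, so something like this should behave the same way:

$ MAGICK_MEMORY_LIMIT=5MiB MAGICK_MAP_LIMIT=10MiB \
    convert 5500kb-image.jpg -resize 400 5500kb-image-resized.jpg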

As Mark says, vipsthumbnail is quite a bit faster and less memory-hungry. I see:

$ /usr/bin/time -f %M:%e \
    vipsthumbnail 5500kb-image.jpg --size 400 -o 5500kb-image-resized.jpg
133676:0.28

130MB and 0.28s.

That's still rather high. Your image is a progressive JPEG; these must be decoded entirely into memory before thumbnailing can start, so it's not possible to exploit tricks like JPEG shrink-on-load.
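
For reference, one way to spot progressive encoding and write a baseline copy with ImageMagick (a sketch; -interlace none asks for non-progressive output, and the output name simply matches the file used below):

$ identify -verbose 5500kb-image.jpg | grep Interlace   # progressive files typically report "Interlace: JPEG"
$ convert 5500kb-image.jpg -interlace none 5500kb-image-regular.jpg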

If I convert your image to a regular (baseline) JPEG, I see:

$ /usr/bin/time -f %M:%e \
    vipsthumbnail 5500kb-image-regular.jpg --size 400 -o 5500kb-image-resized.jpg
40200:0.09

40MB and roughly 0.1s of elapsed time.

vipsthumbnail can resize very large images with only a small amount of memory, for example:

$ vipsheader st-francis.jpg 
st-francis.jpg: 30000x26319 uchar, 3 bands, srgb, jpegload
$ /usr/bin/time -f %M:%e \
    vipsthumbnail st-francis.jpg --size 400 -o 5500kb-image-resized.jpg
49692:2.52

50MB and 2.5s.

Upvotes: 4

Mark Setchell

Reputation: 208052

A 4958×6198 image has 30,729,684 pixels. Each has R, G and B channels, making 92,189,052 samples. As ImageMagick normally stores each sample at 16-bit (2-byte) resolution, you need 180+MB of RAM just to hold the decoded image... along with space for the output image.
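
For reference, the arithmetic for the pixel data alone (no working buffers) works out like this:

$ echo $((4958 * 6198 * 3 * 2))
184378104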

You will generally find libvips to be more frugal with memory if that is your overriding concern. Example here showing relative memory requirements and how to measure them.

Upvotes: 2
