Rohan

Reputation: 593

Android create Bitmap + crop causing OutOfMemory error (eventually)

I am taking 3 pictures in my app before uploading them to a remote server. Each capture's output is a byte array. I currently convert this byte array to a bitmap and crop it (taking the centre square). I eventually run out of memory (after exiting the app, coming back, and performing the same steps). I am trying to re-use the bitmap object via BitmapFactory.Options as mentioned in the Android dev guides:

https://www.youtube.com/watch?v=_ioFW3cyRV0&list=LLntRvRsglL14LdaudoRQMHg&index=2

and

https://www.youtube.com/watch?v=rsQet4nBVi8&list=LLntRvRsglL14LdaudoRQMHg&index=3

This is the function I call when I'm saving the image taken by the camera.

public void saveImageToDisk(Context context, byte[] imageByteArray, String photoPath, BitmapFactory.Options options) {
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, options);
    int imageHeight = options.outHeight;
    int imageWidth = options.outWidth;
    int dimension = getSquareCropDimensionForBitmap(imageWidth, imageHeight);
    Log.d(TAG, "Width : " + dimension);
    Log.d(TAG, "Height : " + dimension);
    //bitmap = cropBitmapToSquare(bitmap);
    options.inJustDecodeBounds = false;

    Bitmap bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0,
            imageByteArray.length, options);
    options.inBitmap = bitmap;

    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension,
            ThumbnailUtils.OPTIONS_RECYCLE_INPUT);
    options.inSampleSize = 1;

    Log.d(TAG, "After square crop Width : " + options.inBitmap.getWidth());
    Log.d(TAG, "After square crop Height : " + options.inBitmap.getHeight());
    byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);
    options = null;

    File photo = new File(photoPath);
    if (photo.exists()) {
        photo.delete();
    }


try {
    FileOutputStream e = new FileOutputStream(photo.getPath());
    BufferedOutputStream bos = new BufferedOutputStream(e);
    bos.write(croppedImageByteArray);
    bos.flush();
    e.getFD().sync();
    bos.close();
} catch (IOException e) {
    // don't swallow the failure silently
    Log.e(TAG, "Failed to save image to " + photoPath, e);
}

}


public int getSquareCropDimensionForBitmap(int width, int height) {
    //If the bitmap is wider than it is tall
    //use the height as the square crop dimension
    int dimension;
    if (width >= height) {
        dimension = height;
    }
    //If the bitmap is taller than it is wide
    //use the width as the square crop dimension
    else {
        dimension = width;
    }
    return dimension;
}


 public Bitmap cropBitmapToSquare(Bitmap source) {
    int h = source.getHeight();
    int w = source.getWidth();
    if (w >= h) {
        source = Bitmap.createBitmap(source, w / 2 - h / 2, 0, h, h);
    } else {
        source = Bitmap.createBitmap(source, 0, h / 2 - w / 2, w, w);
    }
    Log.d(TAG, "After crop Width : " + source.getWidth());
    Log.d(TAG, "After crop Height : " + source.getHeight());

    return source;
}

How do I correctly recycle or re-use bitmaps? As of now I keep getting OutOfMemory errors.

UPDATE :

After implementing Colt's solution, I am running into an ArrayIndexOutOfBoundsException.

My logs are below

08-26 01:45:01.895    3600-3648/com.test.test E/AndroidRuntime﹕ FATAL EXCEPTION: pool-3-thread-1
Process: com.test.test, PID: 3600
java.lang.ArrayIndexOutOfBoundsException: length=556337; index=556337
        at com.test.test.helpers.Utils.test(Utils.java:197)
        at com.test.test.fragments.DemoCameraFragment.saveImageToDisk(DemoCameraFragment.java:297)
        at com.test.test.fragments.DemoCameraFragment_.access$101(DemoCameraFragment_.java:30)
        at com.test.test.fragments.DemoCameraFragment_$5.execute(DemoCameraFragment_.java:159)
        at org.androidannotations.api.BackgroundExecutor$Task.run(BackgroundExecutor.java:401)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:422)
        at java.util.concurrent.FutureTask.run(FutureTask.java:237)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:152)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:265)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
        at java.lang.Thread.run(Thread.java:818)

P.S.: I had thought of cropping byte arrays before, but I did not know how to implement it.

Upvotes: 1

Views: 682

Answers (2)

Colt McAnlis

Reputation: 3886

You shouldn't need to do any conversion to bitmaps, actually. Remember that your bitmap image data is RGBA_8888 formatted, meaning that every 4 contiguous bytes represent one pixel. As such:

// helpers to keep the math sane
int halfWidth = imgWidth >> 1;
int halfHeight = imgHeight >> 1;
int halfDim = dimension >> 1;

// get our min and max crop locations
int minX = halfWidth - halfDim;
int minY = halfHeight - halfDim;
int maxX = halfWidth + halfDim;
int maxY = halfHeight + halfDim;

// allocate our thumbnail; it's W * H * (4 bytes per pixel)
byte[] outArray = new byte[dimension * dimension * 4];

int outPtr = 0;
for (int y = minY; y < maxY; y++)
{
    for (int x = minX; x < maxX; x++)
    {
        // each pixel is 4 bytes wide, so the whole (row * width + column)
        // offset has to be scaled by 4
        int srcLocation = ((y * imgWidth) + x) * 4;
        outArray[outPtr + 0] = imageByteArray[srcLocation + 0]; // read R
        outArray[outPtr + 1] = imageByteArray[srcLocation + 1]; // read G
        outArray[outPtr + 2] = imageByteArray[srcLocation + 2]; // read B
        outArray[outPtr + 3] = imageByteArray[srcLocation + 3]; // read A
        outPtr += 4;
    }
}
// outArray now contains the cropped pixels.

The end result is that you can crop by hand, simply copying out the pixels you're looking for, rather than allocating a new bitmap object and then converting that back to a byte array.
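To illustrate the same index arithmetic outside Android, here is a minimal, self-contained sketch that runs the centre-crop loop over a tiny synthetic RGBA buffer (the class and method names `CropDemo` / `centerCropRgba` are mine, not from the original post):

```java
public class CropDemo {
    // Center-crop a square of side `dim` out of a raw RGBA_8888 buffer.
    static byte[] centerCropRgba(byte[] src, int imgWidth, int imgHeight, int dim) {
        int minX = (imgWidth >> 1) - (dim >> 1);
        int minY = (imgHeight >> 1) - (dim >> 1);
        byte[] out = new byte[dim * dim * 4];
        int outPtr = 0;
        for (int y = minY; y < minY + dim; y++) {
            for (int x = minX; x < minX + dim; x++) {
                // 4 bytes per pixel: scale the (row * width + column) offset by 4
                int srcLocation = ((y * imgWidth) + x) * 4;
                System.arraycopy(src, srcLocation, out, outPtr, 4);
                outPtr += 4;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // 4x2 "image"; pixel (x, y) stores the value 10*y + x in all 4 channels
        int w = 4, h = 2;
        byte[] src = new byte[w * h * 4];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int c = 0; c < 4; c++)
                    src[((y * w) + x) * 4 + c] = (byte) (10 * y + x);

        byte[] out = centerCropRgba(src, w, h, 2); // 2x2 center crop
        // pixels (1,0), (2,0), (1,1), (2,1) -> values 1, 2, 11, 12
        System.out.println(out[0] + " " + out[4] + " " + out[8] + " " + out[12]); // prints 1 2 11 12
    }
}
```

Note this only works if `src` really is raw RGBA_8888 data; feeding it encoded JPG bytes will index past the end of the array, which matches the exception in the update above.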

== EDIT:

Actually, the above algorithm assumes that your input data is the raw RGBA_8888 pixel data. It sounds like your input byte array is instead the encoded JPG data, so your second decodeByteArray call is what actually decodes the JPG file to the RGBA_8888 format. If that is the case, the proper way to resize is to use the techniques described in "Most memory efficient way to resize bitmaps on android?", since you're working with encoded data.

Upvotes: 3

Adam Fręśko

Reputation: 1064

Try setting more variables to null once you are done with them - this helps the garbage collector reclaim that memory;

after

 byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);

do:

bitmap= null;

after

 FileOutputStream e = new FileOutputStream(photo.getPath());

do

photo = null;

and after

 try {
        FileOutputStream e = new FileOutputStream(photo.getPath());
        BufferedOutputStream bos = new BufferedOutputStream(e);
        bos.write(croppedImageByteArray);
        bos.flush();
        e.getFD().sync();
        bos.close();
    } catch (IOException e) {
    }

do:

e = null;
bos = null;

Edit #1

If this fails to help, your only real solution is to actually use the memory monitor. To learn more, go here and here

P.S. There is another very dark solution, only for those who know how to navigate through the dark corners of off-heap memory. But you will have to follow this path on your own.

Upvotes: 0
