Ladislav

Reputation: 349

Android Camera2: combine burst-captured images from a camera with a short maximum exposure time to emulate long exposure times

I am trying to implement a long-exposure Android Camera2 solution described in this answer.

Basically, it tries to work around the limitation of the short maximum exposure time (100 ms) reported by my Samsung Galaxy S20+ Camera2 device with a burst capture of, for example, 100 images, where 100 × 100 ms = 10 000 ms = 10 s. Taking a photo with an exposure time of 10 s should be enough to see the stars of the night sky in the final image.

What I have done so far: I take a hardcoded 100 images in YUV format in a manual-mode burst capture with ISO 800 and a 100 ms exposure time.

When a YUV image is captured and becomes available, I convert it to a temporary Bitmap using the RenderScript Intrinsics Replacement Toolkit via its Toolkit.yuvToRgbBitmap method. This step works perfectly (and it's super fast), and a converted single image is displayed correctly when taken, for example, inside my room!

Once the second YUV => Bitmap image is available, I use the Toolkit.blend method with blend mode Add to combine these images into a single Bitmap image. When the next image arrives from the burst capture, I repeat the above step, i.e. I blend it into the previously combined image, and so on...

The problem is that when I test the above solution outside under the night sky, the final image is still completely black/dark and I see no stars on it, which is obviously wrong, because 10 s of total exposure time at ISO 800 should be more than enough! The built-in Samsung Camera app at ISO 800 / 10 s gives me a very nice and bright result.

I also tried other blending modes like Multiply, etc., but the result is the same: the final Bitmap image is totally black when taken outside under the night sky.

Any ideas, why it does not give the expected result?

The relevant part of my code (in C#):

public void OnImageAvailable(ImageReader? reader) {
    // AcquireNextImage instead of AcquireLatestImage, so that no frame
    // of the burst is silently dropped
    var img = reader?.AcquireNextImage();
    if (img == null) return;

    try {
        switch (img.Format) {
            case ImageFormatType.Yuv420888:
                // convert the incoming YUV image to an NV21 byte array
                // (assumes the common VU-interleaved layout of plane 2
                // with a pixel stride of 2)
                var planes = img.GetPlanes();
                var yBuffer = planes[0].Buffer;
                var vuBuffer = planes[2].Buffer;
                var ySize = yBuffer.Remaining();
                var vuSize = vuBuffer.Remaining();

                var nv21 = new byte[ySize + vuSize];
                _ = yBuffer.Get(nv21, 0, ySize);
                _ = vuBuffer.Get(nv21, ySize, vuSize);

                // convert the byte array to an RGBA Bitmap image
                var tempBmp = Toolkit.Instance.YuvToRgbBitmap(nv21, img.Width, img.Height, YuvFormat.Nv21);

                // combine the incoming images into a single Bitmap image;
                // the very first incoming image is used as the base
                if (FinalBitmap == null) FinalBitmap = tempBmp;
                else Toolkit.Instance.Blend(BlendingMode.Add, FinalBitmap, tempBmp);
                break;
            default:
                // format not supported!!!
                break;
        }
    } finally {
        // close the image on every path, otherwise the ImageReader
        // runs out of buffers during the burst
        img.Close();
    }
}

Upvotes: 0

Views: 284

Answers (1)

Eddy Talvala

Reputation: 18137

There are several problems here.

First, by using 'blend', you're averaging your images together. Each image has 1/100 of the signal level you want, so the average will still only be 1/100 of the signal, no matter how many images you blend together.

You need to add the images together - then their exposure times (ideally) sum and you get your desired 10-second exposure. I don't know if there's a standard mode for this, but it's also just a for loop, as sketched below.
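For illustration, a minimal sketch of such a summation loop in C# (the Accumulate/ToBitmap names and the fields are hypothetical; it assumes all frames are ARGB_8888 bitmaps of identical size):

// 32-bit per-channel accumulators: the tiny 8-bit values from 100 frames
// are summed, not averaged, so the signal builds up instead of staying
// at 1/100 of the target level
int[]? _sumR, _sumG, _sumB;

void Accumulate(Android.Graphics.Bitmap frame) {
    int w = frame.Width, h = frame.Height;
    var pixels = new int[w * h];
    frame.GetPixels(pixels, 0, w, 0, 0, w, h);

    _sumR ??= new int[w * h];
    _sumG ??= new int[w * h];
    _sumB ??= new int[w * h];

    for (int i = 0; i < pixels.Length; i++) {
        int p = pixels[i];
        _sumR[i] += (p >> 16) & 0xFF;  // A R G B packed int
        _sumG[i] += (p >> 8) & 0xFF;
        _sumB[i] += p & 0xFF;
    }
}

Android.Graphics.Bitmap ToBitmap(int w, int h) {
    var outPixels = new int[w * h];
    for (int i = 0; i < outPixels.Length; i++) {
        // clamp the straight sum to 8 bits only once, at the very end
        int r = Math.Min(_sumR![i], 255);
        int g = Math.Min(_sumG![i], 255);
        int b = Math.Min(_sumB![i], 255);
        outPixels[i] = (0xFF << 24) | (r << 16) | (g << 8) | b;
    }
    var bmp = Android.Graphics.Bitmap.CreateBitmap(w, h, Android.Graphics.Bitmap.Config.Argb8888);
    bmp.SetPixels(outPixels, 0, w, 0, 0, w, h);
    return bmp;
}

With straight sums in 32-bit accumulators nothing gets averaged away: 100 frames at (1,1,1) accumulate to (100,100,100), and clipping to 255 happens only once at the end instead of on every blend step.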

Second, the signal level is quite low! Let's say you photograph something that in the 10-second exposure comes out as middle gray - RGB value (128,128,128). Each individual image will only have 1/100 of that, so something like (1,1,1) ± 1. The conversion to 8 bits and the noise reduction done by the camera device when it produces YUV data may well erase a signal of that minimal magnitude. So you should at least turn off all noise reduction.
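For example, something like this (a hedged sketch against the Xamarin/.NET Android Camera2 bindings; cameraDevice stands for your already-opened CameraDevice):

// disable noise reduction and edge enhancement on the burst request,
// so faint single-pixel stars are not smoothed away before YUV output
var builder = cameraDevice.CreateCaptureRequest(CameraTemplate.Manual);
builder.Set(CaptureRequest.NoiseReductionMode, (int)NoiseReductionMode.Off);
builder.Set(CaptureRequest.EdgeMode, (int)EdgeMode.Off);
// check CameraCharacteristics.NoiseReductionAvailableNoiseReductionModes
// first - not every device supports the Off mode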

But ideally, you need to do this with RAW image buffers. Then you have at least 10 bits to work with (so each image would have values around (5,5,5) at least). That of course requires a lot more work on your part, since converting from RAW to RGB is much more tedious.
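A rough sketch of just the accumulation part for RAW buffers (hypothetical names; it assumes ImageFormatType.RawSensor images with little-endian 16-bit samples, and the demosaic step to RGB is deliberately left out):

long[]? _rawSum;

void AccumulateRaw(Android.Media.Image img) {
    var plane = img.GetPlanes()[0];
    var buf = plane.Buffer;            // 16-bit samples, little-endian
    int w = img.Width, h = img.Height;
    int rowStride = plane.RowStride;   // bytes per row, may exceed w * 2

    _rawSum ??= new long[w * h];       // 64-bit sums never clip

    var row = new byte[rowStride];
    for (int y = 0; y < h; y++) {
        buf.Position(y * rowStride);
        buf.Get(row, 0, Math.Min(rowStride, buf.Remaining()));
        for (int x = 0; x < w; x++) {
            // assemble one 16-bit sample and add it to the running sum
            int sample = row[2 * x] | (row[2 * x + 1] << 8);
            _rawSum[y * w + x] += sample;
        }
    }
}

After the burst you would scale _rawSum down to the output bit depth and demosaic; at 10+ bits per sample, a per-frame signal around (5,5,5) survives the summation instead of vanishing in 8-bit quantization.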

Upvotes: 0
