Reputation: 151
I have used the official sample for using the Camera2 API to capture a RAW sensor frame. The code is in Java, but I converted it to Kotlin with the help of Android Studio. I tested it, and I am able to take a picture and save it as a DNG file on my phone. No problem so far.
But what I really want is to retrieve some information about the picture; I don't care about saving it. I want to do the processing directly on my smartphone.
What I have tried so far is to get the byte array of the image.
In the function dequeueAndSaveImage, I retrieve the image from an ImageReader: image = reader.get()!!.acquireNextImage().
I suppose that it is here that I have to process the image. I tried to log image.width, image.height, and image.planes.count, and there was no problem.
By the way, since the format is RAW_SENSOR, image.planes.count is 1, corresponding to a single plane of raw sensor image data with 16 bits per color sample.
But when I try to log image.planes[0].buffer.array().size, for example, I get a FATAL EXCEPTION: CameraBackground with java.lang.UnsupportedOperationException.
And if I try to log the same thing in the function that saves the image to a DNG file, I get another type of error: FATAL EXCEPTION: AsyncTask #1 with java.lang.UnsupportedOperationException.
Am I even going the right way to retrieve information about the image? For example, the intensity of the pixels, the average, the standard deviation for each color channel, etc.
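For what it's worth, once I have the raw bytes, this is roughly the kind of statistic I want to compute. A rough sketch in Java (the class and method names are mine, not from the sample, and I'm assuming the 16-bit samples are packed little-endian, which may not match the actual buffer's byte order):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RawStats {
    // Mean of unsigned 16-bit samples packed in a byte array.
    static double mean(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        long sum = 0;
        int count = raw.length / 2;
        for (int i = 0; i < count; i++) {
            // getShort() is signed, so mask to recover the unsigned sample.
            sum += buf.getShort(2 * i) & 0xFFFF;
        }
        return (double) sum / count;
    }

    public static void main(String[] args) {
        // Two samples: 0 and 65535.
        byte[] raw = {0x00, 0x00, (byte) 0xFF, (byte) 0xFF};
        System.out.println(mean(raw)); // 32767.5
    }
}
```

The standard deviation would follow from a second pass over the same loop; per-channel statistics would additionally need the Bayer pattern of the sensor to know which sample belongs to which color.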
EDIT: I think I found the problem, although not the solution.
When I log image.planes[0].buffer.hasArray(), it returns false, which is why calling array() throws an exception.
But then, how do I get the data from the image?
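I suppose the plane's ByteBuffer is a direct buffer with no backing array, so the data would have to be copied out with get() instead of array(). A minimal sketch of what I mean, reproduced with a plain direct buffer (I'm not sure this is the right approach for camera buffers):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // A direct buffer, like the ones backing Image planes,
        // has no accessible backing array.
        ByteBuffer buffer = ByteBuffer.allocateDirect(8).order(ByteOrder.nativeOrder());
        System.out.println(buffer.hasArray()); // false

        try {
            buffer.array();
        } catch (UnsupportedOperationException e) {
            // Same exception as in the logs above.
            System.out.println("array() is unsupported for direct buffers");
        }

        // Bulk get() copies the contents out even without a backing array.
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        System.out.println(bytes.length); // 8
    }
}
```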
Upvotes: 1
Views: 1155