user816328

Maximum camera grab and display performance

I've researched this topic a lot, but it seems I'm doing something wrong, or my understanding is off.

I simply want to achieve the best performance (measured in FPS, for example) for grabbing a high-quality image from my Android smartphone's camera and showing it directly to the user, without any modifications.

Since I have a fairly capable smartphone (a Nexus 4), I assumed this would be a trivial task. But none of my efforts paid off: for example, I only achieved about 10 FPS for an 800x480 stream using the latest OpenCV framework.

So, is it possible to achieve more than 25 FPS just grabbing and displaying high-quality video from my phone's camera? If so, what is the best strategy? Device-specific considerations are welcome as well.

Update: I've been able to increase the grabbing performance to a nearly constant ~25 FPS at 1280x720 simply by setting the recording hint (`Camera.Parameters.setRecordingHint(true)`) while using a SurfaceTexture and TextureView as the camera sink.
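For reference, the setup described in the update looks roughly like this. This is a minimal sketch using the `android.hardware.Camera` API that was current at the time; the method name `startPreview` and the error handling are illustrative, not from the post:

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

public class FastPreview {
    // Sketch: preview into a SurfaceTexture (e.g. from a TextureView's
    // onSurfaceTextureAvailable callback) with the recording hint set.
    public static Camera startPreview(SurfaceTexture surfaceTexture) throws IOException {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPreviewSize(1280, 720);
        // The recording hint tells the driver a recording-style pipeline is
        // wanted, which can unlock a faster preview path on some devices.
        params.setRecordingHint(true);
        camera.setParameters(params);
        camera.setPreviewTexture(surfaceTexture);
        camera.startPreview();
        return camera;
    }
}
```

With a TextureView, this would typically be called from `onSurfaceTextureAvailable`, so the camera never delivers frames through a managed-memory callback at all.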

Yet I'm wondering whether it's possible to increase the performance even further. I tried different preview formats, but without luck. Maybe there is an implicit upper limit on grabbing performance that I'm unaware of.

Nevertheless, I'll continue to research and keep you informed. All sorts of information are still welcome!

Upvotes: 4

Views: 1120

Answers (1)

fadden

Reputation: 52353

The limiting factor is how quickly you can move a big pile of data around. There's a similar discussion here. You have three tasks: (1) acquire image, (2) save image ("capture"), and (3) display image. (If I've misunderstood your question, and you don't need #2, then the camera's Surface preview mode will do what you want at high speed.)

One memory-bandwidth-efficient approach, available on Android 4.3, is to feed the camera's Surface preview into the AVC encoder, save the encoded MPEG stream, and then decode the frames for display. The buffers from the camera can be fed into the MediaCodec encoder without having to copy them or convert the data to a different format. (See the CameraToMpegTest example.) This approach may be incompatible with one of your stated goals: the compression applied to each frame may reduce the quality below acceptable levels.
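The encoder side of that path can be sketched as below. This is a simplified fragment assuming API 18+; the class name `SurfaceEncoder` and the bitrate/frame-rate values are my own choices, and the output-buffer draining plus the EGL wiring from the camera to the Surface are omitted (see CameraToMpegTest for the full version):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

public class SurfaceEncoder {
    // Sketch: configure an AVC encoder that takes its input from a Surface.
    // Frames rendered onto the returned Surface go to the encoder without
    // passing through managed memory.
    public static Surface createEncoderSurface(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        return inputSurface;
    }
}
```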

If you need to keep the frames "whole", you have to copy the data around, possibly multiple times, and write it to disk. The larger the frame, the more data you have to move, and the slower everything goes. For example, the camera captures the data and writes it into a native buffer; the native buffer is copied to a managed buffer for the Dalvik VM; the buffer is written to disk; YUV is converted to RGB; RGB is displayed on screen by uploading the data to a texture and rendering it.
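To put rough numbers on that chain (my arithmetic, not from the answer): an NV21 or YV12 preview frame carries 12 bits per pixel, so each copy of a 1280x720 frame moves about 1.3 MB, and a four-stage copy chain at 25 FPS moves well over 100 MB/s:

```java
public class FrameBandwidth {
    // NV21/YV12 preview frames use 12 bits per pixel (1.5 bytes).
    public static long frameBytes(int width, int height) {
        return (long) width * height * 3 / 2;
    }

    // Total bytes moved per second for `copies` full-frame copies at `fps`.
    public static long bytesPerSecond(int width, int height, int fps, int copies) {
        return frameBytes(width, height) * fps * copies;
    }

    public static void main(String[] args) {
        System.out.println(frameBytes(1280, 720));              // 1382400 bytes per frame
        System.out.println(bytesPerSecond(1280, 720, 25, 4));   // 138240000 bytes per second
    }
}
```

This is why shrinking the frame, or cutting copies out of the chain (as the Surface-based paths do), has such a direct effect on the achievable frame rate.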

Upvotes: 2
