Reputation: 437
What's the best data path from a USB camera to an OpenGL texture?
The only way I know is usb camera -> (cv::VideoCapture) cv_image -> glTexImage2D(image.data)
Since the CPU has to process the image for every frame, the frame rate drops.
Is there any better way?
I'm using an NVIDIA Jetson TX2; is there an approach specific to that platform?
Upvotes: 1
Views: 187
Reputation: 162317
Since USB frames must be reassembled by the USB driver and the UVC protocol handler anyway, the data passes through the CPU regardless. The main concern is avoiding redundant copy operations.
If the frames are transmitted in M-JPEG format (which almost all UVC-compliant cameras support), then you have to decode them on the CPU anyway, since GPU video-decoding hardware usually doesn't cover JPEG (JPEG is also very easy to decode).
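For reference, a minimal CPU-side decode could look like the following sketch, which assumes libjpeg-turbo's TurboJPEG API is available (error handling omitted for brevity):

    // Sketch: decode one M-JPEG frame on the CPU into packed RGB,
    // ready for upload with glTexSubImage2D. Assumes libjpeg-turbo.
    #include <turbojpeg.h>
    #include <vector>

    std::vector<unsigned char> decode_mjpeg_frame(
            const unsigned char *jpegBuf, unsigned long jpegSize,
            int &width, int &height)
    {
        tjhandle tj = tjInitDecompress();

        int subsamp, colorspace;
        // Read the frame header to learn the image dimensions.
        tjDecompressHeader3(tj, jpegBuf, jpegSize,
                            &width, &height, &subsamp, &colorspace);

        std::vector<unsigned char> rgb(width * height * 3);
        // Decode straight to tightly packed RGB (pitch 0 = width * 3).
        tjDecompress2(tj, jpegBuf, jpegSize,
                      rgb.data(), width, 0, height,
                      TJPF_RGB, TJFLAG_FASTDCT);

        tjDestroy(tj);
        return rgb;
    }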
For YUV color formats it is advisable to create two textures: one for the Y channel and one for the UV channels. YUV formats are usually planar (i.e. each component is stored as a separate single-component image), so you'd make the UV texture a 2D array texture with two layers. Since the chroma components may be subsampled, you need separate textures to support the different resolutions; see the sketch below.
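A minimal allocation sketch for a planar format with 2x2 chroma subsampling (e.g. YUV420p), assuming an OpenGL 3.x+ context and an appropriate loader header; the resolution is just an example:

    // Sketch: one full-resolution texture for Y, one two-layer array
    // texture for the subsampled U and V planes.
    const GLsizei width = 1280, height = 720;  // example camera resolution
    GLuint texY, texUV;

    glGenTextures(1, &texY);
    glBindTexture(GL_TEXTURE_2D, texY);
    // Full-resolution luma plane, one 8-bit channel per pixel.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                 GL_RED, GL_UNSIGNED_BYTE, nullptr);

    glGenTextures(1, &texUV);
    glBindTexture(GL_TEXTURE_2D_ARRAY, texUV);
    // Two layers (U and V) at the subsampled chroma resolution.
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_R8, width / 2, height / 2,
                 2, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);

A fragment shader would then sample both textures and do the YUV-to-RGB conversion on the GPU, which keeps that per-pixel work off the CPU.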
RGB data goes into a regular 2D texture.
Use a pixel buffer object (PBO) for the transfer. By mapping the PBO into host memory (glMapBuffer) you can decode the images coming from the camera directly into that staging PBO. After unmapping, a call to glTexSubImage2D then transfers the image to GPU memory; on a unified memory architecture such as the TX2's, this "transfer" may be as simple as shuffling around a few internal buffer references.
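A per-frame streaming sketch of that idea, assuming the PBO and texture were created once at startup; pbo, tex, frameSize and decode_frame_into are illustrative names, with frameSize = width * height * 3 for packed RGB:

    // Sketch: stage a decoded frame in a PBO, then upload it.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    // Orphan the old storage so the driver need not stall on a
    // previous frame still in flight.
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameSize, nullptr,
                 GL_STREAM_DRAW);

    // Map the PBO into host memory and decode the camera frame
    // directly into it (decode_frame_into is a hypothetical routine).
    void *staging = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    decode_frame_into(staging);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // With a PBO bound, the data argument is an offset into the
    // buffer, not a host pointer.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGB, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);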
Since you didn't mention the exact API used to access the video device, it's difficult to give more detailed information.
Upvotes: 3