angel208

Reputation: 281

How to print a Kinect frame in OpenCV using OpenNI bindings

I'm trying to use OpenCV to process depth images from a Kinect. I'm using Python and primesense's bindings (https://pypi.org/project/primesense/), but I'm having a lot of trouble just showing the images I get from OpenNI. This is what I'm using:

import numpy as np
import cv2
from primesense import openni2

openni2.initialize("./Redist")     # can also accept the path of the OpenNI redistribution

dev = openni2.Device.open_any()

depth_stream = dev.create_color_stream()
depth_stream.start()

while(True):

    frame = depth_stream.read_frame()
    print(type(frame)) #prints <class 'primesense.openni2.VideoFrame'>

    frame_data = frame.get_buffer_as_uint8()
    print(frame_data) # prints <primesense.openni2.c_ubyte_Array_921600 object at 0x000002B3AF5F8848>

    image = np.array(frame_data, dtype=np.uint8)

    print(type(image)) # prints <class 'numpy.ndarray'>
    print(image) # prints [12 24  3 ...  1  3 12], I guess this is the array that makes the image

    cv2.imshow('image', image)

depth_stream.stop()
openni2.unload()

This is the output I'm getting, just a window with no image:

[screenshot of the empty window]

There is no documentation at all on how to use these bindings, so I'm kind of in a blind spot here. I thought that frame.get_buffer_as_uint8() was giving me an array ready to print, but it just returns a primesense.openni2.c_ubyte_Array_921600 object at 0x000002B3AF5F8848.

Actually, I looked at the bindings' code and found this:

def get_buffer_as_uint8(self):
    return self.get_buffer_as(ctypes.c_uint8)
def get_buffer_as_uint16(self):
    return self.get_buffer_as(ctypes.c_uint16)
def get_buffer_as_triplet(self):
    return self.get_buffer_as(ctypes.c_uint8 * 3)
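For reference, what comes back is just a raw ctypes array, whose repr is what gets printed; np.frombuffer can view the same memory as a NumPy array without copying. A minimal sketch with made-up pixel values:

```python
import ctypes
import numpy as np

# A small ctypes array behaves like the buffer get_buffer_as_uint8() returns:
# printing it shows only its repr, not the values it holds
buf = (ctypes.c_ubyte * 8)(12, 24, 3, 0, 0, 1, 3, 12)
print(buf)   # <__main__.c_ubyte_Array_8 object at 0x...>

# np.frombuffer wraps the same memory as a NumPy array without copying
arr = np.frombuffer(buf, dtype=np.uint8)
print(arr)   # [12 24  3  0  0  1  3 12]
```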

Has anyone used these bindings? Any idea of how to make them work? Thank you in advance.

Upvotes: 0

Views: 3096

Answers (1)

angel208

Reputation: 281

I found the solution:

Instead of using image = np.array(frame_data, dtype=np.uint8) to get the image, you have to use frame_data = frame.get_buffer_as_uint16(). Also, I was failing to set the image shape correctly.

FOR FUTURE REFERENCE

To take an image from a depth camera (the Kinect is not the only one) using the OpenNI bindings for Python, and process that image with OpenCV, the following code will do the trick:

import numpy as np
import cv2
from primesense import openni2
from primesense import _openni2 as c_api

openni2.initialize("./Redist")     # can also accept the path of the OpenNI redistribution

dev = openni2.Device.open_any()

depth_stream = dev.create_depth_stream()
depth_stream.start()

while True:

    frame = depth_stream.read_frame()
    frame_data = frame.get_buffer_as_uint16()

    # Wrap the raw 16-bit buffer without copying, then reshape it to the
    # 640x480 frame size
    img = np.frombuffer(frame_data, dtype=np.uint16)
    img.shape = (1, 480, 640)

    # Repeat the single depth channel three times and move the axes into
    # (height, width, channels) order so imshow accepts it
    img = np.concatenate((img, img, img), axis=0)
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 0, 1)

    cv2.imshow("image", img)
    if cv2.waitKey(34) & 0xFF == ord('q'):  # press q to quit
        break


depth_stream.stop()
openni2.unload()
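One note on display: the raw depth values are 16-bit (millimetres on the Kinect), so cv2.imshow tends to render them very dark. A common trick is to scale the depth into 8-bit first; a sketch with a synthetic frame and an assumed 4000 mm maximum range:

```python
import numpy as np

# Synthetic 16-bit depth frame standing in for frame.get_buffer_as_uint16();
# the 4000 mm maximum range is an assumption for this sketch
depth = np.random.randint(0, 4000, size=(480, 640), dtype=np.uint16)

# Clip and scale into 0-255 so cv2.imshow can render it as ordinary grayscale
depth_8u = (np.clip(depth, 0, 4000) * (255.0 / 4000.0)).astype(np.uint8)
```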

To use the color camera, you can use dev.create_color_stream() instead.
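A sketch of the color case, under the same assumptions (640×480 resolution, RGB byte order from OpenNI); the ctypes buffer below stands in for a real frame from get_buffer_as_uint8():

```python
import ctypes
import numpy as np

# Hypothetical stand-in for get_buffer_as_uint8() on a 640x480 color stream:
# 921600 bytes, one R, G and B byte per pixel (matching the array size
# printed in the question)
frame_data = (ctypes.c_uint8 * (640 * 480 * 3))()

img = np.frombuffer(frame_data, dtype=np.uint8).reshape(480, 640, 3)
# OpenNI delivers RGB while OpenCV expects BGR, so convert before showing:
# img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
print(img.shape)  # (480, 640, 3)
```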

Upvotes: 1
