
Reputation: 93

Is it possible to process images using the GPU in OpenCV?

So, I got a Python-related task from my university: we need to take a set of images (from a folder) and resize them as fast as possible. I managed to do it using cv2's resize function, but apparently it can be done a lot faster using the GPU. Unfortunately, I was unable to find the best way to do that with the OpenCV module.

I found this code, but it isn't OpenCV-related.

import numpy as np
from timeit import default_timer as timer
from numba import vectorize

# Compile an element-wise power function as a CUDA kernel.
@vectorize(['float32(float32, float32)'], target='cuda')
def pow(a, b):
    return a ** b

def main():
    vec_size = 100000000

    # Two identical random input vectors (single precision for the GPU).
    a = b = np.array(np.random.sample(vec_size), dtype=np.float32)

    # Time the GPU computation, including host<->device transfers.
    start = timer()
    c = pow(a, b)
    duration = timer() - start

    print(duration)

if __name__ == '__main__':
    main()

EDIT: I found something called "UMat". What are the benefits of using it? I tried to use it in my code this way:

image = cv2.UMat(cv2.resize(image, (0, 0), fx=0.5, fy=0.5)) # Resize image by half

Upvotes: 6

Views: 5127

Answers (1)

kocica

Reputation: 6465

Yes, you can use the GPU module in OpenCV, but unfortunately only from C++; there is no Python wrapper for it.

Solutions:

Upvotes: 4
