M.Selman SEZGİN

Reputation: 313

Using GPU while making predictions with Keras ResNet50

I'm using the ResNet50 model to vectorize images in order to find image similarities. To increase the program's speed I tried multiprocessing, but it failed because of Keras' backend logic. For now I'm running the vectorization code on a single machine, and its performance is not bad, but I need better. To reduce the time consumed by this vectorization I could use my GPU-equipped machine(s), but I couldn't find a way to use the GPU when calling the prediction method. Any help would be great.

Sample Code:

import numpy as np
import tensorflow
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications.resnet50 import preprocess_input

basemodel = tensorflow.keras.applications.ResNet50(weights='imagenet', include_top=False, pooling="avg", input_shape=self.input_shape)
model = tensorflow.keras.models.Model(inputs=basemodel.input, outputs=basemodel.output)

# image is loaded elsewhere in my class; self.input_shape is the model's
# expected input shape, e.g. (224, 224, 3)
img_data = img_to_array(image)
img_data = np.expand_dims(img_data, axis=0)
img_data = preprocess_input(img_data)
feature_vector = basemodel.predict(img_data)
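
For reference, several images can also be vectorized in a single call by stacking them into one batch, which reduces per-call overhead. A minimal sketch, assuming a hypothetical list of file paths image_paths and an input size of (224, 224, 3):

import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.resnet50 import preprocess_input

# Load and preprocess all images, then stack into a (N, 224, 224, 3) batch
batch = np.stack([img_to_array(load_img(p, target_size=(224, 224))) for p in image_paths])
batch = preprocess_input(batch)

# One predict call for the whole batch; with pooling="avg" the result has shape (N, 2048)
feature_vectors = basemodel.predict(batch)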

I need to speed up the basemodel.predict(img_data) part. Can I use a GPU for that purpose?

Upvotes: 0

Views: 947

Answers (1)

malelis

Reputation: 26

Since you imported the model from tensorflow.keras, you don't have to change your code to use GPUs. You just need the prerequisites listed here: https://www.tensorflow.org/install/gpu#software_requirements. You can check that your program sees a GPU device with something like this:

import tensorflow as tf

print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

Finally, you can check GPU utilisation by running the command nvidia-smi in a terminal.
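
If you want to make the placement explicit rather than rely on TensorFlow's default device selection, you can also wrap the prediction in a device scope. A minimal sketch, reusing basemodel and img_data from the question:

import tensorflow as tf

# With a visible GPU, predict() already runs there by default; this scope
# just makes the intended placement explicit.
with tf.device('/GPU:0'):
    feature_vector = basemodel.predict(img_data)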

Upvotes: 1
