Reputation: 73
Using the TensorFlow CIFAR CNN demonstration, I verified that TensorFlow was properly using my GPU. TF used the GPU to run model.fit(), and I saw about 50% GPU usage in HWiNFO64. However, I then add this cell to the notebook, which uses the model to predict the labels of images in the test set:
import numpy as np

for img in test_images:
    prediction = model.predict(np.expand_dims(img, axis=0))  # Here
    print(class_names[np.argmax(prediction)])
With this cell I see only about 1% GPU usage (which is used by Chrome and other processes). Is there a way for me to run model.predict() on the GPU, or are there any alternatives where I can get a model output for a single input?
Upvotes: 5
Views: 7221
Reputation: 56357
Your code is already running on the GPU. It is a misconception to think that GPU utilization tells you whether code is running on the GPU or not.
The problem is that making one predict call per image is very inefficient, because almost no parallelism can be exploited on the GPU. If you instead pass a whole array of images, GPU utilization will increase, since the images can be fed to the GPU in batches and processed in parallel.
GPUs only accelerate specific workloads, so your only real option is to pass more images per call to predict, as in the sketch below.
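As a rough illustration (not part of the original answer), here is a minimal sketch of batched prediction, assuming model, test_images, and class_names are defined as in the question; the batch_size value is just an example:

import numpy as np

# Predict on the whole test set in one call; Keras splits it into
# batches internally and runs each batch on the GPU in parallel.
predictions = model.predict(test_images, batch_size=64)

# predictions has shape (num_images, num_classes); take the argmax per row.
predicted_labels = np.argmax(predictions, axis=1)

for label in predicted_labels:
    print(class_names[label])

A single call like this amortizes the per-call overhead and gives the GPU enough work per batch to show meaningful utilization.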
Upvotes: 3