Reputation: 643
I want to train a sequential TensorFlow (version 2.3.0) model on a single NVIDIA graphics card (RTX 2080 Super). I am using the following code snippet to build and train the model. However, every time I run this code I see no GPU utilization. Any suggestions on how to modify my code so that it runs on one GPU?
strategy = tf.distribute.OneDeviceStrategy(device="/GPU:0")
with strategy.scope():
    num_classes = len(pd.unique(cats.No))
    model = BuildModel((image_height, image_width, 3), num_classes)
    model.summary()
    model = train_model(model, valid_generator, train_generator, EPOCHS, BATCH_SIZE)
Upvotes: 2
Views: 697
Reputation: 8102
Run the code below to see whether TensorFlow detects your GPU.
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow can see (CPU and any GPUs).
print(device_lib.list_local_devices())
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# Deprecated in TF 2.x but still available in 2.3; returns True if a GPU is usable.
tf.test.is_gpu_available()
# Jupyter/Colab shell escape to print the interpreter version.
!python --version
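If no GPU is listed, the usual culprit is a CPU-only TensorFlow build or a mismatched CUDA/cuDNN installation (TF 2.3 expects CUDA 10.1 and cuDNN 7.6). If the GPU is listed but utilization still looks idle, you can turn on device-placement logging to confirm that ops actually land on the GPU. A minimal sketch using the standard TF 2.x API (independent of your model code, which I don't have):

import tensorflow as tf

# Print the device each op is assigned to as it runs.
tf.debugging.set_log_device_placement(True)

# A small matrix multiply; with a working CUDA setup it should land on /GPU:0.
a = tf.random.normal((1000, 1000))
b = tf.random.normal((1000, 1000))
print(tf.matmul(a, b).device)

If the matmul reports a CPU device, the problem is the TensorFlow/CUDA installation rather than your training code, since OneDeviceStrategy cannot place work on a GPU that TensorFlow never detected.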
Upvotes: 2