doryfied

Reputation: 13

Running the same detection model on different GPUs

I recently ran into a bit of a glitch where my detection model, running on two different GPUs (a Quadro RTX 4000 and an RTX A4000) in two different systems, utilizes the GPU differently. The model uses only 0.2% of the GPU on the Quadro system and anywhere from 50 to 70% on the A4000 machine. I am curious why this is happening. The rest of the hardware on both machines is the same.

Additional information: the model uses a 3D convolution and is built on TensorFlow.

Upvotes: 1

Views: 146

Answers (1)

Timbus Calin

Reputation: 15033

Looks like the Quadro RTX 4000 is not actually using the GPU.

The method tf.test.is_gpu_available() is deprecated and can return True even when the GPU is not being used.

The correct way to verify GPU availability and usage is to check the output of this snippet:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
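If the GPU shows up in that list but utilization still looks wrong, a quick way to confirm that ops are actually being placed on the GPU is TensorFlow's device-placement logging. A minimal sketch (the matmul here is just an illustrative workload, not part of the original question):

import tensorflow as tf

# Log the device each op runs on; log lines ending in /device:GPU:0
# confirm the GPU is actually being used.
tf.debugging.set_log_device_placement(True)

# Illustrative workload: a small matmul to trigger a placement log.
a = tf.random.normal([1000, 1000])
b = tf.random.normal([1000, 1000])
c = tf.matmul(a, b)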

On the Quadro machine you should also run (in terminal):

watch -n 1 nvidia-smi

to see in real time how much GPU memory is being used.
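If no GPU shows up at all on that machine, it may also be worth confirming that the installed TensorFlow build was compiled with CUDA support. A minimal check, assuming TensorFlow 2.x:

import tensorflow as tf

# True only if this TensorFlow build was compiled with CUDA support;
# a CPU-only build returns False even when a GPU is physically present.
print(tf.test.is_built_with_cuda())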

Upvotes: 1
