Mihir Deshpande

Reputation: 105

How to ensure tensorflow is using the GPU

I installed CUDA v9.2 and the corresponding cuDNN manually in order to install tensorflow-gpu. But then I realized that tensorflow 1.8.0 requires CUDA 9.0, so I ran

pip install tensorflow-gpu

from the Anaconda prompt (base environment), where it automatically installed CUDA 9.0 and the corresponding cuDNN. I started Spyder from the same command prompt. So here is my code in Python 3.6, where I'm using Keras and TensorFlow to train on 8000-odd images:

# Convolutional Neural Networks
import tensorflow as tf   # needed for tf.device below

# Part 1 - Building the CNN
# Not important (the model built here is stored in `classifier`)

# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
        'dataset/training_set',
        target_size=(64, 64),
        batch_size=32,
        class_mode='binary')

test_set = test_datagen.flow_from_directory(
        'dataset/test_set',
        target_size=(64, 64),
        batch_size=32,
        class_mode='binary')
with tf.device("/gpu:0"):   # Notice THIS
    classifier.fit_generator(
            training_set,
            steps_per_epoch=8000,
            epochs=25,
            validation_data=test_set,
            validation_steps=2000)

Notice that, right before fitting at the end, I put the call inside

with tf.device("/gpu:0"):

I think this should ensure that it uses the GPU for training? I'm not sure, because changing "gpu:0" to "cpu:0" gives the exact same training time (18-20 minutes per epoch). How do I ensure that TensorFlow in Spyder uses my GPU?

I have an NVIDIA GTX 970, so it's CUDA-compatible. Also, I'm using Python 3.6; is that a problem? Should I create a separate Python 3.5 environment, install tensorflow-gpu in it the same way, and try that?

Upvotes: 5

Views: 36039

Answers (2)

Abhi25t

Reputation: 4633

Monitor the GPU usage in real time with:

nvidia-smi -l 1

This loops, refreshing the view every second.

If you do not want to keep past traces of the looped call in the console history, you can also do:

watch -n0.1 nvidia-smi

where 0.1 is the refresh interval in seconds. (Note that watch is a Unix utility; in a Windows Anaconda prompt, stick with nvidia-smi -l 1.)

If TensorFlow is using the GPU, you'll notice a sudden jump in memory usage, temperature, etc.
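
If you only want to watch those few numbers, a narrower query (assuming your driver's nvidia-smi supports --query-gpu, which recent versions do) keeps the output compact:

nvidia-smi --query-gpu=utilization.gpu,memory.used,temperature.gpu --format=csv -l 1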

Upvotes: 0

dimension

Reputation: 1030

import tensorflow as tf
import numpy as np

# Creates a graph pinned to the first GPU.
with tf.device('/device:GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

# Creates a session with log_device_placement set to True,
# so the console shows which device each op was assigned to.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# Runs the op. If the GPU is being used, the placement log should list
# the constants and the MatMul op on /device:GPU:0.
r = sess.run(c)
print(r)
assert np.all(r == np.array([[22., 28.], [49., 64.]]))

or see the TensorFlow guide on using GPUs (https://www.tensorflow.org/programmers_guide/using_gpu):

import tensorflow as tf

if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")

or this:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
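
Since the question is about Keras training, one more check (a minimal sketch, assuming the standalone keras package with the TensorFlow 1.x backend; adapt names to your own script) is to hand Keras a session that logs device placement before calling fit_generator:

import tensorflow as tf
from keras import backend as K

# Register a TF1 session that prints the device each op lands on.
config = tf.ConfigProto(log_device_placement=True)
K.set_session(tf.Session(config=config))

# Any model.fit / fit_generator call after this point will log whether
# its ops were placed on /device:GPU:0 or on the CPU.

If the log only ever mentions CPU devices, the GPU build is not being picked up, regardless of the with tf.device(...) wrapper.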

Upvotes: 13
