CHETAN RAJPUT

Reputation: 151

How to improve GPU usage in convolutional neural network?

I am using the Keras library to implement a CNN, with Anaconda 3 (Spyder 4) for execution.

I ran the command conda install -c anaconda keras-gpu, which installed cudatoolkit-10.0.130, cudnn-7.6.5 and tensorflow-gpu-2.0.0. My code wasn't working with tensorflow-gpu-2.0.0, so I downgraded to tensorflow-gpu-1.15.0. (I have also installed the latest CUDA toolkit directly on my machine, but I don't know which one Spyder is using: the system install or the conda environment.) The code now runs fine, but my GPU usage is only 1%. Am I installing the wrong combination of TensorFlow and CUDA? I have tried most of the things mentioned online but I am not getting anywhere.

My system info: CPU: i7 9th gen, GPU: RTX 2060, RAM: 16 GB, OS: Windows 10.

Is there any installation needed, or any code change, to get my GPU working? (I have run tf.config.list_physical_devices('GPU') to check my GPU and it shows a positive result, so TensorFlow detects my GPU, but I have no idea why it is not using it for execution.)
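For reference, this is roughly the check I ran (a minimal sketch; on tensorflow-gpu-1.15.0 the call may only be available as tf.config.experimental.list_physical_devices rather than tf.config.list_physical_devices):

import tensorflow as tf

# List the GPUs TensorFlow can see
gpus = tf.config.experimental.list_physical_devices('GPU')
print(gpus)
# A positive result looks something like:
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]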

P.S.: I have read online that most people attribute this to a bottleneck on the CPU side (even my CPU usage is low, so it would be appreciated if you could tell me how to improve that as well), and the suggested fix is to load your data in a way that keeps the GPU fed efficiently. I am using an image dataset, so can you tell me how to preload the dataset or implement parallelism so that batches are fed to the GPU rather than being generated on the fly? I am using Keras, as shown in the code below, so a snippet that is easy for a newbie like me would be a helpful kickstart (I have put a rough sketch of the kind of thing I mean after my code).

Code :

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator


classifier = Sequential()

# 32 filters of 3x3; in tf.keras the kernel size must be passed as a tuple,
# otherwise the third positional argument is interpreted as the stride
classifier.add(Convolution2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))

classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Flatten())

classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])


train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = 8000,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000)
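From what I have read, one way to do this might be to run the generator with more worker threads and a larger prefetch queue so batches are prepared while the GPU is busy. This is only a rough sketch of what I mean, not something I have verified on Windows; workers, use_multiprocessing and max_queue_size are existing fit_generator arguments as far as I can tell:

# Rough sketch: same training call, but with a larger batch and parallel
# generator workers so batches are prepared while the GPU is busy.
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 64,
                                                 class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = len(training_set),  # batches per epoch
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = len(test_set),
                         max_queue_size = 20,           # prefetch more batches
                         workers = 4,                   # parallel generator threads
                         use_multiprocessing = False)   # safer default on Windows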


Upvotes: 1

Views: 305

Answers (1)

Timbus Calin

Reputation: 15003

As per the official TensorFlow documentation, the following snippet should limit how much GPU memory TensorFlow allocates:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Restrict TensorFlow to only allocate 4 GB of memory on the first GPU
  try:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])  # change here for different values
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)
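If a fixed memory cap is not what you want, a commonly used alternative is to let the allocation grow on demand (a minimal sketch; tf.config.experimental.set_memory_growth should be available in both TF 1.15 and 2.x, but verify against your version):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Grow GPU memory allocation as needed instead of reserving a fixed block
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
  except RuntimeError as e:
    # Memory growth must be set before GPUs have been initialized
    print(e)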

Upvotes: 1
