bestie

Reputation: 41

Do I need to modify my keras code to run efficiently on gpu?

I have CUDA 8.0.61, the tensorflow-gpu package, and Keras installed. I'm training a 20-layer Keras model on 224x224 image data. When I run nvidia-smi in the terminal, I see that GPU memory is nearly full but GPU utilization stays low. When I try to fit the model, the machine becomes very slow.

I know that in order to use the GPU and switch between devices I should use code like the following:

import tensorflow as tf
from keras import backend as K

with K.tf.device('/gpu:0'):
    tf_config = tf.ConfigProto(allow_soft_placement=True)
    tf_config.gpu_options.allow_growth = True  # allocate GPU memory as needed instead of all at once
    sess = tf.Session(config=tf_config)
    K.set_session(sess)
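
(For reference, TensorFlow can also log which device each op is placed on; a minimal sketch using tf.ConfigProto's log_device_placement flag:)

import tensorflow as tf
from keras import backend as K

# log_device_placement prints the device each op is assigned to when the session runs
tf_config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
tf_config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=tf_config))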

Do I need to switch between CPU and GPU to increase my speed, using blocks such as with K.tf.device('/gpu:0'): and with K.tf.device('/cpu:0'):?

I'm storing the images in NumPy arrays. Do I need to use tf.array or tf.convert_to_tensor instead? Would that help?

Upvotes: 1

Views: 283

Answers (1)

anand_v.singh

Reputation: 2838

If tensorflow-gpu is installed on your system, it will automatically use the GPU for computation. The problem is that the GPU doesn't always have the data it needs available to it when it needs it, i.e. the bottleneck is in your input pipeline. tf.array and tf.convert_to_tensor are unlikely to help, since they only affect data that is already in memory. What you need are generators (considering this is Python): a generator is a function that returns an iterator, an object you can iterate over one value at a time.
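
For example, a plain Python generator that yields one batch of images at a time could look roughly like this (a sketch; load_image and the batch size are placeholders for your own loading code):

import numpy as np

def image_batch_generator(image_paths, labels, batch_size=32):
    """Yield (images, labels) batches one at a time instead of loading everything up front."""
    while True:  # Keras' fit_generator expects the generator to loop indefinitely
        for start in range(0, len(image_paths), batch_size):
            batch_paths = image_paths[start:start + batch_size]
            batch_labels = labels[start:start + batch_size]
            # load_image is a placeholder for however you read and resize a 224x224 image
            batch_images = np.stack([load_image(p) for p in batch_paths])
            yield batch_images, np.asarray(batch_labels)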

Generators and their iterators are already implemented in TensorFlow in the tf.data API (https://www.tensorflow.org/guide/datasets). You can use them directly and adapt your input pipeline accordingly.
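
A rough sketch of such a pipeline with tf.data (the array shapes and batch size here are just illustrative stand-ins for your data):

import numpy as np
import tensorflow as tf

# dummy stand-ins for your real data: N images of 224x224x3 and integer labels
images = np.zeros((100, 224, 224, 3), dtype=np.float32)
labels = np.zeros((100,), dtype=np.int64)

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
# prefetch(1) prepares the next batch on the CPU while the GPU works on the current one
dataset = dataset.shuffle(buffer_size=100).batch(32).prefetch(1)

With recent versions of tf.keras you can pass such a dataset directly to model.fit; with older Keras you would draw batches from the dataset's iterator instead.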

Upvotes: 2
