Kehe CAI

Reputation: 1221

tf.Variable can't pin to GPU?

My code:

import tensorflow as tf

def main():
    with tf.device('/gpu:0'):
        a = tf.Variable(1)
        init_a = tf.global_variables_initializer()

        with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
            sess.run(init_a)

if __name__ == '__main__':
    main()

The error:

InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'Variable': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.

Does this mean TensorFlow can't pin a Variable to the GPU?

Here is another thread related to this topic.

Upvotes: 1

Views: 1442

Answers (2)

ash

Reputation: 6751

int32 types are not (as of January 2018) comprehensively supported on GPUs. I believe the full error would say something like:

InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'Variable': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices: 
Assign: CPU 
Identity: CPU 
VariableV2: CPU 
         [[Node: Variable = VariableV2[container="", dtype=DT_INT32, shape=[], shared_name="", _device="/device:GPU:0"]()]]

And it's the DT_INT32 there that is causing you trouble, since you explicitly requested that the variable be placed on GPU but there is no GPU kernel for the corresponding operation and dtype.

If this was just a test program and you actually need variables of another type, such as float32, you should be fine. For example:

import tensorflow as tf

with tf.device('/gpu:0'):
  # Providing 1. instead of 1 as the initial value will result
  # in a float32 variable. Alternatively, you could explicitly
  # provide the dtype argument to tf.Variable()
  a = tf.Variable(1.)
  init_a = tf.global_variables_initializer()

  with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(init_a)

Alternatively, you could choose to explicitly place int32 variables on CPU, or just not specify any device at all and let TensorFlow's device placement select GPU where appropriate. For example:

import tensorflow as tf

v_int = tf.Variable(1, name='intvar')
v_float = tf.Variable(1., name='floatvar')
init = tf.global_variables_initializer()

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
  sess.run(init)

This will show that 'intvar' is placed on CPU while 'floatvar' is on GPU, with log lines like:

floatvar: (VariableV2)/job:localhost/replica:0/task:0/device:GPU:0
intvar: (VariableV2)/job:localhost/replica:0/task:0/device:CPU:0

Hope that helps.

Upvotes: 3

Stefan Lindblad

Reputation: 420

This means that TensorFlow cannot find the device you specified.

I assume you wanted to specify that your code should run on your GPU 0.

The correct syntax would be:

with tf.device('/device:GPU:0'):

The short form you are using is only allowed for the CPU.

You can also check this answer here: How to get current available GPUs in tensorflow?
It shows how to list the GPU devices that are recognized by TF.

And this lists the syntax: https://www.tensorflow.org/tutorials/using_gpu

Upvotes: 1
