Reputation: 371
Is it possible to have Tensorflow utilize the dedicated graphics card for training models while the integrated graphics card handles the non-ML tasks?
Upvotes: 0
Views: 393
Reputation: 10122
It is basically possible, assuming compatible hardware. Right now, the supported hardware includes major CPUs, Nvidia GPUs, and Google’s TPUs.
Choosing where to compute is called device pinning or placement. You can see how to actually do it with the current API under the “Placing operations on different devices” section of the current documentation.
Stolen from the above link:
# Operations created outside either context will run on the "best possible"
# device. For example, if you have a GPU and a CPU available, and the operation
# has a GPU implementation, TensorFlow will choose the GPU.
weights = tf.random_normal(...)

with tf.device("/device:CPU:0"):
  # Operations created in this context will be pinned to the CPU.
  img = tf.decode_jpeg(tf.read_file("img.jpg"))

with tf.device("/device:GPU:0"):
  # Operations created in this context will be pinned to the GPU.
  result = tf.matmul(weights, img)
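If you want to check where your ops actually end up, TensorFlow can log each op's assigned device. Here is a minimal sketch of that, assuming graph-mode execution as in the snippet above (on TensorFlow 2.x the same 1.x API is reachable via `tf.compat.v1`, which is what I use here so it runs on either major version):

```python
import tensorflow as tf

# Assumption: tf.compat.v1 is available (TF 1.13+ and all of TF 2.x).
tf1 = tf.compat.v1
tf1.disable_eager_execution()  # use 1.x-style graph execution

a = tf1.constant([[1.0, 2.0]])
b = tf1.constant([[3.0], [4.0]])
with tf1.device("/device:CPU:0"):
    c = tf1.matmul(a, b)  # explicitly pinned to the CPU

# log_device_placement makes TensorFlow print each op's assigned device
# to stderr when the graph runs, so you can confirm the pinning worked.
config = tf1.ConfigProto(log_device_placement=True)
with tf1.Session(config=config) as sess:
    result = sess.run(c)

print(result)  # [[11.]]
```

If an op is pinned to a device that has no implementation for it, the session raises an error unless you also set `allow_soft_placement=True` in the `ConfigProto`, which lets TensorFlow fall back to another device.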
You mention the integrated graphics card. In theory it could be used, but is it supported? It may be one day, with TensorFlow's new XLA architecture (still alpha at this stage).
Upvotes: 1