Ujjwal

Reputation: 1859

Selecting the device to be used by a graph in TensorFlow

Given code that uses multiple graphs, or multiple versions of the same graph, it is sometimes necessary to ensure that one particular graph uses only the CPU for computation, while some other graph uses only the GPU.

The basic question is:

How do I make sure that a particular graph uses only the CPU, or only the GPU, for its computations?
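
For concreteness, here is a minimal sketch of the setup in question, assuming TensorFlow 1.x as in the logs below (the graphs and ops are purely illustrative): two graphs, one meant to run only on the CPU and one only on the GPU.

import tensorflow as tf

# Graph meant to run only on the CPU
g_cpu = tf.Graph()
with g_cpu.as_default():
    with tf.device('/cpu:0'):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        cpu_result = tf.matmul(a, a)

# Graph meant to run only on the GPU
g_gpu = tf.Graph()
with g_gpu.as_default():
    with tf.device('/gpu:0'):
        b = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        gpu_result = tf.matmul(b, b)

with tf.Session(graph=g_cpu) as sess:
    print(sess.run(cpu_result))

with tf.Session(graph=g_gpu) as sess:
    print(sess.run(gpu_result))

tf.device pins individual ops to a device, but the question is how to guarantee that the CPU session never touches the GPU at all.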

There is not an exhaustive discussion of this topic on SO, hence this question.

I have tried a number of different approaches, and none of them seem to work, as outlined below.

Several related questions on SO have accepted answers, but as I show below with examples and outputs, they do not seem to work.

Approaches tried

Approach 1

The related question is (Run Tensorflow on CPU). The accepted answer is to create tf.Session() with the following configuration:

config = tf.ConfigProto(device_count={'GPU': 0})
sess = tf.Session(config=config)

The corresponding output is:

2017-05-18 13:34:27.477189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7715
pciBusID 0000:04:00.0
Total memory: 7.92GiB
Free memory: 7.80GiB
2017-05-18 13:34:27.477232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:927] DMA: 0 
2017-05-18 13:34:27.477240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:937] 0:   Y 
2017-05-18 13:34:27.477259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0)
2017-05-18 13:34:27.482600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0)
2017-05-18 13:34:27.848864: I tensorflow/compiler/xla/service/platform_util.cc:58] platform CUDA present with 1 visible devices
2017-05-18 13:34:27.848902: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 40 visible devices
2017-05-18 13:34:27.851670: I tensorflow/compiler/xla/service/service.cc:184] XLA service 0x7f0fd81d5500 executing computations on platform Host. Devices:
2017-05-18 13:34:27.851688: I tensorflow/compiler/xla/service/service.cc:192]   StreamExecutor device (0): <undefined>, <undefined>
2017-05-18 13:34:27.851894: I tensorflow/compiler/xla/service/platform_util.cc:58] platform CUDA present with 1 visible devices
2017-05-18 13:34:27.851903: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 40 visible devices
2017-05-18 13:34:27.854698: I tensorflow/compiler/xla/service/service.cc:184] XLA service 0x7f0fd82b4c50 executing computations on platform CUDA. Devices:
2017-05-18 13:34:27.854713: I tensorflow/compiler/xla/service/service.cc:192]   StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
2017-05-18 13:34:28.918980: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally

You can clearly see that the GPU is still being used, and that the XLA service is running on the GPU.
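
As an aside, device creation and op placement are separate things, and log_device_placement in the session config shows where each op actually runs. A minimal sketch, assuming TensorFlow 1.x:

import tensorflow as tf

# device_count={'GPU': 0} controls op placement, while
# log_device_placement prints the device assigned to each op.
config = tf.ConfigProto(device_count={'GPU': 0},
                        log_device_placement=True)

with tf.Session(config=config) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    # The placement log should show this matmul on /cpu:0, even
    # though the GPU device itself is still created as shown above.
    print(sess.run(tf.matmul(a, a)))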

Approach 2

The related question is again (Run Tensorflow on CPU). Another answer there states that the following environment variable can be set to force CPU usage:

CUDA_VISIBLE_DEVICES=""

When GPU computation is required, it can be unset.

The corresponding output is:

2017-05-18 13:42:24.871020: E tensorflow/stream_executor/cuda/cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_NO_DEVICE
2017-05-18 13:42:24.871071: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: nefgpu12
2017-05-18 13:42:24.871081: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: nefgpu12
2017-05-18 13:42:24.871114: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 367.48.0
2017-05-18 13:42:24.871147: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:369] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module  367.48  Sat Sep  3 18:21:08 PDT 2016
GCC version:  gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) 
"""
2017-05-18 13:42:24.871170: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 367.48.0
2017-05-18 13:42:24.871178: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 367.48.0
2017-05-18 13:42:25.159632: W tensorflow/compiler/xla/service/platform_util.cc:61] platform CUDA present but no visible devices found
2017-05-18 13:42:25.159674: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 40 visible devices
2017-05-18 13:42:25.162626: I tensorflow/compiler/xla/service/service.cc:184] XLA service 0x7f5798002df0 executing computations on platform Host. Devices:
2017-05-18 13:42:25.162663: I tensorflow/compiler/xla/service/service.cc:192]   StreamExecutor device (0): <undefined>, <undefined>
2017-05-18 13:42:25.223309: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally

You can see from this output that the GPU is not being used.
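
For reference, the same variable can be set from inside Python, as long as it happens before TensorFlow is imported, since the CUDA runtime reads it only once at initialization. A minimal sketch:

import os

# Must run before 'import tensorflow': CUDA_VISIBLE_DEVICES is read
# only once, when the CUDA runtime initializes.
os.environ['CUDA_VISIBLE_DEVICES'] = ''

import tensorflow as tf

with tf.Session() as sess:
    print(sess.run(tf.constant(42)))

The downside is that the choice is process-wide: unsetting the variable later in the same process has no effect, so a CPU-only graph and a GPU graph cannot be mixed this way.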

Approach 3

The related question is (Running multiple graphs in different device modes in TensorFlow). One answer gives the following solution:

# The config for CPU usage
config_cpu = tf.ConfigProto()
config_cpu.gpu_options.visible_device_list = ''
sess_cpu = tf.Session(config=config_cpu)

# The config for GPU usage
config_gpu = tf.ConfigProto()
config_gpu.gpu_options.visible_device_list = '0'
sess_gpu = tf.Session(config=config_gpu)

The output when using the CPU configuration from this solution is as follows:

2017-05-18 13:50:32.999431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7715
pciBusID 0000:04:00.0
Total memory: 7.92GiB
Free memory: 7.80GiB
2017-05-18 13:50:32.999472: I tensorflow/core/common_runtime/gpu/gpu_device.cc:927] DMA: 0 
2017-05-18 13:50:32.999478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:937] 0:   Y 
2017-05-18 13:50:32.999490: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0)
2017-05-18 13:50:33.084737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0)
2017-05-18 13:50:33.395798: I tensorflow/compiler/xla/service/platform_util.cc:58] platform CUDA present with 1 visible devices
2017-05-18 13:50:33.395837: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 40 visible devices
2017-05-18 13:50:33.398634: I tensorflow/compiler/xla/service/service.cc:184] XLA service 0x7f08181ecfa0 executing computations on platform Host. Devices:
2017-05-18 13:50:33.398695: I tensorflow/compiler/xla/service/service.cc:192]   StreamExecutor device (0): <undefined>, <undefined>
2017-05-18 13:50:33.398908: I tensorflow/compiler/xla/service/platform_util.cc:58] platform CUDA present with 1 visible devices
2017-05-18 13:50:33.398920: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 40 visible devices
2017-05-18 13:50:33.401731: I tensorflow/compiler/xla/service/service.cc:184] XLA service 0x7f081821e1f0 executing computations on platform CUDA. Devices:
2017-05-18 13:50:33.401745: I tensorflow/compiler/xla/service/service.cc:192]   StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
2017-05-18 13:50:34.484142: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally

You can see that the GPU is still being used.

Upvotes: 0

Views: 3667

Answers (1)

javidcf

Reputation: 59731

See issues #9201 and #2175. The fact that the GPU devices are created does not mean that your graph is necessarily running on the GPU. You can enforce CPU execution with device_count = {'GPU': 0} or tf.device, but the GPU devices are still created along with the session, just in case some op wants them. As for CUDA_VISIBLE_DEVICES, making it empty did not work for me either, but export CUDA_VISIBLE_DEVICES="-1" (before starting Python, or inside Python through os.environ before importing TensorFlow) did the trick (TensorFlow will output a warning about the GPU not being found, but it will work). You can see the documentation for CUDA_VISIBLE_DEVICES here.
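
A minimal sketch of the os.environ variant, assuming TensorFlow 1.x (the matmul is only for illustration):

import os

# Hide all CUDA devices before TensorFlow is imported; TensorFlow
# will warn that no GPU was found and then fall back to the CPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf

with tf.device('/cpu:0'):  # optional here, since no GPU is visible
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(y))  # runs on the CPU; no GPU device is created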

Upvotes: 1
