Dinesh

Reputation: 1565

Running the CUDA version of TensorFlow on CPU only

I am running TensorFlow on a cluster and installed the CUDA version. It works without any problem, but to use a GPU I have to request a GPU resource. Now I want to run on CPU only, without requesting GPU resources.

On import tensorflow as tf, I get the error:
ImportError: /home/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: undefined symbol: cuDevicePrimaryCtxRetain


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.

Since I have to run on CPU only, I set the environment variable CUDA_VISIBLE_DEVICES="". I did this both through export in bash and in the Python script, but I still get the same error.

How can I use the GPU version of TensorFlow on CPU only? Is it possible? Some other pages, e.g. Run Tensorflow on CPU, suggest changing the session config parameter. But since I get the error on import itself, I don't think that applies here.

Stack Trace:

File "<FileNameReplaced>", line 10, in <module>
    import tensorflow as tf
  File "/home/***/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/home/***/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 51, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/home/***/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/home/***/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/***/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/***/.pyenv/versions/2.7.13/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)

Additional Info:

Version: 1.1.0

Upvotes: 4

Views: 2915

Answers (1)

javidcf

Reputation: 59681

Take a look at issue #2175 in the TensorFlow repo, where this problem is discussed. What worked for me was to set CUDA_VISIBLE_DEVICES="-1", not "", following the documentation of the CUDA environment variables. It may produce some warnings when you first create a session, but the computation should work fine. If you are using Bash or similar, you can export the variable before running the program, as you say, or just with:

$ CUDA_VISIBLE_DEVICES="-1" python my_program.py

Alternatively, a probably more portable solution is to have Python itself set the environment variable before TensorFlow is imported by any module:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import tensorflow as tf
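
If you want to be certain the variable is in place before TensorFlow is imported anywhere in the process, one option (a minimal sketch using only the standard library) is to set it in the environment of a child interpreter that runs the actual program:

```python
import os
import subprocess
import sys

# Build a child environment with all GPUs hidden from CUDA.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="-1")

# The child process sees the variable from its very first statement,
# so no import-order mistakes are possible. Here the child just echoes
# the value back; in practice it would run your TensorFlow script.
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env,
)
print(out.strip().decode())  # -> -1
```

This is equivalent to the Bash one-liner above, but portable to platforms without a POSIX shell.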

Another user suggests creating your session in the following way:

import tensorflow as tf

# Ask TensorFlow not to create any GPU devices for this session.
session_conf = tf.ConfigProto(
    device_count={'CPU': 1, 'GPU': 0},
    allow_soft_placement=True,   # fall back to CPU for ops pinned to GPU
    log_device_placement=False
)

with tf.Session(config=session_conf) as sess:
    sess.run(...)

This should also give you more fine-grained control (e.g. I have two GPUs but only want TensorFlow to use one of them).
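
For that fine-grained case, CUDA_VISIBLE_DEVICES also accepts a comma-separated list of device indices, so you can expose a subset of GPUs instead of hiding them all. A sketch (again, this must run before TensorFlow is imported by any module):

```python
import os

# Expose only GPU 0 to TensorFlow. "0,1" would expose both GPUs,
# and "-1" (as above) hides all of them. The indices refer to CUDA's
# device enumeration order.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> 0
```

Inside the process, the single visible GPU then appears to TensorFlow as device /gpu:0 regardless of its physical index.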

Upvotes: 3
