Reputation: 71
I recently tried to upgrade my TensorFlow installation from 0.6 to 0.7.1 (Ubuntu 15.10, Python 2.7) because it is described as being compatible with more up-to-date CUDA libraries. Everything works well, including the simple test from the TensorFlow getting started page. However, I'm not able to use cuDNN. When running a program that uses cuDNN, I first get the warning
"Unable to load cuDNN DSO"
and later the program crashes with
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:73] Allocating 3.30GiB bytes.
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:83] GPU 0 memory begins at 0x704a80000 extends to 0x7d80c8000
F tensorflow/stream_executor/cuda/cuda_dnn.cc:204] could not find cudnnCreate in cudnn DSO; dlerror: /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so: undefined symbol: cudnnCreate
The files I downloaded for the CUDA installation were
I followed the instructions on the TensorFlow getting started page, with the exception of using cuDNN 7.0 instead of 6.5. $LD_LIBRARY_PATH is "/usr/local/cuda/lib64".
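For completeness, a sanity check along these lines should show whether the loader can see libcudnn at all (assuming the libraries are supposed to live in /usr/local/cuda/lib64):
ls -l /usr/local/cuda/lib64/libcudnn*   # the library files and their symlinks
ldconfig -p | grep cudnn                # whether libcudnn is in the ldconfig cache
echo $LD_LIBRARY_PATH                   # should contain /usr/local/cuda/lib64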
I have no clue why cudnnCreate is not found. Is there somebody who has successfully installed this configuration and can give me advice?
Upvotes: 7
Views: 5993
Reputation: 11
Ubuntu 14.04, cuDNN v5.0, CUDA 7.5
I got the same error and solved it in another way. Following the official getting started page, I installed cuDNN with the commands below, which basically just copy the files into the CUDA directory:
tar xvzf cudnn-7.5-linux-x64-v5.1-ga.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
But after doing this, if we use the ll command to list all the files in /usr/local/cuda/lib64 and compare them with the original files:
ll
it seems the soft links were broken by the copy, so I deleted them and recreated them manually (from inside /usr/local/cuda/lib64), like this:
sudo rm libcudnn.so.5 libcudnn.so
sudo ln -sf libcudnn.so.5 libcudnn.so
sudo ln -sf libcudnn.so.5.1.3 libcudnn.so.5
After that, execute
sudo ldconfig /usr/local/cuda/lib64
and it finally worked!
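A way to avoid the broken links in the first place (just a sketch, assuming the same archive layout) is to copy with -P so the symlinks are preserved, and then check the chain:
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64   # -P copies the symlinks instead of following them
ls -l /usr/local/cuda/lib64/libcudnn*                   # should show libcudnn.so -> libcudnn.so.5 -> libcudnn.so.5.1.3
sudo ldconfig /usr/local/cuda/lib64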
Upvotes: 0
Reputation: 1192
Download the cuDNN v5.1 Library for Windows 10 from the CUDA site, registering if necessary.
Copy cudnn64_5.dll (cuda\bin\cudnn64_5.dll) from that zip archive into
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin\
assuming C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0 is your installation path for the CUDA toolkit.
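To double-check the copy, the following from a Command Prompt should find the DLL (assuming the default install location, and that the CUDA bin directory is on your PATH for the second command):
dir "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin\cudnn64_5.dll"
where cudnn64_5.dll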
Upvotes: 0
Reputation: 81
I got the same error when I forgot to set the LD_LIBRARY_PATH and CUDA_HOME environment variables:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
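Note that exports like these only last for the current shell session; to make them permanent, they can be appended to ~/.bashrc, for example:
echo 'export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"' >> ~/.bashrc
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
source ~/.bashrc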
Upvotes: 8
Reputation: 71
The link sent by jorgemf (thank you) describes a Python 3.5 installation, and I almost switched to Python 3.5. My last attempt with my present installation was to copy the cuDNN libraries to /usr/local/cuda/lib64 once again.
And it worked! So the problem is solved, although I still don't know why I had it.
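A quick way to confirm that TensorFlow now picks up cuDNN (just a sketch, using the same Python 2.7 setup) is to open a session and watch the log output:
python -c "import tensorflow as tf; tf.Session()"   # the 'Unable to load cuDNN DSO' warning should be gone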
Upvotes: 0
Reputation: 1143
I am following these instructions to install TensorFlow on Arch Linux: https://github.com/ddigiorg/AI-TensorFlow/blob/master/install/install-TF_2016-02-27.md
It seems you need cuDNN v2 or above, which you can get by registering for their Accelerated Computing Developer Program (registration usually takes 2 days): https://developer.nvidia.com/accelerated-computing-developer
UPDATE: It seems you already have cuDNN v2.
Upvotes: 1