Reputation: 21
Keras is not using my GPU, even though TensorFlow seems to run fine with it. Following other folks' suggestions, I checked TensorFlow with:
import tensorflow
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Which gives
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 13541243483275802230
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 6694395576
locality {
bus_id: 1
links {
}
}
incarnation: 17715053295272939021
physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:08:00.0, compute capability: 6.1"
]
So far so good, but when I specify a classifier in Keras and train it, it runs at a glacial pace, with no sign of GPU acceleration:
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100, verbose=1)
I tried this:
with tensorflow.device('/gpu:0'):
    classifier.fit(X_train, y_train, batch_size=10, epochs=100)
With the same result. I don't know how to tell whether Keras is using the GPU other than by speed and the obvious CPU usage.
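One way to check GPU use independently of training speed (assuming an NVIDIA driver installation that provides nvidia-smi, which ships with the driver needed for tensorflow-gpu anyway) is to watch utilization while the fit call runs in another terminal:

```shell
# Refresh the nvidia-smi report every second. While training, a Keras job
# running on the GPU appears as a python process in the process list,
# and the "GPU-Util" column rises above 0%.
watch -n 1 nvidia-smi
```

If the python process never shows up there and GPU-Util stays at 0% during training, the work is staying on the CPU. (This is environment-dependent and needs an actual GPU, so there is nothing to assert programmatically here.)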
I also ran this example from the TensorFlow documentation, and in my terminal I can clearly see that it uses the GPU. It runs much quicker than the Keras example above.
import tensorflow
# Creates a graph.
a = tensorflow.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tensorflow.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tensorflow.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tensorflow.Session(config=tensorflow.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
I would greatly appreciate your kind help in finding out why Keras can't see my GPU.
I use Python 3.6.5, tensorflow-gpu 1.11.0 (plain tensorflow not installed), and keras 2.2.4. I should mention that I had to fiddle for a while to get TensorFlow to use the GPU, and I still don't know why it suddenly did, but it does so consistently now. My assumption was that Keras would automatically inherit this.
A.
Upvotes: 0
Views: 707
Reputation: 21
I am no longer entirely sure of my originally stated problem. I think Keras was indeed using the GPU, but that I had a significant bottleneck between the CPU and the GPU. When I increased the batch size, each epoch ran significantly faster, which seems to indicate the bottleneck is elsewhere (e.g. in feeding data to the GPU) rather than in the GPU compute itself. I have no idea how to debug this, though.
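The batch-size effect above is consistent with a fixed per-batch cost: if every batch pays a roughly constant overhead (kernel launch plus host-to-device copy) on top of the actual compute, then a larger batch size means fewer batches per epoch and so less total overhead. A back-of-the-envelope sketch (all constants below are illustrative assumptions, not measurements from my machine):

```python
# Hypothetical per-epoch time model: fixed per-batch overhead plus per-sample compute.
# The overhead and compute constants are made-up illustrative values.
import math

def epoch_time(n_samples, batch_size, overhead_per_batch=0.005, compute_per_sample=0.0001):
    """Estimated seconds per epoch: each batch pays a fixed overhead
    (launch + CPU-to-GPU transfer), every sample pays the same compute cost."""
    n_batches = math.ceil(n_samples / batch_size)
    return n_batches * overhead_per_batch + n_samples * compute_per_sample

small = epoch_time(8000, batch_size=10)   # 800 batches -> overhead dominates
large = epoch_time(8000, batch_size=100)  # 80 batches  -> overhead mostly amortized

print(round(small, 3), round(large, 3))  # -> 4.8 1.2
```

Under this toy model, going from batch_size=10 to batch_size=100 cuts the epoch time by 4x even though the GPU does the same amount of useful work, which matches what I observed qualitatively.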
Upvotes: 1
Reputation: 86650
You could try removing your keras and installing keras-gpu instead (available in Anaconda, maybe in pip too).
If you want to be sure, use with tensorflow.device('/gpu:0'): when defining the model:
with tensorflow.device('/gpu:0'):
    # ... layers for a functional API model ...
    classifier = Model(inputs, outputs)  # or classifier = Sequential()
    # ... layers added to a sequential model ...
Upvotes: -1