Reputation: 65
In Google Colab I've written an IPython notebook where I build a neural network model, fetch the data from my Google Drive, and train the model.
My code runs without errors and trains the model, but I see no speed improvement when I use the Colab GPU versus the default CPU. Am I using the GPU correctly, or can TensorFlow not use Google Colab's GPU?
Some code snippets that may relate to this question:
import tensorflow as tf
print(tf.__version__)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Flatten, Dense, TimeDistributed, ReLU, ConvLSTM2D, Activation, Dropout, Reshape
Result:
2.0.0-alpha0
Found GPU at: /device:GPU:0
Building the model:
with tf.device("/gpu:0"):
model = Sequential()
#layer1
model.add(
TimeDistributed(
TimeDistributed(
Conv2D(
filters=4, kernel_size=(1,10), strides=(1,10), data_format="channels_last"
)
), input_shape=(40, 5, 7, 100, 1), name="LLConv"
)
)
model.add(TimeDistributed(BatchNormalization(axis=4), name="LBNtes"))
model.add(TimeDistributed(ReLU(), name="LRelu"))
#print(model.output_shape)#(None, 40, 5, 7, 10, 4)
#layer2
model.add(
TimeDistributed(
ConvLSTM2D(
filters=4, kernel_size=(7,3), strides=(1,1),data_format="channels_last", return_sequences=True
), name="LConvLST"
)
)
model.add(TimeDistributed(BatchNormalization(axis=4), name="LBN2"))
model.add(TimeDistributed(Activation("tanh"), name="Ltanh"))
#print(model.output_shape)#(None, 40, 5, 1, 8, 4)
model.add(Reshape((40, 5, 8, 4), name="reshape"))
#layers3
model.add(
ConvLSTM2D(
filters=1, kernel_size=(4,4), strides=(1,1), data_format="channels_last", name="GConvLSTM", return_sequences=True
)
)
model.add(BatchNormalization(axis=3, name="GBN"))
model.add(Activation("tanh", name="Gtanh"))
#print(model.output_shape)#(None, 40, 2, 5, 1)
model.add(TimeDistributed(Flatten()))
#print(model.output_shape)#(None, 40, 10)
model.add(Flatten())
#layer4
model.add(Dense(10, name="GDense"))
model.add(BatchNormalization(axis=-1))
model.add(ReLU())
model.add(Dropout(0.5))
#layer5
model.add(Dense(1, activation="linear"))
model.compile(
loss=tf.keras.losses.MeanSquaredError(),
optimizer=tf.keras.optimizers.Nadam(lr=0.001, decay=1e-6),
metrics=['mae', 'mse'],
)
#model.summary()
Training the model:
EPOCHS = 300
BATCH_SIZE = 15
with tf.device("/gpu:0"):
history = model.fit(train_features, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(test_features,test_labels))
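As a sanity check for device placement, here is a minimal sketch (assuming tf.debugging.set_log_device_placement is available in this TF 2.x build, which I have not verified for the alpha) that logs which device each op runs on:
tf.debugging.set_log_device_placement(True)  # print the device for every op

# A small matmul as a probe; on a GPU runtime the log should show GPU:0
a = tf.random.normal((1000, 1000))
b = tf.random.normal((1000, 1000))
print(tf.matmul(a, b).device)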
Upvotes: 0
Views: 4581
Reputation: 86513
If you try to update TensorFlow by running pip install tensorflow-gpu, the binary you install may not be tuned for the GPU hardware that Colaboratory provides. Instead, you should use the TensorFlow version that comes bundled with Colab. Currently, this version is 1.15, but you can switch to version 2.x by running the magic %tensorflow_version 2.x. At some point in the future, TensorFlow 2.x will become the default.
For more information, see https://colab.research.google.com/notebooks/tensorflow_version.ipynb
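For example, a minimal cell (the %tensorflow_version magic only works in Colab and must run before tensorflow is first imported):
%tensorflow_version 2.x   # switch the bundled TensorFlow to the 2.x branch

import tensorflow as tf
print(tf.__version__)             # should report a 2.x version
print(tf.test.gpu_device_name())  # should report /device:GPU:0 on a GPU runtime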
Upvotes: 0
Reputation: 7353
Make sure that you have tensorflow-gpu installed.
Try this first in a new Colab notebook with the GPU runtime enabled:
# Uninstall tensorflow first
!pip uninstall tensorflow -y
# Install tensorflow-gpu (stable version)
!pip install tensorflow-gpu # stable
import tensorflow as tf
# Check version
print(tf.__version__)
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
UPDATE: It looks like you no longer need to install tensorflow-gpu in Colab: when you select a GPU runtime, the environment installs tensorflow-gpu under the hood, according to this video: Using GPUs in TensorFlow, TensorBoard in notebooks, finding new datasets, & more! (#AskTensorFlow).
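In that case, a quick check (assuming a GPU runtime is already selected; no extra installs needed) is:
import tensorflow as tf

# Lists the physical GPUs TensorFlow can see; an empty list means no GPU.
print(tf.config.experimental.list_physical_devices('GPU'))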
Upvotes: 1