Reputation: 1
A few days ago I wrote a BERT model for text classification using Google Colab Pro. Everything worked fine, but since yesterday I always get the output "GPU is NOT AVAILABLE". I haven't changed anything, but I noticed that errors now occur when installing tensorflow_hub and the Keras tf-models packages. There were no errors before.
! python --version
!pip install tensorflow_hub
!pip install keras tf-models-official pydot graphviz
I get these error messages:
ERROR: tensorflow 2.5.0 has requirement h5py~=3.1.0, but you'll have h5py 2.10.0 which is incompatible.
ERROR: tf-models-official 2.5.0 has requirement pyyaml>=5.1, but you'll have pyyaml 3.13 which is incompatible.
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from keras.utils import np_utils
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization as tokenization
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Enable memory growth so TensorFlow allocates GPU memory as needed
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before the GPUs are initialized
        print(e)
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Output:
Version: 2.5.0
Eager mode: True
Hub version: 0.12.0
GPU is NOT AVAILABLE
I would really appreciate it if someone could help me.
PS: I already tried updating h5py and PyYAML, but the GPU is still not available.
! pip install h5py==3.1.0
! pip install PyYAML==5.1.2
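For what it's worth, a quick way to double-check whether a GPU is attached to the runtime at all (just a sketch, assuming a standard Colab notebook where the runtime type is selected under Runtime > Change runtime type) would be:
! nvidia-smi
import tensorflow as tf
# An empty string here means TensorFlow cannot see any GPU device
print("GPU device name:", tf.test.gpu_device_name())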
Upvotes: 0
Views: 907
Reputation:
ERROR: tf-models-official 2.5.0 has requirement pyyaml>=5.1, but you'll have pyyaml 3.13 which is incompatible.
I was able to resolve the above issue by upgrading pip before installing tf-models-official, as shown below:
!pip install --upgrade pip
!pip install keras tf-models-official pydot graphviz
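To confirm that the version conflicts from the error messages are actually gone after the upgrade, a quick check (just a sketch) is to print the installed versions; in Colab it may also be necessary to restart the runtime after these installs so the new versions are picked up:
import yaml, h5py
print("PyYAML:", yaml.__version__)  # tf-models-official 2.5.0 requires pyyaml>=5.1
print("h5py:", h5py.__version__)    # tensorflow 2.5.0 requires h5py~=3.1.0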
The working code is shown below:
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from keras.utils import np_utils
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization as tokenization
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Enable memory growth so TensorFlow allocates GPU memory as needed
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before the GPUs are initialized
        print(e)
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Output:
1 Physical GPUs, 1 Logical GPUs
Version: 2.5.0
Eager mode: True
Hub version: 0.12.0
GPU is available
Upvotes: 1