Reputation: 2605
I'm not able to load the ELMo module from TensorFlow Hub, although I can load other modules and use them successfully. I'm running TF 2.0 on a GCP JupyterLab instance with GPUs. When I try this:
import tensorflow as tf
import tensorflow_hub as hub
elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True)
I get:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-caced7ee1735> in <module>
----> 1 elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True)
/usr/local/lib/python3.5/dist-packages/tensorflow_hub/module.py in __init__(self, spec, trainable, name, tags)
174 name=self._name,
175 trainable=self._trainable,
--> 176 tags=self._tags)
177 # pylint: enable=protected-access
178
/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in _create_impl(self, name, trainable, tags)
384 trainable=trainable,
385 checkpoint_path=self._checkpoint_variables_path,
--> 386 name=name)
387
388 def _export(self, path, variables_saver):
/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in __init__(self, spec, meta_graph, trainable, checkpoint_path, name)
443 # TPU training code.
444 with scope_func():
--> 445 self._init_state(name)
446
447 def _init_state(self, name):
/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in _init_state(self, name)
446
447 def _init_state(self, name):
--> 448 variable_tensor_map, self._state_map = self._create_state_graph(name)
449 self._variable_map = recover_partitioned_variable_map(
450 get_node_map_from_tensor_map(variable_tensor_map))
/usr/local/lib/python3.5/dist-packages/tensorflow_hub/native_module.py in _create_state_graph(self, name)
503 meta_graph,
504 input_map={},
--> 505 import_scope=relative_scope_name)
506
507 # Build a list from the variable name in the module definition to the actual
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs)
1451 return _import_meta_graph_with_return_elements(meta_graph_or_file,
1452 clear_devices, import_scope,
-> 1453 **kwargs)[0]
1454
1455
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/training/saver.py in _import_meta_graph_with_return_elements(meta_graph_or_file, clear_devices, import_scope, return_elements, **kwargs)
1461 """Import MetaGraph, and return both a saver and returned elements."""
1462 if context.executing_eagerly():
-> 1463 raise RuntimeError("Exporting/importing meta graphs is not supported when "
1464 "eager execution is enabled. No graph exists when eager "
1465 "execution is enabled.")
RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled.
Upvotes: 2
Views: 1527
Reputation: 4633
handle = "https://tfhub.dev/google/elmo/3"
If you want to use TF2 for loading/inference, these two methods are recommended:
hub.load() is the new low-level function to load a SavedModel from TensorFlow Hub (or compatible services). It wraps TF2's tf.saved_model.load():
model = hub.load(handle)
outputs = model(inputs)
or
model = hub.KerasLayer(handle, signature="sig")
outputs = model(inputs)
The hub.KerasLayer class calls hub.load() and adapts the result for use in Keras alongside other Keras layers. (It may even be a convenient wrapper for loaded SavedModels used in other ways.)
model = tf.keras.Sequential([
hub.KerasLayer(handle),
...])
But if you want to fine-tune, you have to load the legacy model in TF1 Hub format, which requires disabling TF2 behavior:
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
tf.disable_v2_behavior()
elmo = hub.Module(handle, trainable=True)
Source: https://www.tensorflow.org/hub/model_compatibility
Upvotes: 0
Reputation: 1238
The hub.Module API does not work in eager mode. Please check out https://www.tensorflow.org/hub/migration_tf2, and take note of the change in file format from the TF1 hub.Module format to the TF2 SavedModel format.
Upvotes: 2