Reputation: 32294
Update:
Whether it is a word, a sentence, or a phrase, the Universal Sentence Encoder always returns a vector of size 512. I would like to know why 512 and not something else.
I tried the example provided on the TensorFlow Hub page:
https://tfhub.dev/google/universal-sentence-encoder/2
I got a runtime error like this:
RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled.
The code that I tried is:
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

config = tf.ConfigProto()
session = tf.Session(config=config)

# Load the Universal Sentence Encoder module from TF Hub and embed two sentences
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
embeddings = embed(
    [
        "The quick brown fox jumps over the lazy dog.",
        "I am a sentence for which I would like to get its embedding",
    ]
)
print(session.run(embeddings))
How do I run this code correctly?
Upvotes: 2
Views: 2700
Reputation: 7379
Based on a discussion on GitHub: https://github.com/tensorflow/hub/issues/350
The solution below worked for me:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
The code above disables eager execution, so the graph-based hub.Module and Session code from the question can run.
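For completeness, here is a minimal sketch of how the fix combines with the code from the question. This assumes TF 2.x with tensorflow_hub installed; the initializer calls are my addition, since hub.Module in graph mode creates variables and lookup tables that need to be initialized before running the embeddings:

import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_eager_execution()  # switch back to TF1-style graph mode

# Build the graph: load the module and create the embedding op
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
embeddings = embed([
    "The quick brown fox jumps over the lazy dog.",
    "I am a sentence for which I would like to get its embedding",
])

with tf.Session() as session:
    # Initialize the module's variables and string-lookup tables
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(session.run(embeddings))  # prints two 512-dimensional vectors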
Upvotes: 1
Reputation: 7009
It's a matter of the TensorFlow version you are using.
In TensorFlow 2.0 you should use hub.load() or hub.KerasLayer().
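For example, a minimal sketch with hub.load(), assuming the TF2-compatible /4 release of the model (the /2 release linked in the question is in the older TF1 Hub format):

import tensorflow_hub as hub

# Load a TF2 SavedModel version of the Universal Sentence Encoder
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = embed([
    "The quick brown fox jumps over the lazy dog.",
    "I am a sentence for which I would like to get its embedding",
])
print(embeddings.shape)  # (2, 512): one 512-dimensional vector per sentence

hub.KerasLayer wraps the same model as a Keras layer, which is handy if you want to use the embeddings inside a tf.keras model.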
Upvotes: 6