How to run an RNN in Python using real (float) values as input data?

I'm trying to run an LSTM in Python. Does anyone know how to work around the "tf.nn.embedding_lookup(embedding, input_data)" call? My input variable "input_data" contains float values, but this method requires integer indices. What options do I have for feeding float data into an RNN?

I'm using the "tf.nn.dynamic_rnn" method to run the network. I've also tried the "legacy_seq2seq.rnn_decoder" method, but that did not work either.

embedding = tf.get_variable("embedding", [config.vocab_size, config.hidden_size])
# fails here: input_data holds floats, but embedding_lookup expects integer ids
inputs = tf.nn.embedding_lookup(embedding, input_data)
outputs, last_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=self.initial_state)
#outputs, last_state = seq2seq.rnn_decoder(inputs, initial_state, cell, loop_function=loop, scope='rnnlm')

Upvotes: 0

Views: 350

Answers (1)

william_grisaitis

Reputation: 5911

I'm not sure exactly where your code is failing, but if it's failing at tf.nn.dynamic_rnn, maybe you just need to specify the dtype parameter:

tf.nn.dynamic_rnn(cell, inputs, initial_state=self.initial_state, dtype=tf.float32)

as discussed in the docs: https://www.tensorflow.org/versions/r1.6/api_docs/python/tf/nn/dynamic_rnn
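For context, a minimal self-contained version of that call might look like the sketch below. The shapes, placeholder names, and the BasicLSTMCell choice are assumptions for illustration, not taken from your code; the point is just that, without an initial_state, dynamic_rnn needs dtype to build its zero state.

import tensorflow as tf

# Assumed shapes, purely for illustration: 32 sequences, 50 steps, 10 float features.
batch_size, num_steps, num_features, hidden_size = 32, 50, 10, 128

inputs = tf.placeholder(tf.float32, [batch_size, num_steps, num_features])
cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)

# With no initial_state supplied, dynamic_rnn uses dtype to create the zero state.
outputs, last_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)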

If your input_data is floating point, I'd reconsider your approach. Is your data categorical, like words in a language or names of people in the world? If yes, then I'd recommend mapping it to integer ids (e.g. int32), which is (a) more intuitive to most people and (b) what embedding_lookup expects. If not, then you probably don't want an embedding at all -- embeddings are (as far as I know) only for categorical data. If you just want to reduce the dimensionality of your data, then I'd consider some other dimensionality reduction scheme like PCA, or maybe just a wide first layer of your net with a narrow output.
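To make that concrete, here is a rough sketch of both options. The sizes, placeholder names, and the dense projection layer are assumptions for illustration, not something taken from your model:

import tensorflow as tf

vocab_size, hidden_size, num_features = 10000, 128, 10   # assumed sizes

# Option 1: categorical data -- encode it as integer ids, then embed.
word_ids = tf.placeholder(tf.int32, [None, None])             # [batch, time]
embedding = tf.get_variable("embedding", [vocab_size, hidden_size])
embedded = tf.nn.embedding_lookup(embedding, word_ids)        # float vectors come out
cell_a = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
with tf.variable_scope("categorical"):
    outputs_a, state_a = tf.nn.dynamic_rnn(cell_a, embedded, dtype=tf.float32)

# Option 2: continuous float features -- skip the embedding and feed the
# floats in directly, optionally resizing them with a dense ("wide") layer.
float_feats = tf.placeholder(tf.float32, [None, None, num_features])
projected = tf.layers.dense(float_feats, hidden_size)
cell_b = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
with tf.variable_scope("continuous"):
    outputs_b, state_b = tf.nn.dynamic_rnn(cell_b, projected, dtype=tf.float32)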

Upvotes: 0
