Reputation: 3861
I'm trying to run the Seq2Seq example from here: https://blog.keras.io/building-autoencoders-in-keras.html
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
My input is integer-encoded categorical data, e.g. [1, 23, 6, 12, 4, 0, 0, 0], with 25 categories and a fixed length of 1000.
So, the updated version of the code looks like:
MInput = Input(shape=(MAX_LEN, CATEGORY_NUMS))
encode_seq = LSTM(32)(MInput)
decode_seq = RepeatVector(MAX_LEN)(encode_seq)
decode_seq = LSTM(CATEGORY_NUMS, return_sequences=True)(decode_seq)
autoencoder = Model(MInput, decode_seq)
encoder = Model(MInput, encode_seq)
However, I'm getting an "Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2" error.
Adding return_sequences=True to the first LSTM layer, or removing the RepeatVector, also gives an incompatibility error.
I'm not sure how else to prepare my input.
Thanks!
Upvotes: 0
Views: 373
Reputation: 244
Your input X and output Y should be of shape (batch_size, timesteps, input_dim). Try printing their shapes and comparing them with the output shapes shown by model.summary().
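A minimal sketch of that shape check, assuming the question's values (MAX_LEN = 1000, CATEGORY_NUMS = 25) and a hypothetical batch of 64 random sequences. Integer-encoded sequences are 2-D (batch_size, timesteps), which explains the ndim=2 error; one-hot encoding each integer adds the input_dim axis the LSTM expects (keras.utils.to_categorical does the same step):

```python
import numpy as np

MAX_LEN = 1000        # fixed sequence length from the question
CATEGORY_NUMS = 25    # number of categories from the question
BATCH = 64            # hypothetical batch size for illustration

# Integer-encoded sequences such as [1, 23, 6, 12, 4, 0, 0, ...]
X_int = np.random.randint(0, CATEGORY_NUMS, size=(BATCH, MAX_LEN))
print(X_int.shape)  # (64, 1000) -- ndim=2, what the error complains about

# One-hot encode each integer so every sample becomes (timesteps, input_dim);
# keras.utils.to_categorical(X_int, num_classes=CATEGORY_NUMS) is equivalent.
X = np.eye(CATEGORY_NUMS)[X_int]
print(X.shape)  # (64, 1000, 25) == (batch_size, timesteps, input_dim)
```

With X shaped this way, it can be fed to the autoencoder as both input and target.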
Upvotes: 1