Reputation: 1639
I have a multi-layer LSTM autoencoder whose input is a 20-step time series with 4 attributes.
from keras.models import Sequential
from keras.layers import CuDNNLSTM, Dense, TimeDistributed

model = Sequential()
model.add(CuDNNLSTM(128, input_shape=(20, 4), return_sequences=True)) # encode 1
model.add(CuDNNLSTM(256, return_sequences=True)) # encode 2
model.add(CuDNNLSTM(512, return_sequences=True)) # encode 3 -- our final vector
model.add(CuDNNLSTM(256, return_sequences=True)) # decode 1
model.add(CuDNNLSTM(128, return_sequences=True)) # decode 2
model.add(TimeDistributed(Dense(4)))
model.compile(optimizer='adam', loss='mse')
When I take the output of encoder layer #3, its shape is (1, 20, 512).
How do I get a vector of shape (1,512) from this layer to use as the learned representation of the input time series?
Am I right in saying that the shape is (1,20,512) because the layer is producing one output vector for each time step, in which case I should be using the last output vector?
Upvotes: 0
Views: 197
Reputation: 627
Since you set return_sequences=True, the LSTM layer outputs one vector per time step of the sequence.
If you are only interested in the last time step, you can simply take the last 512-dimensional vector.
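A minimal sketch of that slicing, assuming the (1, 20, 512) layer output has already been fetched as a NumPy array (the array here is random stand-in data, not the real activations):

```python
import numpy as np

# Stand-in for the (1, 20, 512) output of encoder layer #3:
# batch of 1, 20 time steps, 512 features per step.
encoded_seq = np.random.rand(1, 20, 512).astype("float32")

# Keep only the final time step -> shape (1, 512)
last_vector = encoded_seq[:, -1, :]
print(last_vector.shape)  # (1, 512)
```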
However, if you don't need the following layers to process the full sequence, you can instead set return_sequences=False on the layer of interest, and it will directly output your desired shape of (1, 512).
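A sketch of that standalone-encoder variant, keeping only the three encoder layers from the question. This uses tf.keras.layers.LSTM rather than CuDNNLSTM (an assumption: in TensorFlow 2 the CuDNN kernels were folded into LSTM, and LSTM also runs on CPU):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import LSTM

# Encoder-only stack: same sizes as the question's layers #1-#3,
# but the last layer returns only its final time step.
encoder = Sequential([
    Input(shape=(20, 4)),
    LSTM(128, return_sequences=True),   # encode 1
    LSTM(256, return_sequences=True),   # encode 2
    LSTM(512, return_sequences=False),  # encode 3 -> (batch, 512)
])

x = np.random.rand(1, 20, 4).astype("float32")
vec = encoder.predict(x, verbose=0)
print(vec.shape)  # (1, 512)
```

To train it as an autoencoder you would still need the decoder layers (typically with a RepeatVector after the bottleneck); the sketch above only shows how return_sequences=False yields the (1, 512) representation directly.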
Upvotes: 1