Reputation: 5691
I was checking the code I found here, in the example at Multivariate Multi-Step LSTM Models -> Multiple Input Multi-Step Output. I altered the code to use binary_crossentropy as the loss and a sigmoid activation for the last layer.
from numpy import array
from numpy import hstack
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
# split a multivariate sequence into samples
def split_sequences(sequences, n_steps_in, n_steps_out):
    X, y = list(), list()
    for i in range(len(sequences)):
        # find the end of this pattern
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out - 1
        # check if we are beyond the dataset
        if out_end_ix > len(sequences):
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1:out_end_ix, -1]
        X.append(seq_x)
        y.append(seq_y)
    return array(X), array(y)
# define input sequence
in_seq1 = array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
# convert to [rows, columns] structure
in_seq1 = in_seq1.reshape((len(in_seq1), 1))
in_seq2 = in_seq2.reshape((len(in_seq2), 1))
out_seq = out_seq.reshape((len(out_seq), 1))
# horizontally stack columns
dataset = hstack((in_seq1, in_seq2, out_seq))
# choose a number of time steps
n_steps_in, n_steps_out = 3, 3
# convert into input/output
X, y = split_sequences(dataset, n_steps_in, n_steps_out)
n_features = X.shape[2]
# define model
model = Sequential()
model.add((LSTM(5, activation='relu', return_sequences=True, input_shape=(n_steps_in, n_features))))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# fit model
model.fit(X, y, epochs=20, verbose=0, batch_size=1)
The above code runs fine. But when I try to change n_steps_in and n_steps_out and use, for example, n_steps_in, n_steps_out = 3, 2, it gives:
ValueError: Dimensions must be equal, but are 2 and 3 for '{{node binary_crossentropy/mul}} = Mul[T=DT_FLOAT](binary_crossentropy/Cast, binary_crossentropy/Log)' with input shapes: [1,2], [1,3].
Why does this error come up, and how can I overcome it?
Upvotes: 1
Views: 824
Reputation: 22031
This is because your network is built to output 3D sequences of shape (None, 3, 1), one value per input time step (return_sequences=True followed by Dense(1)), while your targets have shape (None, 2).
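You can see the mismatch directly by printing the shapes (a quick check, assuming the X, y and model from the question with n_steps_in, n_steps_out = 3, 2):
print(X.shape)             # (6, 3, 2)  -> 6 samples, 3 input steps, 2 features
print(y.shape)             # (6, 2)     -> 2 target steps per sample
print(model.output_shape)  # (None, 3, 1) -> 3 predicted steps per sample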
The best and most automatic way to handle this situation correctly is to build an encoder-decoder structure. Below is an example:
from keras.layers import RepeatVector

model = Sequential()
model.add(LSTM(5, activation='relu', return_sequences=False,
               input_shape=(n_steps_in, n_features)))          # ENCODER: compress the input sequence into a single vector
model.add(RepeatVector(n_steps_out))                           # repeat that vector once per output step
model.add(LSTM(5, activation='relu', return_sequences=True))   # DECODER: unroll over exactly n_steps_out steps
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=20, batch_size=1)
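Because RepeatVector feeds the decoder exactly n_steps_out steps, the output length now follows the targets for any (n_steps_in, n_steps_out) pair. A quick way to confirm (a sketch, assuming the model above has just been fit):
model.summary()
print(model.output_shape)   # (None, 2, 1) when n_steps_out = 2
pred = model.predict(X[:1])
print(pred.shape)           # (1, 2, 1) -> one value per output step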
Upvotes: 1