Reputation: 831
First of all, I know there are tons of questions similar to this; I've tried to do what the answers suggest, but it seems I cannot solve it. I have a Keras Functional API model:
import keras

lstm_input = keras.layers.Input(shape=(1,4), name='lstm_input')
x = keras.layers.LSTM(50, name='lstm_0')(lstm_input)
x = keras.layers.Dropout(0.2, name='lstm_dropout_0')(x)
x = keras.layers.Dense(64, name='dense_0')(x)
x = keras.layers.Activation('sigmoid', name='sigmoid_0')(x)
x = keras.layers.Dense(1, name='dense_1')(x)
output = keras.layers.Activation('linear', name='linear_output')(x)
model = keras.Model(inputs=lstm_input, outputs=output)
adam = keras.optimizers.Adam(lr=0.0005)
model.compile(optimizer=adam, loss='mse')
And when I try to fit it, it throws this error:
ValueError: Error when checking input: expected lstm_input to have 3 dimensions, but got array with shape (4, 1)
This is my call to fit:
model.fit(X_aux['X_i'], X[i+1, 0])
# X_aux['X_i'].shape = (4, ) -- it's a numpy array
I've tried np.reshape([X_aux['X_i1']], (4, 1)), where its new shape is (4, 1), but it does not work. How can I solve this?
Upvotes: 0
Views: 756
Reputation: 309
Make sure the input you feed as X_aux['X_i'] is 3-dimensional.
The input to any RNN-based layer must be 3-dimensional, where the axes correspond to batch_size, time_step, and feature dimension, respectively.
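As a rough sketch with made-up values (assuming the model from the question, whose Input(shape=(1, 4)) means one time step and four features), the array passed to fit should be shaped like this:

import numpy as np

# toy sample: one batch element, one time step, four features
x = np.array([0.1, 0.2, 0.3, 0.4])   # shape (4,)
x = x.reshape(1, 1, 4)               # shape (batch_size=1, time_step=1, features=4)
print(x.shape)                       # (1, 1, 4)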
The reason reshaping to (4, 1) doesn't help is that the reshaped array is still 2-dimensional; you need 3.
Make sure you define batch_size, time_step, and feature dimension correctly, reshape X_aux['X_i'] accordingly, and then train the model again.
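For the shapes in your question, the fix could look roughly like this (an untested sketch; X_aux, X, and i are your variables, and I'm assuming one sample per fit call):

x_train = np.reshape(X_aux['X_i'], (1, 1, 4))   # (batch_size, time_step, features)
y_train = np.reshape(X[i + 1, 0], (1, 1))       # matching batch dimension for the target
model.fit(x_train, y_train)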
Upvotes: 1