Maks

Reputation: 55

How to define input_dim for Keras recurrent layers properly

I am trying to train a neural network to predict a time series. I am using the Sequential model to define my network structure. It looks like:

from keras.models import Sequential
from keras.layers import Dense, Activation, SimpleRNN, Embedding
from keras import optimizers
from keras import losses
model = Sequential()
#model.add(Dense(units=5, input_dim=3, activation = 'tanh'))
model.add(SimpleRNN(units=5, input_dim = 3, activation = 'tanh'))
model.add(Dense(units=16, activation='tanh'))
model.add(Dense(1, activation='linear'))
prop = optimizers.rmsprop(lr=0.01)
sgd = optimizers.sgd(lr=0.01, nesterov=True, momentum=0.005)
model.compile(optimizer=prop, loss='mean_squared_error')

It does not execute, and the returned error is:

ValueError: Error when checking input: expected simple_rnn_9_input to have 3 dimensions, but got array with shape (221079, 3)

When I use the commented-out Dense layer instead, everything is fine. I read the Keras documentation and saw that the examples use an Embedding layer, although I do not really understand why an Embedding layer should be necessary to use recurrent layers like SimpleRNN or LSTM.

train_set is a 2D array with 4 columns; the 4th one is the target column and the rest are inputs.

Is there any simple way to use Keras' recurrent layers together with traditional Dense layers? I would appreciate an explanation and some code examples.

Best regards, Maks

Upvotes: 1

Views: 256

Answers (1)

Andrew Lavers

Reputation: 4378

I am no expert on this, but this may help: SimpleRNN expects 3D input of shape (samples, timesteps, features), so your 2D training data must be reshaped before fitting.

import numpy as np

data = np.zeros((10, 4))
X = data[:, 0:3].reshape(-1, 1, 3)  # (samples, timesteps, features)
y = data[:, 3].reshape(-1, 1)       # (samples, 1)
print(X.shape)
print(y.shape)

prints:

(10, 1, 3)
(10, 1)

then:

model.fit(X, y)
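Putting it together with the model from the question, a minimal end-to-end sketch could look like the following. Note that `train_set` here is a random placeholder (the real data is not shown in the question), and I use `input_shape=(1, 3)` on the SimpleRNN to make the one-timestep-per-sample layout explicit:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN

# Hypothetical stand-in for train_set: 10 rows, 4 columns
# (first 3 columns are inputs, the 4th is the target).
train_set = np.random.rand(10, 4)

# SimpleRNN wants 3D input: (samples, timesteps, features).
X = train_set[:, 0:3].reshape(-1, 1, 3)  # one timestep per sample
y = train_set[:, 3].reshape(-1, 1)

model = Sequential()
model.add(SimpleRNN(units=5, input_shape=(1, 3), activation='tanh'))
model.add(Dense(units=16, activation='tanh'))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mean_squared_error')

model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X).shape)
```

No Embedding layer is needed: Embedding is for mapping integer token IDs to dense vectors (e.g. in NLP); with numeric time-series features you can feed the reshaped array straight into the recurrent layer.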

Upvotes: 1
