HereItIs

Reputation: 154

How do I know the correct format for my input data into my keras RNN?

I am trying to build an Elman simple RNN as described here.

I've built my model using Keras as follows:

model = keras.Sequential()
model.add(keras.layers.SimpleRNN(7,activation =None,use_bias=True,input_shape=
                             [x_train.shape[0],x_train.shape[1]]))
model.add(keras.layers.Dense(7,activation = tf.nn.sigmoid))

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
simple_rnn_2 (SimpleRNN)     (None, 7)                 105       
_________________________________________________________________
dense_2 (Dense)              (None, 7)                 56        
=================================================================
Total params: 161
Trainable params: 161
Non-trainable params: 0
_________________________________________________________________

My training data is currently of shape (15000, 7, 7). That is, 15000 instances of length-7 sequences of one-hot encodings, each encoding one of seven letters, e.g. [0,0,0,1,0,0,0], [0,0,0,0,1,0,0], and so forth.

The data's labels have the same format, since each letter predicts the next letter in the sequence: [0,1,0,0,0,0,0] has the label [0,0,1,0,0,0,0].

So, the training data (x_train) and training labels (y_train) are both shaped (15000, 7, 7).

My validation data x_val and y_val are shaped (10000, 7, 7), i.e. the same shape, just with fewer instances.
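For reference, data in this format can be generated with a small NumPy sketch (the function name and the wrap-around next-letter rule are illustrative, not from my actual code):

```python
import numpy as np

def make_one_hot_sequences(num_samples, seq_length=7, alphabet_size=7, seed=0):
    """Build random one-hot letter sequences shaped (num_samples, seq_length, alphabet_size)."""
    rng = np.random.default_rng(seed)
    letters = rng.integers(0, alphabet_size, size=(num_samples, seq_length))
    return np.eye(alphabet_size)[letters]

x_train = make_one_hot_sequences(15000)
# Each letter's label is the next letter in the alphabet (wrapping around),
# i.e. the one-hot vector shifted one position: [0,1,0,...] -> [0,0,1,...].
y_train = np.roll(x_train, 1, axis=2)

print(x_train.shape, y_train.shape)  # (15000, 7, 7) (15000, 7, 7)
```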

So when I run my model:

history = model.fit(x_train,
         y_train,
         epochs = 40,
         batch_size=512,
         validation_data = (x_val,y_val))

I get the error:

ValueError: Error when checking input: expected simple_rnn_7_input to have shape (15000, 7) but got array with shape (7, 7)

Clearly my input data is formatted incorrectly for input into the Keras RNN, but I can't think how to feed it the correct input.

Could anyone advise me as to the solution?

Upvotes: 1

Views: 1753

Answers (1)

Ankish Bansal

Reputation: 1900

  1. The SimpleRNN layer expects inputs of shape (seq_length, input_dim), which is (7, 7) in your case. The number of samples should not be part of input_shape.
  2. Also, if you want an output at each time step, you need to set return_sequences=True, which is False by default. This way you can compare the output at every time step against its label.
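To see what the layer computes, here is a minimal NumPy sketch of the Elman recurrence h_t = tanh(x_t W_x + h_{t-1} W_h + b) with random placeholder weights: keeping the hidden state at every step is what return_sequences=True does, so a (7, 7) input sequence yields a (7, 7) output.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_length, input_dim, units = 7, 7, 7

# Placeholder weights, analogous to SimpleRNN's kernel, recurrent kernel, and bias.
W_x = rng.normal(size=(input_dim, units))
W_h = rng.normal(size=(units, units))
b = np.zeros(units)

def elman_forward(x):
    """Run the Elman recurrence over one sequence, keeping the state at every step."""
    h = np.zeros(units)
    outputs = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_x + h @ W_h + b)
        outputs.append(h)  # keeping every step corresponds to return_sequences=True
    return np.stack(outputs)

x = np.eye(input_dim)[rng.integers(0, input_dim, size=seq_length)]  # one (7, 7) one-hot sequence
out = elman_forward(x)
print(out.shape)  # (7, 7)
```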

So the model architecture will be something like this:

model.add(keras.layers.SimpleRNN(7, activation='tanh',
                                 return_sequences=True,
                                 input_shape=[7, 7]))
model.add(keras.layers.Dense(7, activation='softmax'))
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
simple_rnn_12 (SimpleRNN)    (None, 7, 7)              105       
_________________________________________________________________
dense_2 (Dense)              (None, 7, 7)              56        
=================================================================
Total params: 161
Trainable params: 161
Non-trainable params: 0
_________________________________________________________________
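As a sanity check, the parameter counts in the summary can be reproduced by hand: SimpleRNN has an input kernel, a recurrent kernel, and a bias, while Dense has a kernel and a bias.

```python
units, input_dim = 7, 7

rnn_params = input_dim * units + units * units + units  # input kernel + recurrent kernel + bias
dense_params = units * units + units                    # kernel + bias

print(rnn_params, dense_params, rnn_params + dense_params)  # 105 56 161
```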

Now at training time, it expects data input and output of dims (num_samples, seq_length, input_dims) i.e. (15000, 7, 7) for both.

model.compile(loss='categorical_crossentropy', optimizer='adam')  # choose whichever loss you want
model.fit(x_train, y_train, epochs=2)

Upvotes: 1
