Carlton Banks

Reputation: 365

Why can't I train my model?

I am currently trying to build a linear regression network capable of mapping my input data to a desired output.

My input and output are currently stored as lists of matrices, each a numpy.ndarray.

The input dimension of the regression network is 400 and the output dimension is 13.

Each matrix on the input side has dimensions [400, x], as shown by print input[0].shape.

Each matrix on the output side has dimensions [13, x], as shown by print output[0].shape.
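For concreteness, here is a minimal sketch of how such a list could be constructed (the frame counts are made up, since x varies per matrix):

import numpy as np

# Hypothetical data: three matrices with different frame counts x
input = [np.random.rand(400, x) for x in (120, 95, 210)]
output = [np.random.rand(13, x) for x in (120, 95, 210)]

print input[0].shape   # (400, 120)
print output[0].shape  # (13, 120)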

The network I've currently defined looks like this:

print "Training!"
model = Sequential()
model.add(Dense(output_dim=13, input_dim=400, init="normal"))
model.add(Activation("relu"))
print "Compiling"
model.compile(loss='mean_squared_error', optimizer='sgd')
model.fit(input,output,verbose=1)

The problem occurs at the training stage.

It somehow takes a very long time and no progress information is shown; the system seems to stall and then terminates with this error message:

Traceback (most recent call last):
  File "tensorflow_datapreprocess_mfcc_extraction_rnn.py", line 169, in <module>
    model.fit(train_set_data,train_set_output,verbose=1)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 620, in fit
    sample_weight=sample_weight)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1034, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 961, in _standardize_user_data
    exception_prefix='model input')
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 51, in standardize_input_data
    '...')
Exception: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 270 arrays: [array([[ -1.52587891e-04,   3.05175781e-05,  -1.52587891e-04,
         -5.18798828e-04,   3.05175781e-05,  -3.96728516e-04,
          1.52587891e-04,   3.35693359e-04,  -9.15527344e-05,
          3.3...

I guess the error lies in the way I pass my input data, which to me is black magic. The documentation states:

https://keras.io/models/model/

fit(self, x, y, batch_size=32, nb_epoch=10, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None)

x: Numpy array of training data, or list of Numpy arrays if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to Numpy arrays.

y: Numpy array of target data, or list of Numpy arrays if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to Numpy arrays.

Which is what I have: a list of Numpy arrays. But how does it know which rows it has to read in? I don't know. I guess a numpy.ndarray is stored as a list of numpy arrays in which each array is a row?

It seems so according to this simple example:

Input:

import numpy as np

lis = []
output_data = np.random.rand(5,3)
output_data_1 = np.random.rand(5,2)
lis.append(output_data)
lis.append(output_data_1)
print output_data.shape
print output_data_1.shape
print lis

Output:

(5, 3)
(5, 2)
[array([[ 0.15509364,  0.20140267,  0.13678847],
       [ 0.27932102,  0.38430659,  0.87265863],
       [ 0.01053336,  0.28403731,  0.19749507],
       [ 0.95775409,  0.96032907,  0.46996195],
       [ 0.29515174,  0.74466708,  0.78720968]]), array([[ 0.34216058,  0.74972468],
       [ 0.97262113,  0.84451951],
       [ 0.72230052,  0.30852572],
       [ 0.47586734,  0.03382701],
       [ 0.37998285,  0.80772875]])]

So what am I doing wrong? Why can't I pass the data into the model?

Upvotes: 0

Views: 1188

Answers (1)

pyan

Reputation: 3707

Transpose your input numpy array. Keras requires the input array to be in the shape (number_of_samples, number_of_features).
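Since you have a whole list of (features, frames) matrices rather than a single array, you also need to combine them into one array before transposing. A minimal sketch of one way to do that (assuming input, output, and the compiled model from the question; not necessarily the only approach):

import numpy as np

# Stack all matrices along the frame axis, then transpose so that
# rows are samples (frames) and columns are features / targets.
X = np.concatenate(input, axis=1).T   # shape: (total_frames, 400)
Y = np.concatenate(output, axis=1).T  # shape: (total_frames, 13)

model.fit(X, Y, verbose=1)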

Upvotes: 1
