Reputation: 69
I ran into the following error while training a Keras Sequential model:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 45985 arrays: [array([[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
...
Here is the code, along with the data format I am using for X_train, y_train, X_test, and y_test:
print(X_train.shape)
>>(45985, 50, 50, 3)
print(X_test.shape)
>>(22650, 50, 50, 3)
print(y_train[0])
>>array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
model = Sequential()
model.add(Conv2D(64, kernel_size=3, activation="relu", input_shape=(50,50,3)))
model.add(Conv2D(32, kernel_size=3, activation="relu"))
model.add(Flatten())
model.add(Dense(10, activation="softmax"))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3)
Upvotes: 0
Views: 81
Reputation: 33410
As an alternative to one-hot encoding the labels, as suggested in the other answer, you can keep your labels as they are, i.e. sparse/integer labels, and use 'sparse_categorical_crossentropy' as the loss function instead. This saves memory, especially when you have many samples and/or classes. Note that either way you still need to convert your labels to a single NumPy array.
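For example, a minimal sketch, assuming the integer class labels start out in a plain Python list called labels (a hypothetical name for however you load them):
import numpy as np

# 'labels' is a hypothetical placeholder: one integer class index per sample.
y_train = np.array(labels, dtype=np.int32)  # shape: (samples,), not one-hot

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# y_test must be converted to sparse integer labels the same way.
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3)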
Upvotes: 0
Reputation: 56347
Your y_train is a list of NumPy arrays, but it should be a single NumPy array with shape (samples, 10). You can convert it with:
import numpy as np

y_train = np.array(y_train, dtype=np.float32)
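A quick sanity check, for example:
print(y_train.shape)  # now a single array with a .shape, instead of a Python list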
Then remember to one-hot encode your labels (they look like integer labels):
from keras.utils import to_categorical
y_train = to_categorical(y_train)
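Putting the two steps together, a minimal sketch; note that to_categorical infers the number of classes from the largest label value, so the final Dense layer needs one unit per class (num_classes below is just a variable introduced for illustration):
import numpy as np
from keras.utils import to_categorical

y_train = np.array(y_train)        # single array instead of a list of arrays
y_train = to_categorical(y_train)  # one-hot encode: shape (samples, num_classes)

num_classes = y_train.shape[1]
# The last layer must output one unit per class, e.g.:
# model.add(Dense(num_classes, activation="softmax"))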
Upvotes: 2