Reputation: 413
I've been working on a simple convolutional neural network model, but the output doesn't seem to match the shape I expect.
from keras.layers import Input, Conv2D, MaxPooling2D, Reshape, Flatten, Dense, Dropout, Activation
from keras.models import Sequential
from keras.optimizers import Adam
model_CL = Sequential([
Dense(64, activation = 'relu', input_shape = (200, 4, 1)),
Conv2D(64, kernel_size = (3, 3), activation = 'relu', padding = 'same'),
MaxPooling2D(pool_size = (2, 2), strides = 2, padding = 'valid'),
Dropout(rate=0.3),
Conv2D(64, kernel_size = (5, 5), activation = 'relu', padding = 'same'),
Flatten(),
Dense(2, activation = 'softmax')
])
model_CL.compile(loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'], optimizer = 'Adam')
model_CL.summary()
from keras.callbacks import EarlyStopping, ModelCheckpoint
es_CL = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
mc_CL = ModelCheckpoint('best_model_CL.h5', monitor='val_acc', mode='max', verbose=1, save_best_only=True)
epochs = 50
hist_CL = model_CL.fit(CL_train_input, CL_train_label, validation_data=(CL_validation_input, CL_validation_label), batch_size=32, epochs=epochs, verbose=0, callbacks=[es_CL, mc_CL])
So my input size doesn't seem to be the problem. My training input has shape (13630, 200, 4, 1), where 13630 is the number of samples, and my training labels have shape (13630, 2). I expected the model to want targets of shape (2,), but instead it seems to expect (1,).
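For reference, the relevant shapes can be checked like this (all names are the ones from the code above):

print(model_CL.output_shape)    # (None, 2)
print(CL_train_input.shape)     # (13630, 200, 4, 1)
print(CL_train_label.shape)     # (13630, 2)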
So my error comes out like this.
Error when checking target: expected dense_28 to have shape (1,) but got array with shape (2,)
Just for reference, here's the summary for my model:
Model: "sequential_14"
Layer (type)                 Output Shape              Param #
=================================================================
dense_27 (Dense)             (None, 200, 4, 64)        128
conv2d_27 (Conv2D)           (None, 200, 4, 64)        36928
max_pooling2d_14 (MaxPooling (None, 100, 2, 64)        0
dropout_14 (Dropout)         (None, 100, 2, 64)        0
conv2d_28 (Conv2D)           (None, 100, 2, 64)        102464
flatten_13 (Flatten)         (None, 12800)             0
dense_28 (Dense)             (None, 2)                 25602
=================================================================
Total params: 165,122
Trainable params: 165,122
Non-trainable params: 0
I'm not sure why it's expecting (1,).
Upvotes: 0
Views: 1104
Reputation: 116
It sounds like your training_label is one-hot encoded, so you want to use categorical_crossentropy instead of sparse_categorical_crossentropy, which expects integer labels of shape (None, 1).
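For example, keeping your one-hot labels, the compile call would simply become:

# One-hot targets of shape (N, 2) match a 2-unit softmax with categorical_crossentropy.
model_CL.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])

Everything else in your code can stay the same.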
Upvotes: 0
Reputation: 159
The problem here is that your model puts out 2 values per sample, while the loss you chose expects only 1. Here is what you can do with your output layer,
Dense(2, activation = 'softmax')
You can change the first argument to 1, meaning you take a single output for a binary problem. In that case use a sigmoid instead of softmax (a softmax over a single unit always outputs 1), like this: Dense(1, activation = 'sigmoid'), together with binary_crossentropy and integer 0/1 labels.
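A minimal sketch of that single-output variant, assuming your one-hot labels can be collapsed back to a 0/1 class index (the argmax step below is that assumption made explicit):

import numpy as np
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Sequential

# Same architecture as in the question, but with one sigmoid output unit.
model_bin = Sequential([
    Dense(64, activation='relu', input_shape=(200, 4, 1)),
    Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'),
    MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid'),
    Dropout(rate=0.3),
    Conv2D(64, kernel_size=(5, 5), activation='relu', padding='same'),
    Flatten(),
    Dense(1, activation='sigmoid')
])
model_bin.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy'])

# Collapse the (N, 2) one-hot labels into a single 0/1 column for the binary loss.
CL_train_label_bin = np.argmax(CL_train_label, axis=1)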
Alternatively, keep the two-unit softmax output and switch the loss to categorical_crossentropy; in that case make sure your labels are one-hot encoded, which you can do with to_categorical(), a keras utils function:
CL_train_labels = to_categorical(CL_train_labels)
I hope this does the trick.
Upvotes: 1