Reputation: 107
I have a dataframe with approximately 14560 word vectors of dimension 400. I have reshaped each vector into 20x20 with a single channel for applying a CNN, so the shape has become (14560, 20, 20, 1). When I try to fit the CNN model, it throws an error.
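The reshape step described above can be sketched with NumPy (the array here is random stand-in data, not the real word vectors):

```python
import numpy as np

# Stand-in for the real word-vector matrix: 14560 vectors of dimension 400
word_vectors = np.random.rand(14560, 400)

# Reshape each 400-dim vector into a 20x20 "image" with a single channel
x = word_vectors.reshape(-1, 20, 20, 1)
print(x.shape)  # (14560, 20, 20, 1)
```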
Code:
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers import BatchNormalization
from keras.utils import np_utils
from keras import backend as K
model_cnn = Sequential()
model_cnn.add(Convolution2D(filters=16, kernel_size=(3, 3),
                            activation='relu', input_shape=(20, 20, 1)))
model_cnn.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=["accuracy"])
model_cnn.fit(x_tr_, y_tr_, validation_data=(x_te_, y_te))
Error:
Error when checking target: expected conv2d_6 to have 4 dimensions, but got array with shape (14560, 1)
When I reshape the train data to (14560, 1, 20, 20), it still fails, this time because the model receives input (1, 20, 20) while (20, 20, 1) is required.
How do I fix it ?
Upvotes: 3
Views: 183
Reputation: 53758
The problem is not only with the x_tr shape, which should be (-1, 20, 20, 1) as correctly pointed out in another answer; it's also the network architecture itself. If you run model_cnn.summary(), you'll see the following:
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 18, 18, 16) 160
=================================================================
Total params: 160
Trainable params: 160
Non-trainable params: 0
The output of the model has rank 4: (batch_size, 18, 18, 16). Keras can't compute the loss when the labels have shape (batch_size, 1). A correct architecture must reduce the convolutional output tensor (batch_size, 18, 18, 16) down to (batch_size, 1). There are many ways to do it; here's one:
model_cnn = Sequential()
model_cnn.add(Convolution2D(filters=16, kernel_size=(3, 3),
                            activation='relu', input_shape=(20, 20, 1)))
model_cnn.add(MaxPooling2D(pool_size=18))  # collapses the 18x18 feature maps to 1x1
model_cnn.add(Flatten())
model_cnn.add(Dense(units=1))
model_cnn.compile(loss='sparse_categorical_crossentropy', optimizer='adadelta',
                  metrics=["accuracy"])
The summary:
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 18, 18, 16) 160
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 1, 1, 16) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 16) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 177
Trainable params: 177
Non-trainable params: 0
Note that I added max-pooling to reduce the 18x18 feature maps to 1x1, then a Flatten layer to squeeze the tensor to (None, 16), and finally a Dense layer to output a single value. Also pay attention to the loss function: it's sparse_categorical_crossentropy, which takes integer class labels directly, matching your (14560, 1) targets; for more than two classes, set the Dense layer's units to the number of classes. If you wish to use categorical_crossentropy instead, you have to one-hot encode the labels and output not a single number but a probability distribution over classes: (None, classes).
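The one-hot encoding itself can be done with plain NumPy (or keras.utils.to_categorical); a minimal sketch, where num_classes = 5 is an illustrative assumption:

```python
import numpy as np

num_classes = 5  # assumption: replace with your actual number of classes
y_int = np.array([0, 3, 1, 4, 2])      # integer class labels, shape (5,)
y_onehot = np.eye(num_classes)[y_int]  # one-hot matrix, shape (5, 5)
print(y_onehot[1])  # [0. 0. 0. 1. 0.]
```

The model's final layer would then be Dense(num_classes, activation='softmax') so the output matches the (None, classes) labels.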
By the way, also check that your validation arrays have valid shape.
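A quick way to catch such shape mismatches before calling fit is to assert them up front (a sketch; the arrays and split size here are random stand-ins for the real data):

```python
import numpy as np

# Random stand-ins with the shapes the model expects (hypothetical split size)
x_te_ = np.random.rand(3640, 20, 20, 1)
y_te = np.random.randint(0, 2, size=(3640, 1))

assert x_te_.shape[1:] == (20, 20, 1), x_te_.shape  # per-sample input shape
assert len(x_te_) == len(y_te)                      # samples and labels align
print("validation shapes OK")
```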
Upvotes: 2