mohangraj

Reputation: 11034

Error in Convolutional Neural network for input shape

I have 1000 images at 28*28 resolution. I converted those 1000 images into a numpy array, giving an array of shape (1000, 28, 28). So, while creating the convolution layer with Keras, I specified the input shape (the X value) as (1000, 28, 28) and the output shape (the Y value) as (1000, 10), because I have 1000 examples as input and 10 output categories.

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', kernel_initializer='he_normal', input_shape=(1000, 28, 28)))
.
.
.
model.fit(train_x,train_y,batch_size=32,epochs=10,verbose=1)

So, when calling the fit function, it raises ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1000, 28, 28). Please help me work out the proper input and output dimensions for the CNN.

Code:

 import numpy
 import keras
 from keras.models import Sequential
 from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

 model = Sequential()
 model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', kernel_initializer='he_normal', input_shape=(4132, 28, 28)))
 model.add(MaxPooling2D((2, 2)))
 model.add(Dropout(0.25))

 model.add(Conv2D(64, (3, 3), activation='relu'))
 model.add(MaxPooling2D(pool_size=(2, 2)))
 model.add(Dropout(0.25))

 model.add(Conv2D(128, (3, 3), activation='relu'))
 model.add(Dropout(0.4))

 model.add(Flatten())
 model.add(Dense(128, activation='relu'))
 model.add(Dropout(0.3))
 model.add(Dense(10, activation='softmax'))

 model.compile(loss=keras.losses.categorical_crossentropy,optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
 model.summary()

 train_x = numpy.array([train_x])  # wraps the array in a new leading axis -> shape (1, 1000, 28, 28)

 model.fit(train_x,train_y,batch_size=32,epochs=10,verbose=1)

Upvotes: 1

Views: 906

Answers (3)

thefifthjack005

Reputation: 638

From your error it looks like you are using TensorFlow as the backend.

In Keras, input_shape excludes the batch dimension, so for a Conv2D layer it should be 3-dimensional. For the TensorFlow backend, the input_shape to your model will be

input_shape = (img_height, img_width, channels)

which in your case should be

input_shape = (28, 28, 1)

and the shape of train_x should be

(batch_size, img_height, img_width, channels)

which in your case is

(1000, 28, 28, 1)

As you are using grayscale images, each image has dimensions (image_height, image_width), so you have to add an extra dimension, which results in (image_height, image_width, 1). The 1 is the depth (number of channels) of the image: 1 for grayscale and 3 for RGB.
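For example, a minimal sketch of adding that channel axis with numpy (train_x here is a stand-in array of shape (1000, 28, 28)):

 import numpy as np

 train_x = np.zeros((1000, 28, 28))          # stand-in for your 1000 grayscale images
 train_x = np.expand_dims(train_x, axis=-1)  # add the channel axis at the end
 print(train_x.shape)                        # (1000, 28, 28, 1)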

Upvotes: 0

Vijay Mariappan

Reputation: 17191

You need to change the input to 4 dimensions, with the channel dimension set to 1: (1000, 28, 28, 1), and you need to change the input_shape of the convolutional layer to (28, 28, 1):

model.add(Conv2D(32, kernel_size=(3, 3), ..., input_shape=(28, 28, 1)))
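Putting it together, a minimal sketch of the corrected pipeline (the random train_x and train_y are stand-ins for your data; keras.utils.to_categorical produces the (1000, 10) one-hot targets from integer labels):

 import numpy as np
 import keras
 from keras.models import Sequential
 from keras.layers import Conv2D, Flatten, Dense

 train_x = np.random.rand(1000, 28, 28)             # stand-in for your images
 train_y = np.random.randint(0, 10, size=1000)      # stand-in integer class labels

 train_x = train_x.reshape(1000, 28, 28, 1)         # 4D: (samples, height, width, channels)
 train_y = keras.utils.to_categorical(train_y, 10)  # (1000, 10) one-hot targets

 model = Sequential()
 model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
 model.add(Flatten())
 model.add(Dense(10, activation='softmax'))

 model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
 model.fit(train_x, train_y, batch_size=32, epochs=1, verbose=1)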

Upvotes: 2

mmghu

Reputation: 611

Your numpy array needs a fourth dimension. The common convention is that the first dimension indexes the samples and the last holds the channels, so change (1000, 28, 28) to (1000, 28, 28, 1).
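For instance, a one-line sketch using numpy indexing (assuming train_x is already a numpy array of shape (1000, 28, 28)):

 import numpy as np

 train_x = train_x[:, :, :, np.newaxis]  # (1000, 28, 28) -> (1000, 28, 28, 1)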


Upvotes: 0
