Reputation: 465
I am getting a "Negative dimension size" error whenever I keep the height and width of the input image anything below 362x362. I am surprised, because this error is generally caused by wrong input dimensions, and I could not find any reason why the number of rows and columns would cause it. Below is my code:
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Model

batch_size = 32
num_classes = 7
epochs = 50
height = 362
width = 362

train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'train',
    target_size=(height, width),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    'validation',
    target_size=(height, width),
    batch_size=batch_size,
    class_mode='categorical')
base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(height, width, 3))
x = base_model.output
x = Conv2D(32, (3, 3), use_bias=True, activation='relu')(x)  #line2
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu')(x)  #line3
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(batch_size, activation='relu')(x)  #line1
x = Dropout(0.5)(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit_generator(
    train_generator,
    samples_per_epoch=128,
    nb_epoch=epochs,
    validation_data=validation_generator,
    verbose=2)
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

for layer in model.layers[:309]:
    layer.trainable = False
for layer in model.layers[309:]:
    layer.trainable = True
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.save('my_model.h5')
model.fit_generator(
    train_generator,
    samples_per_epoch=512,
    nb_epoch=epochs,
    validation_data=validation_generator,
    verbose=2)
Upvotes: 22
Views: 41376
Reputation: 1
In my case I did not want to increase the input size (from t-1 to t-3, t-6 or t-12 timesteps), so instead I reduced the pool size of the max-pooling layer from

model.add(MaxPooling1D(pool_size=2))

to

model.add(MaxPooling1D(pool_size=1))
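A pool size of 1 leaves the sequence length unchanged, so short inputs no longer shrink to a negative size. A minimal sketch to confirm this (the layer sizes here are made up for illustration):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D

# Hypothetical model: 3 timesteps, 8 features per step.
model = Sequential()
model.add(Conv1D(16, kernel_size=2, activation='relu', input_shape=(3, 8)))
# pool_size=1 keeps the temporal dimension intact (2 steps stay 2 steps),
# whereas pool_size=2 would halve it and eventually go negative.
model.add(MaxPooling1D(pool_size=1))
print(model.output_shape)  # (None, 2, 16)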
Upvotes: 0
Reputation: 61
The problem has different causes in Keras and TensorFlow.
In Keras: change the input shape according to the backend framework in use, or change the dim_ordering ('tf'/'th').
In TensorFlow: go to the line of code where the error is raised and change the padding='valid' parameter to padding='same'. If the parameter doesn't exist, add it as in the example below.
model.add(MaxPooling2D((2, 2), strides=(2, 2), padding='same'))
More info on the topic can be found here: https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D
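If you are unsure which dimension ordering your backend uses, you can query it at runtime; a minimal sketch (in Keras 2 the dim_ordering concept is called image_data_format):

from keras import backend as K

# 'channels_last' means inputs are (height, width, channels), the
# TensorFlow convention; 'channels_first' means (channels, height, width).
print(K.image_data_format())

# Build the input shape to match the backend convention.
if K.image_data_format() == 'channels_last':
    input_shape = (362, 362, 3)
else:
    input_shape = (3, 362, 362)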
Upvotes: 6
Reputation: 45
I once got the same error. It occurs when your input size is too small for the number of downsampling (max pooling) layers in the network.
In other words, if you apply a (2, 2) max pooling layer enough times to an input of, say, (256, 256, 3), there comes a point when the feature map shrinks to (1, 1, ...) (just an example, to understand). If a (2, 2) max pool is applied at that point, the computed output size becomes negative.
There are two simple solutions:
1. Increase the input size.
2. Reduce the amount of downsampling (fewer pooling layers, or pooling with padding='same').
I personally prefer the 1st solution.
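As a quick sanity check, you can compute how many 'valid' (2, 2) pools a given input survives before building the model; a minimal sketch:

# Each 'valid' (2, 2) max pool floors the spatial size at half:
# out = floor(in / 2). Count the pools a 256-pixel side survives.
size = 256
pools = 0
while size >= 2:
    size //= 2
    pools += 1
print(pools)  # 8 -- a 9th (2, 2) pool on a 1x1 map would fail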
Upvotes: 4
Reputation: 449
Replace this:
x = MaxPooling2D(pool_size=(2, 2))(x)
with this:
x = MaxPooling2D((2,2), padding='same')(x)
to prevent the dimensions from becoming negative during downsampling.
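With padding='same' and the default stride, each pool still halves the feature map (rounding up) but the size never drops below 1x1, so the error cannot occur. A minimal sketch, assuming a 3x3 feature map:

from keras.layers import Input, MaxPooling2D
from keras.models import Model

inp = Input(shape=(3, 3, 8))
out = MaxPooling2D((2, 2), padding='same')(inp)  # ceil(3 / 2) = 2
print(Model(inp, out).output_shape)  # (None, 2, 2, 8)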
Upvotes: 34
Reputation: 53758
InceptionV3 downsamples the input image very aggressively. For a 362x362 input image, the base_model.output tensor is (?, 9, 9, 2048) - that is easy to see if you call base_model.summary().
After that, your model downsamples the (?, 9, 9, 2048) tensor even further (like in this question):
(?, 9, 9, 2048)   # input
(?, 7, 7, 32)     # after 1st conv-2d
(?, 3, 3, 32)     # after 1st max-pool-2d
(?, 1, 1, 64)     # after 2nd conv-2d
error: can't downsample further!
You can prevent the conv and pooling layers from reducing the tensor size by adding the padding='same' parameter, which will make the error disappear. Or you can simply reduce the number of downsampling layers.
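Under that diagnosis, one way to rewrite the custom head so it can never go negative is to give both the conv and pooling layers padding='same'; a sketch of one possible fix (the shape comments assume the 362x362 input above):

x = base_model.output                                          # (?, 9, 9, 2048)
x = Conv2D(32, (3, 3), padding='same', activation='relu')(x)   # (?, 9, 9, 32)
x = MaxPooling2D(pool_size=(2, 2), padding='same')(x)          # (?, 5, 5, 32)
x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)   # (?, 5, 5, 64)
x = MaxPooling2D(pool_size=(2, 2), padding='same')(x)          # (?, 3, 3, 64)
x = Flatten()(x)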
Upvotes: 13