Reputation: 65
I want to classify dog breeds using data augmentation and transfer learning, with VGG16 as the CNN.
First I do some data augmentation using ImageDataGenerator from Keras:
train_datagen = ImageDataGenerator(rotation_range=30,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

train_generator = train_datagen.flow_from_directory('../data/train/',
                                                    target_size=(224, 224),
                                                    batch_size=batch_size,
                                                    class_mode='categorical')
The flow_from_directory method returns a DirectoryIterator yielding tuples (x, y), where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of the corresponding labels. Since class_mode is 'categorical' here, it should return 2D one-hot encoded labels for y.
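For illustration, with batch_size = 16 and 120 breed classes (both numbers implied by the error message below), one (x, y) tuple from the generator would have these shapes. This is a numpy mock-up of the generator's output, not a call to the actual Keras API:

```python
import numpy as np

batch_size, num_classes = 16, 120  # values implied by the error message below

# Mock of one (x, y) batch as yielded by flow_from_directory
x = np.zeros((batch_size, 224, 224, 3), dtype=np.float32)  # image batch
labels = np.random.randint(0, num_classes, size=batch_size)
y = np.eye(num_classes, dtype=np.float32)[labels]          # one-hot targets

print(x.shape)  # (16, 224, 224, 3)
print(y.shape)  # (16, 120) -- the target shape Keras checks against
```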
Then I do transfer learning, removing only the last layer and replacing it with a Dense layer with a softmax activation:
model = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in model.layers:
    layer.trainable = False

x = model.output
predictions = Dense(120, activation='softmax')(x)
new_model = Model(inputs=model.input, outputs=predictions)
Then I fit my data to the model:
new_model.fit_generator(train_generator,
                        steps_per_epoch=6680 // batch_size,
                        epochs=50,
                        validation_data=validation_generator,
                        validation_steps=835 // batch_size,
                        verbose=2)
And I get this error:
ValueError: Error when checking target: expected dense_3 to have 4 dimensions, but got array with shape (16, 120)
I have no idea where the problem comes from :(
Thanks for your help!
Upvotes: 1
Views: 468
Reputation: 11198
The summary of VGG16 gives:
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
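As a sanity check on the summary above, each Conv2D layer's parameter count is (kernel_h * kernel_w * in_channels + 1) * out_channels, and the per-layer counts sum to the reported total. A quick verification in plain Python:

```python
# Parameters of a Conv2D layer: (kernel_h * kernel_w * in_channels + 1) * out_channels
def conv_params(k, cin, cout):
    return (k * k * cin + 1) * cout

# (kernel_size, in_channels, out_channels) for every Conv2D in VGG16
convs = [
    (3, 3, 64), (3, 64, 64),                      # block1
    (3, 64, 128), (3, 128, 128),                  # block2
    (3, 128, 256), (3, 256, 256), (3, 256, 256),  # block3
    (3, 256, 512), (3, 512, 512), (3, 512, 512),  # block4
    (3, 512, 512), (3, 512, 512), (3, 512, 512),  # block5
]

total = sum(conv_params(k, cin, cout) for k, cin, cout in convs)
print(total)  # 14714688, matching "Total params" in the summary
```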
The last layer outputs 3-D feature maps of shape (7, 7, 512); you need to flatten them before applying the Dense softmax layer. Add a Flatten() before the last Dense layer:
x = model.output
x = Flatten()(x) # add this line
predictions = Dense(120, activation='softmax')(x)
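To see why this fixes the error, follow the shapes: block5_pool emits (batch, 7, 7, 512) feature maps, Flatten collapses them to (batch, 25088), and Dense(120) then produces (batch, 120), which matches the generator's one-hot targets. A numpy sketch of the shape flow (assuming batch_size = 16, as in the error message):

```python
import numpy as np

batch_size = 16
features = np.zeros((batch_size, 7, 7, 512))  # output shape of block5_pool

flat = features.reshape(batch_size, -1)       # what Flatten() does
print(flat.shape)  # (16, 25088)

# Dense(120) is essentially a matmul with a (25088, 120) weight matrix
w = np.zeros((25088, 120))
logits = flat @ w
print(logits.shape)  # (16, 120) -- now matches the (16, 120) targets
```

As an alternative, GlobalAveragePooling2D() in place of Flatten() also reduces the output to 2 dimensions and keeps the following Dense layer's parameter count much smaller (512 inputs instead of 25088).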
Upvotes: 2