Reputation: 2731
I have a problem with increasing the accuracy of my VGG16 model. Even after adding some Dense layers, I couldn't improve it. Can you help me get a better result, if you don't mind? I also tried Dropout, but it didn't increase the accuracy either. Could you look through my code?
I think the model may be overfitting or underfitting, judging by its behaviour.
My model is shown below.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout

# IMAGE_SIZE and num_classes are defined elsewhere in the project
base_model = VGG16(
    include_top=False,
    weights="imagenet",
    input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3))

# freeze the base model
base_model.trainable = False

model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(512, activation='relu'))
#model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
#model.add(Dropout(0.2))
model.add(Dense(128, activation='relu'))
#model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
Here is my project link: Project
Upvotes: 0
Views: 1080
Reputation: 93
There are a number of different things you can do, and what helps depends on your problem. What you are showing is a basic transfer-learning model with a couple of dense layers.
Regularisation is one thing you have already done by using Dropout, but you have turned it off. Other regularisation tools are L2 and L1 regularisation, to keep things simple. Beyond that, you can lower the learning rate, reduce the batch size, use batch normalisation, or change the optimisation function, or do all of the above at the same time; a sketch combining a few of these follows below.
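For instance, here is a minimal sketch of a head combining a few of these ideas in Keras. base_model and num_classes come from the question's code, and the l2(0.001) penalty, the 1e-4 learning rate and the batch size are illustrative starting points, not tuned values.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, BatchNormalization, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam

# The question's head with L2 weight penalties, batch normalisation
# and dropout added; all hyperparameter values are placeholders.
model = Sequential([
    base_model,
    Flatten(),
    Dense(512, activation='relu', kernel_regularizer=l2(0.001)),
    BatchNormalization(),
    Dropout(0.2),
    Dense(num_classes, activation='softmax'),
])

# A lower learning rate and a different optimiser are set at compile time.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# The batch size is reduced where you call fit, e.g.:
# model.fit(train_data, validation_data=val_data, epochs=20, batch_size=16)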
Creating a neural network model is the easy part. The more important and harder-to-master skill is optimising it to perform well on general data by tweaking each parameter until you get better results.
Try looking at a few guides on hyperparameter optimisation to understand more.
Upvotes: 1
Reputation: 15482
Dropout is a regularization technique that introduces noise into the network in order to avoid over-specialization to the training inputs (overfitting). The layer introduces this noise by randomly setting some of its input activations to 0 on each training step (the surviving activations are scaled up so the expected sum stays the same). In Keras, the value you pass to the Dropout layer is the rate: the fraction of units that gets dropped, not the fraction that is kept.
In your case, Dropout(0.2) zeroes ~20% of the activations at each step, which is a fairly mild setting. Typically you want to keep this value moderate (roughly 0.2 to 0.5): a rate of 0 applies no regularization at all, while a rate close to 1 annihilates almost all of the signal and prevents the model from training.
Try reintroducing Dropout and comparing a few rates (like 0.2, 0.3, 0.5) against no regularization (rate=0, or simply leave the layer commented out) and against an excess of regularization (like 0.7), as in the sketch below.
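As a minimal sketch of such a comparison (build_model is a hypothetical helper, and train_data, val_data and the epoch count are placeholders for your own data and budget):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout

# Hypothetical helper that rebuilds the question's head with a given
# dropout rate; base_model (frozen) and num_classes come from the question.
def build_model(rate):
    model = Sequential([
        base_model,
        Flatten(),
        Dense(512, activation='relu'),
        Dropout(rate),
        Dense(256, activation='relu'),
        Dropout(rate),
        Dense(128, activation='relu'),
        Dropout(rate),
        Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Compare no regularization, mild, moderate and heavy dropout.
for rate in [0.0, 0.2, 0.5, 0.7]:
    history = build_model(rate).fit(train_data, validation_data=val_data,
                                    epochs=10, verbose=0)
    print(rate, max(history.history['val_accuracy']))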
This fix to dropout may come in handy for boosting your model's performance.
Upvotes: 0