Reputation: 95
I am using a pre-trained GoogLeNet model and fine-tuning it on my dataset to classify 11 classes. I tried the following configurations with different base learning rates, but the accuracy is not improving further.
- Fine-tuning the last 10 layers and the first 3 layers with a base learning rate of 0.01 and a maximum of 50K iterations gives an accuracy no better than 75%.
- Fine-tuning only the last 2 layers with a base learning rate of 0.01 and a maximum of 50K iterations gives an accuracy no better than 71%.
- Fine-tuning the last 6 layers with a base learning rate of 0.001 and a maximum of 50K iterations gives an accuracy no better than 85% (the solver settings I am using for these runs are sketched below).
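For reference, my solver.prototxt for these runs looks roughly like the sketch below; only base_lr and max_iter reflect the values listed above, and the net path, learning-rate policy, and snapshot settings are placeholders. The layers I am not fine-tuning are frozen in train_val.prototxt by setting lr_mult: 0 on their param blocks.

```
# Hypothetical solver.prototxt for the third run above; only base_lr and
# max_iter come from the configurations listed, everything else is a placeholder.
net: "models/finetune_googlenet/train_val.prototxt"
test_iter: 100
test_interval: 1000
base_lr: 0.001            # base learning rate of the third configuration
lr_policy: "step"         # assumed schedule
gamma: 0.1
stepsize: 20000
momentum: 0.9
weight_decay: 0.0002
max_iter: 50000           # maximum iterations
snapshot: 10000
snapshot_prefix: "models/finetune_googlenet/snapshot"
solver_mode: GPU
```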
Can anybody tell me what other methods or parameters I could change to improve the accuracy?
Upvotes: 4
Views: 937
Reputation: 739
You can try other optimisers such as AdaDelta, Adam, and RMSProp. In your solver.prototxt you can select the solver by setting type: "RMSProp".
For RMSProp, you can tune the solver-specific parameters as mentioned here.
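For example, a minimal solver.prototxt switched to RMSProp could look like the sketch below; the net path, learning-rate schedule, and snapshot settings are placeholders to adapt to your setup, while rms_decay and delta are Caffe's RMSProp-specific solver fields.

```
# Sketch of a solver.prototxt using the RMSProp solver; schedule and path
# values are assumed placeholders, not tuned recommendations.
net: "models/finetune_googlenet/train_val.prototxt"
type: "RMSProp"
rms_decay: 0.98           # decay rate of the squared-gradient moving average
delta: 1e-8               # numerical stability term
base_lr: 0.001
lr_policy: "fixed"        # assumed; any lr_policy can be combined with RMSProp
max_iter: 50000
snapshot: 10000
snapshot_prefix: "models/finetune_googlenet/rmsprop"
solver_mode: GPU
```

Switching to Adam or AdaDelta works the same way: change the type field and set the corresponding solver parameters (e.g. momentum and momentum2 for Adam).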
Upvotes: 3