danishansari

Reputation: 654

What is wrong with this code? Why is the loss not decreasing?

I have implemented VGG-16 in TensorFlow. VGG-16 is a reasonably deep network, so the loss should definitely decrease. But in my code it isn't decreasing. However, when I run the model on the same batch again and again, the loss does decrease. Any idea why this could happen?

The VGG-net architecture follows the one described here.

Training was done on the dogs-vs-cats dataset, with image size 224x224x3.

Network parameters are the following:

lr_rate = 0.001, batch_size = 16
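For context, a heavily stripped-down version of the training setup looks roughly like this. The actual VGG-16 network is in the gist linked below; here the network is replaced by a single dense layer, and the plain gradient-descent optimizer is only a stand-in so the snippet runs on its own:

import tensorflow as tf

lr_rate = 0.001
batch_size = 16

# Placeholders for one batch of dogs-vs-cats images and labels (224x224x3, 2 classes).
images = tf.placeholder(tf.float32, [batch_size, 224, 224, 3], name='images')
labels = tf.placeholder(tf.int64, [batch_size], name='labels')

# Stand-in for the VGG-16 feature extractor -- a single dense layer just to keep
# this snippet self-contained; the real network is in the gist.
flat = tf.layers.flatten(images)
logits = tf.layers.dense(flat, units=2, name='logits')

# Softmax cross-entropy loss averaged over the batch, minimised with the learning rate above.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.GradientDescentOptimizer(lr_rate).minimize(loss)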

Find the full code at GitHubGist.

Output is as below:

Output

Upvotes: 0

Views: 68

Answers (1)

prouast

Reputation: 1196

I am assuming you're following architecture variant E from the Simonyan & Zisserman paper you linked. With that in mind, I found a few problems with your code:

  • Use activation='relu' for all hidden layers.

  • Max pooling should be done over a 2 x 2 window, so use pool_size=[2, 2] instead of pool_size=[3, 3] in the pooling layers.

  • Properly link up pool13 with conv13 (see the combined sketch below):

pool13 = tf.layers.max_pooling2d(conv13, [2, 2], 2, name='pool13')
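Putting the three points together, one conv block of the network would look roughly like this with the tf.layers API. The names (images, conv1, conv2, pool2) and the 64-filter count of the first block are only for the example, not copied from your gist:

import tensorflow as tf

# Input placeholder; 224x224x3 matches the image size from the question.
images = tf.placeholder(tf.float32, [None, 224, 224, 3], name='images')

def conv_relu(inputs, filters, name):
    # 3x3 convolution with ReLU -- every hidden layer gets a non-linearity.
    return tf.layers.conv2d(inputs, filters=filters, kernel_size=[3, 3],
                            padding='same', activation=tf.nn.relu, name=name)

# First block: two 3x3 convolutions followed by 2x2 max pooling with stride 2.
conv1 = conv_relu(images, 64, 'conv1')
conv2 = conv_relu(conv1, 64, 'conv2')
pool2 = tf.layers.max_pooling2d(conv2, pool_size=[2, 2], strides=2, name='pool2')

# The deeper blocks follow the same pattern, and the last one is wired up as
# pool13 = tf.layers.max_pooling2d(conv13, pool_size=[2, 2], strides=2, name='pool13')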

I don't have any GPU available to test, but with sufficient iterations the loss should decrease.

Upvotes: 1
