m33n

Reputation: 1751

Optimize sparse softmax cross entropy with L2 regularization

I was training my network using tf.losses.sparse_softmax_cross_entropy as the classification function in the last layer and everything was working fine.

I have now simply added an L2 regularization term over my weights, and my loss is no longer being optimized. What could be happening?

reg = tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2)
loss = tf.reduce_mean(tf.losses.sparse_softmax_cross_entropy(y, logits)) + reg*beta
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)


Upvotes: 1

Views: 828

Answers (1)

benjaminplanche

Reputation: 15119

It is hard to answer with certainty given the provided information, but here is a possible cause:

tf.nn.l2_loss is computed as a sum over all the elements of the tensor, while your cross-entropy loss is reduced to its mean (cf. tf.reduce_mean), hence a numerical imbalance between the two terms.

Try, for instance, dividing each L2 loss by the number of elements it is computed over (e.g. tf.size(w1)), as in the sketch below.
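A minimal sketch of that adjustment, reusing the variable names from your question (w1, w2, logits, y, beta, learning_rate); the right value of beta may still need tuning:

import tensorflow as tf

# Scale each L2 term by its number of elements so it is a per-element
# average, comparable in magnitude to the mean cross-entropy loss.
reg = (tf.nn.l2_loss(w1) / tf.cast(tf.size(w1), tf.float32)
       + tf.nn.l2_loss(w2) / tf.cast(tf.size(w2), tf.float32))

# tf.losses.sparse_softmax_cross_entropy already returns a scalar averaged
# over the batch, so the extra tf.reduce_mean is not needed.
loss = tf.losses.sparse_softmax_cross_entropy(y, logits) + beta * reg
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)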

Upvotes: 2
