user4911648

L2 regularization in TensorFlow with the high-level API

I know there are some similar questions out there about L2 regularization with TensorFlow's layers API, but it is still not quite clear to me.

So first I set the kernel_regularizer on each of my conv2d layers like this:

regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
conv = tf.layers.conv2d(..., kernel_regularizer=regularizer)

Then I can collect all the regularization losses with the following:

regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)

And last but not least, I have to incorporate the regularization term into the final loss. However, I am not quite sure which of the following is correct:

1) loss = loss + factor * tf.reduce_sum(regularization_losses)

2) loss = loss + tf.contrib.layers.apply_regularization(regularizer, weights_list=regularization_losses)

Or are both of them wrong? The second option seems weird to me, since I have to pass the regularizer as a parameter once again, even though each layer already received a regularizer as an argument.

EDIT

loss_1 = tf.losses.mean_squared_error(labels=y, predictions=logits, weights=1000)
regularization_loss = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([loss_1] + regularization_loss, name='loss')
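
For context, a stripped-down version of the whole graph looks like this (input shapes and layer parameters are just placeholders, not my real values):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 28, 28, 1])

regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
# every layer gets the same regularizer via kernel_regularizer
conv = tf.layers.conv2d(x, filters=16, kernel_size=3, padding='same',
                        activation=tf.nn.relu, kernel_regularizer=regularizer)
logits = tf.layers.conv2d(conv, filters=1, kernel_size=1,
                          kernel_regularizer=regularizer)

loss_1 = tf.losses.mean_squared_error(labels=y, predictions=logits, weights=1000)
regularization_loss = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([loss_1] + regularization_loss, name='loss')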

Upvotes: 3

Views: 2502

Answers (1)

Maxim

Reputation: 53766

The first method is correct. Note that the scale you pass to l2_regularizer already acts as the regularization strength, so an extra multiplicative factor is usually unnecessary (i.e. factor can simply be 1). One more way of doing this is via the tf.add_n function:

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([base_loss] + reg_losses, name="loss")

The second method also works, but you'll have to define a single regularizer, and weights_list should contain the weight tensors themselves, not the already-collected losses. So it works in your case, but may be inconvenient if you use different regularizers in different layers.
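
For illustration, here is a minimal sketch of the second approach. Since tf.layers does not populate tf.GraphKeys.WEIGHTS, the kernels are pulled out of the trainable variables by name here (the 'kernel' name filter relies on the default variable naming of tf.layers):

regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
# tf.layers names its weight variables '<layer_name>/kernel'
weights = [v for v in tf.trainable_variables() if 'kernel' in v.name]
reg_term = tf.contrib.layers.apply_regularization(regularizer, weights_list=weights)
loss = base_loss + reg_term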

Upvotes: 1
