chesschi

Reputation: 708

Tensorflow: How to set the learning rate in log scale and some Tensorflow questions

I am a deep learning and Tensorflow beginner and I am trying to implement the algorithm in this paper using Tensorflow. This paper uses Matconvnet+Matlab to implement it, and I am curious if Tensorflow has the equivalent functions to achieve the same thing. The paper said:

The network parameters were initialized using the Xavier method [14]. We used the regression loss across four wavelet subbands under l2 penalty and the proposed network was trained by using the stochastic gradient descent (SGD). The regularization parameter (λ) was 0.0001 and the momentum was 0.9. The learning rate was set from 10^-1 to 10^-4 which was reduced in log scale at each epoch.

This paper uses the wavelet transform (WT) and a residual learning method (where the residual image = WT(HR) - WT(HR'), and HR' is used for training). The Xavier method suggests initializing the variables from a normal distribution with

stddev = sqrt(2 / (filter_size * filter_size * num_filters))

Q1. How should I initialize the variables? Is the code below correct?

weights = tf.Variable(tf.random_normal[img_size, img_size, 1, num_filters], stddev=stddev)

The paper does not explain how to construct the loss function in detail. I am unable to find an equivalent Tensorflow function to set the learning rate in log scale (I only found exponential_decay). I understand that MomentumOptimizer is equivalent to stochastic gradient descent with momentum.

Q2: Is it possible to set the learning rate in log scale?

Q3: How to create the loss function described above?

I followed this website to write the code below. Assume the model() function returns the network mentioned in this paper and lambda = 0.0001:

inputs = tf.placeholder(tf.float32, shape=[None, patch_size, patch_size, num_channels])
labels = tf.placeholder(tf.float32, [None, patch_size, patch_size, num_channels])

# get the model output and weights for each conv
pred, weights = model()

# define loss function
loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=pred)

# accumulate the l2 regularization term over all conv weights
regularizers = 0
for weight in weights:
    regularizers += tf.nn.l2_loss(weight)

loss = tf.reduce_mean(loss + 0.0001 * regularizers)

learning_rate = tf.train.exponential_decay(???) # Not sure if we can have custom learning rate for log scale
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum).minimize(loss, global_step)

NOTE: As I am a deep learning/Tensorflow beginner, I copy-pasted code from here and there, so please feel free to correct it if you can ;)

Upvotes: 8

Views: 2930

Answers (3)

kww

Reputation: 549

Q1. How should I initialize the variables? Is the code below correct?

That's correct, although the shape should be passed as a list inside the call, i.e. tf.random_normal([img_size, img_size, 1, num_filters], stddev=stddev). You could also look into tf.get_variable if the variables are going to be reused; see the sketch below.
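For example, a minimal sketch combining tf.get_variable with the stddev formula from the question (the variable name and the filter_size/num_filters values are placeholders):

import math
import tensorflow as tf

filter_size, num_filters = 3, 64  # placeholder values
stddev = math.sqrt(2.0 / (filter_size * filter_size * num_filters))

# get_variable creates the variable on first use and can return the same
# variable later inside a variable_scope(..., reuse=True)
weights = tf.get_variable(
    "conv1_w",  # hypothetical name
    shape=[filter_size, filter_size, 1, num_filters],
    initializer=tf.truncated_normal_initializer(stddev=stddev))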

Q2: Is it possible to set the learning rate in log scale?

Exponential decay decreases the learning rate at every step. I think what you want is tf.train.piecewise_constant, with the boundaries set at each epoch; see the sketch below.
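For example, a minimal sketch of that idea (steps_per_epoch and the four rate values are assumptions based on the paper's 10^-1 to 10^-4 range; loss is the loss tensor from the question):

global_step = tf.train.get_or_create_global_step()
steps_per_epoch = 1000  # assumed; depends on your dataset size and batch size

# switch to the next (log-scale) rate at the end of each epoch
boundaries = [1 * steps_per_epoch, 2 * steps_per_epoch, 3 * steps_per_epoch]
values = [1e-1, 1e-2, 1e-3, 1e-4]
learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)

optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss, global_step=global_step)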

EDIT: Look at the other answer, use the staircase=True argument!

Q3: How to create the loss function described above?

Your loss function looks correct.

Upvotes: 4

Deniz Beker

Reputation: 2184

Q1. How should I initialize the variables? Is the code below correct?

Use tf.get_variable or switch to slim (it does the initialization automatically for you); see the example below.
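For instance, a minimal sketch using the built-in Xavier initializer so you don't compute the stddev by hand (the variable name and filter shape are placeholders):

weights = tf.get_variable(
    "conv_w",  # hypothetical name
    shape=[3, 3, 1, 64],
    initializer=tf.contrib.layers.xavier_initializer())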

Q2: Is it possible to set the learning rate in log scale?

You can, but do you need it? This is not the first thing you need to solve in this network. Please check Q3.

However, just for reference, use the following notation:

global_step = tf.train.get_or_create_global_step()
learning_rate_node = tf.train.exponential_decay(learning_rate=0.001, global_step=global_step, decay_steps=10000, decay_rate=0.98, staircase=True)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate_node).minimize(loss, global_step=global_step)

Q3: How to create the loss function described above?

First, you have not included the "pred" to "image" conversion in this post (based on the paper, you need to apply subtraction and IDWT to obtain the final image).

There is one problem here: the logits have to be calculated based on your label data, i.e. if you use the marked data as "Y : Label", you need to write:

pred = model()
pred = tf.matmul(pred, weights) + biases
logits = tf.nn.softmax(pred)
loss = tf.reduce_mean(tf.abs(logits - labels))

This will give you the output to be compared against Y : Label.

If your dataset's labeled images are the denoised ones, you need to follow this approach instead:

pred = model()
pred = tf.matmul(pred, weights) + biases
logits = tf.nn.softmax(pred)
image = apply_IDWT("X : input", logits) # this will apply IDWT(x_label - y_label)
loss = tf.reduce_mean(tf.abs(image - labels))

The logits are the output of your network; you will use them to calculate the rest. Instead of matmul, you can add a conv2d layer here without batch normalization or an activation function, and set the output feature count to 4. Example:

pred = model()
pred = slim.conv2d(pred, 4, [3, 3], activation_fn=None, padding='SAME', scope='output')
logits = tf.nn.softmax(pred)
image = apply_IDWT("X : input", logits) # this will apply IDWT(x_label - y_label)
loss = tf.reduce_mean(tf.abs(image - labels))

This loss function will give you basic training capabilities. However, this is the L1 distance, and it may suffer from some issues. Consider the following situation:

Let's say your output is the array [10, 10, 10, 0, 0] and you are trying to achieve [10, 10, 10, 10, 10]. In this case your loss is 20 (10 + 10), yet you have 3/5 success. It may also indicate some overfitting.

For the same target, consider the output [6, 6, 6, 6, 6]. It still has a loss of 20 (4 + 4 + 4 + 4 + 4), but if you apply a threshold of 5 you achieve 5/5 success. Hence, this is the case we want.

If you use the L2 loss, the first case gives 10^2 + 10^2 = 200 as the loss, while the second case gives 4^2 * 5 = 80. Hence, the optimizer will try to move away from case #1 as quickly as possible to achieve overall success rather than perfect success on some outputs and complete failure on the others. You can apply a loss function like this for that:

tf.reduce_mean(tf.nn.l2_loss(logits - image))
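A quick numeric check of the L1 vs. L2 comparison above (plain numpy, outside the TensorFlow graph):

import numpy as np

target = np.array([10., 10., 10., 10., 10.])
out_a = np.array([10., 10., 10., 0., 0.])   # case #1
out_b = np.array([6., 6., 6., 6., 6.])      # case #2

print(np.abs(out_a - target).sum(), np.abs(out_b - target).sum())        # L1: 20.0 vs 20.0
print(np.square(out_a - target).sum(), np.square(out_b - target).sum())  # L2: 200.0 vs 80.0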

Alternatively, you can look at the cross-entropy loss function (it applies softmax internally, so do not apply softmax twice):

tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=image))

Upvotes: 4

dgumo

Reputation: 1878

The other answers are very detailed and helpful. Here is a code example that uses a placeholder to decay the learning rate on a log scale. HTH.

import tensorflow as tf
import numpy as np


# simulate a simple linear regression dataset
N = 10000
D = 10
x = np.random.rand(N, D)
w = np.random.rand(D, 1)
y = np.dot(x, w)

print(y.shape)

# modeling
batch_size = 100
tni = tf.truncated_normal_initializer()
X = tf.placeholder(tf.float32, [batch_size, D])
Y = tf.placeholder(tf.float32, [batch_size, 1])
W = tf.get_variable("w", shape=[D, 1], initializer=tni)
B = tf.zeros([1])

# the learning rate is a placeholder so it can be changed from Python at each epoch
lr = tf.placeholder(tf.float32)

pred = tf.add(tf.matmul(X, W), B)
print(pred.shape)
mse = tf.reduce_sum(tf.losses.mean_squared_error(Y, pred))
opt = tf.train.MomentumOptimizer(lr, 0.9)

train_op = opt.minimize(mse)

learning_rate = 0.0001

acc_err = 0.0
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(100000):
    if i > 0 and i % N == 0:
        # epoch done, decrease the learning rate by a factor of 2 (log-scale decay)
        learning_rate /= 2
        print("Epoch completed. LR =", learning_rate)

    # pick the next mini-batch, cycling through the dataset
    idx = (i % (N // batch_size)) * batch_size
    f = {X: x[idx:idx + batch_size, :], Y: y[idx:idx + batch_size, :], lr: learning_rate}
    _, err = sess.run([train_op, mse], feed_dict=f)
    acc_err += err
    if i % 5000 == 0:
        print("Average error = {}".format(acc_err / 5000))
        acc_err = 0.0
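If you want the paper's exact 10^-1 to 10^-4 range instead of halving, you can precompute one rate per epoch on a log scale and feed it through the same lr placeholder (num_epochs is an assumed value):

num_epochs = 10  # assumed
schedule = np.logspace(-1, -4, num=num_epochs)  # [1e-1, ..., 1e-4], evenly spaced in log space
# at the start of epoch e, use: learning_rate = schedule[e]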

Upvotes: 2
