user15957418

How to convert this TensorFlow code into PyTorch code?

I am trying to port an Image Denoising GAN written in TensorFlow to PyTorch, and I cannot work out what the PyTorch equivalents of tf.variable_scope and tf.Variable are. Please help.

def conv_layer(input_image, ksize, in_channels, out_channels, stride, scope_name, activation_function=lrelu, reuse=False):
    with tf.variable_scope(scope_name):
        filter = tf.Variable(tf.random_normal([ksize, ksize, in_channels, out_channels], stddev=0.03))
        output = tf.nn.conv2d(input_image, filter, strides=[1, stride, stride, 1], padding='SAME')
        output = slim.batch_norm(output)
        if activation_function:
            output = activation_function(output)
        return output, filter

def residual_layer(input_image, ksize, in_channels, out_channels, stride, scope_name):
    with tf.variable_scope(scope_name):
        output, filter = conv_layer(input_image, ksize, in_channels, out_channels, stride, scope_name+"_conv1")
        output, filter = conv_layer(output, ksize, out_channels, out_channels, stride, scope_name+"_conv2")
        output = tf.add(output, tf.identity(input_image))
        return output, filter

def transpose_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name):
    with tf.variable_scope(scope_name):
        output = tf.nn.conv2d_transpose(input_tensor, used_weights, output_shape=new_shape, strides=[1, stride, stride, 1], padding='SAME')
        output = tf.nn.relu(output)
        return output

def resize_deconvolution_layer(input_tensor, new_shape, scope_name):
    with tf.variable_scope(scope_name):
        output = tf.image.resize_images(input_tensor, (new_shape[1], new_shape[2]), method=1)
        output, unused_weights = conv_layer(output, 3, new_shape[3]*2, new_shape[3], 1, scope_name+"_deconv")
        return output

Upvotes: 1

Views: 506

Answers (1)

convolutionBoy

Reputation: 831

You can replace tf.Variable with a torch.Tensor created with requires_grad=True (or, inside an nn.Module, with torch.nn.Parameter); such a tensor can hold gradients all the same.
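For example, a minimal sketch (the filter shape here is illustrative, not taken from the question):

```python
import torch

# A trainable filter, analogous to tf.Variable(tf.random_normal(..., stddev=0.03)).
# requires_grad_(True) tells autograd to track operations on this tensor.
filt = torch.randn(3, 3, 16, 32) * 0.03
filt.requires_grad_(True)

# Any computation involving filt populates filt.grad after backward().
loss = (filt ** 2).sum()
loss.backward()
print(filt.grad.shape)  # same shape as filt: torch.Size([3, 3, 16, 32])
```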

In PyTorch you also don't build a graph and then look things up in it by name via some scope. You just create the tensor and can access it directly: the output variable is right there for you to use and reuse however you see fit.

In fact, if your code isn't directly using this variable scope, you can likely just ignore it. Often the variable scopes only exist to give convenient names to things if you were ever to inspect the graph.
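Putting that together, a rough PyTorch port of the conv_layer/residual_layer pair might look like the sketch below. Caveats: PyTorch uses NCHW layout and weight shape (out_channels, in_channels, k, k); slim.batch_norm and lrelu are mapped to nn.BatchNorm2d and F.leaky_relu; padding='SAME' is approximated with padding=ksize//2 (exact only for stride 1); and the residual block assumes in and out channel counts match, which the skip connection requires anyway.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLayer(nn.Module):
    """Roughly equivalent to the TF conv_layer: conv -> batch norm -> activation."""
    def __init__(self, in_channels, out_channels, ksize, stride, activation=F.leaky_relu):
        super().__init__()
        # nn.Conv2d owns its weight (an nn.Parameter), replacing tf.Variable;
        # padding=ksize//2 approximates TF's 'SAME' padding for stride 1.
        self.conv = nn.Conv2d(in_channels, out_channels, ksize,
                              stride=stride, padding=ksize // 2)
        nn.init.normal_(self.conv.weight, std=0.03)  # mirrors stddev=0.03
        self.bn = nn.BatchNorm2d(out_channels)
        self.activation = activation

    def forward(self, x):
        out = self.bn(self.conv(x))
        return self.activation(out) if self.activation else out

class ResidualLayer(nn.Module):
    """Two conv layers plus a skip connection, like the TF residual_layer."""
    def __init__(self, channels, ksize):
        super().__init__()
        self.conv1 = ConvLayer(channels, channels, ksize, stride=1)
        self.conv2 = ConvLayer(channels, channels, ksize, stride=1)

    def forward(self, x):
        return self.conv2(self.conv1(x)) + x

# No scopes needed: parameters are reachable by plain attribute access, e.g.
block = ResidualLayer(16, 3)
print(block.conv1.conv.weight.shape)  # torch.Size([16, 16, 3, 3])
```

Note how the "reuse" question disappears: calling block twice reuses the same parameters automatically, because the module owns them.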

Upvotes: 1
