SunnyMarkLiu

Reputation: 93

What is the difference between inception_v1.py and inception_v2.py in tensorflow/models?

We know that the Inception v2 paper (Batch Normalization) adds Batch Normalization after each convolution, immediately before the nonlinearity, to reduce internal covariate shift, and removes Local Response Normalization. But when I studied inception_v1.py and inception_v2.py, the code for the two models looks almost the same... In inception_v2.py, I can't find any Batch Normalization. For example, in inception_v1.py:

end_point = 'Mixed_3b'
with tf.variable_scope(end_point):
    with tf.variable_scope('Branch_0'):
        branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
    with tf.variable_scope('Branch_1'):
        branch_1 = slim.conv2d(net, 96, [1, 1], scope='Conv2d_0a_1x1')
        branch_1 = slim.conv2d(branch_1, 128, [3, 3], scope='Conv2d_0b_3x3')
    with tf.variable_scope('Branch_2'):
        branch_2 = slim.conv2d(net, 16, [1, 1], scope='Conv2d_0a_1x1')
        branch_2 = slim.conv2d(branch_2, 32, [3, 3], scope='Conv2d_0b_3x3')
    with tf.variable_scope('Branch_3'):
        branch_3 = slim.max_pool2d(net, [3, 3], scope='MaxPool_0a_3x3')
        branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1')
    net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])

in inception_v2.py:

end_point = 'Mixed_3b'
with tf.variable_scope(end_point):
    with tf.variable_scope('Branch_0'):
      branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
    with tf.variable_scope('Branch_1'):
      branch_1 = slim.conv2d(
          net, depth(64), [1, 1],
          weights_initializer=trunc_normal(0.09),
          scope='Conv2d_0a_1x1')
      branch_1 = slim.conv2d(branch_1, depth(64), [3, 3],
                             scope='Conv2d_0b_3x3')
    with tf.variable_scope('Branch_2'):
      branch_2 = slim.conv2d(
          net, depth(64), [1, 1],
          weights_initializer=trunc_normal(0.09),
          scope='Conv2d_0a_1x1')
      branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
                             scope='Conv2d_0b_3x3')
      branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],
                             scope='Conv2d_0c_3x3')
    with tf.variable_scope('Branch_3'):
      branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
      branch_3 = slim.conv2d(
          branch_3, depth(32), [1, 1],
          weights_initializer=trunc_normal(0.1),
          scope='Conv2d_0b_1x1')
    net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])

So, here is my question: what is the difference between inception_v1.py and inception_v2.py? Thanks a lot!

Upvotes: 1

Views: 127

Answers (1)

keveman

Reputation: 8487

inception_v1.py implements the original GoogLeNet paper ("Going Deeper with Convolutions"), whereas inception_v2.py implements the Batch Normalization paper. The architectural changes from that paper are precisely what you noticed in the code: in Mixed_3b, for instance, Branch_2 stacks two 3x3 convolutions, the pooling branch uses average pooling instead of max pooling, and the filter counts go through a depth() multiplier.
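As for the Batch Normalization you could not find: in the slim nets it is not written into the layer-by-layer code. It is injected through slim's arg_scope mechanism; the repo provides an inception_v2_arg_scope() that sets normalizer_fn=slim.batch_norm for slim.conv2d, so every convolution in the snippet you quoted picks up a batch-norm layer when the graph is built under that scope. Here is a minimal, self-contained sketch of that mechanism; the decay and epsilon values below are illustrative assumptions, not necessarily what the repo uses:

import tensorflow as tf

slim = tf.contrib.slim

def bn_arg_scope(is_training=True):
  """A minimal stand-in for inception_v2_arg_scope(): it attaches batch
  norm to every slim.conv2d opened under it. The decay/epsilon values
  here are illustrative assumptions only."""
  batch_norm_params = {
      'is_training': is_training,
      'decay': 0.9997,
      'epsilon': 0.001,
  }
  with slim.arg_scope([slim.conv2d],
                      normalizer_fn=slim.batch_norm,
                      normalizer_params=batch_norm_params) as sc:
    return sc

images = tf.placeholder(tf.float32, [None, 224, 224, 3])

with slim.arg_scope(bn_arg_scope()):
  # This call looks exactly like the ones in inception_v2.py, yet the
  # resulting graph contains a BatchNorm op between the convolution and
  # its ReLU, because the arg_scope supplied normalizer_fn.
  net = slim.conv2d(images, 64, [1, 1], scope='Conv2d_0a_1x1')

# beta, moving_mean and moving_variance variables appear even though the
# layer definition above never mentioned batch norm.
for v in tf.global_variables():
  print(v.name)

Building the real model works the same way; the intended usage is roughly:

with slim.arg_scope(inception_v2.inception_v2_arg_scope()):
    logits, end_points = inception_v2.inception_v2(images, num_classes=1001)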

Upvotes: 1
