shubham2206

Reputation: 31

Unconditional Generative Adversarial Networks on MNIST dataset

I am training an unconditional GAN on the MNIST dataset using the tfgan library and tfgan estimators. With the generator and discriminator helper functions written using tf.layers, everything works and images are generated. But when I change only those helper functions to use tf.keras, the otherwise identical script no longer works and no images are generated. Can anyone help me out with this? The only difference between the two scripts is the switch from tf.layers to tf.keras in the helper functions. Helper functions using tf.layers:

def _dense(inputs, units, l2_weight):
  return tf.layers.dense(
      inputs, units, None,
      kernel_initializer=tf.keras.initializers.glorot_uniform,
      kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
      bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))

def _batch_norm(inputs, is_training):
  return tf.layers.batch_normalization(
      inputs, momentum=0.999, epsilon=0.001, training=is_training)

def _deconv2d(inputs, filters, kernel_size, stride, l2_weight):
  return tf.layers.conv2d_transpose(
      inputs, filters, [kernel_size, kernel_size], strides=[stride, stride], 
      activation=tf.nn.relu, padding='same',
      kernel_initializer=tf.keras.initializers.glorot_uniform,
      kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
      bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))

def _conv2d(inputs, filters, kernel_size, stride, l2_weight):
  return tf.layers.conv2d(
      inputs, filters, [kernel_size, kernel_size], strides=[stride, stride], 
      activation=None, padding='same',
      kernel_initializer=tf.keras.initializers.glorot_uniform,
      kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
      bias_regularizer=tf.keras.regularizers.l2(l=l2_weight)) 

Helper functions using tf.keras:

from tensorflow.keras.layers import Dense, BatchNormalization, Conv2D, Conv2DTranspose

def _dense(inputs, units, l2_weight):
  return Dense(units,
      kernel_initializer=tf.keras.initializers.glorot_uniform,
      kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
      bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))(inputs)

def _batch_norm(inputs, is_training):
  return BatchNormalization(momentum=0.999, epsilon=0.001)(inputs, training=is_training)


def _deconv2d(inputs, filters, kernel_size, stride, l2_weight):
  return Conv2DTranspose(filters=filters, kernel_size=[kernel_size, kernel_size],
      strides=[stride, stride], activation=tf.keras.activations.relu, padding='same',
      kernel_initializer=tf.keras.initializers.glorot_uniform,
      kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
      bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))(inputs)

def _conv2d(inputs, filters, kernel_size, stride, l2_weight):
  return Conv2D(filters=filters, kernel_size=[kernel_size, kernel_size], strides=[stride, stride], padding='same',
      kernel_initializer=tf.keras.initializers.glorot_uniform,
      kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
      bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))(inputs)

Upvotes: 2

Views: 82

Answers (1)

Aaron S

Reputation: 71

Unfortunately, tfgan currently relies on variable_scopes in order to work properly and Keras layers don't respect variable_scopes. We have general plans for a redesign that will support Keras, but at the moment we unfortunately don't have anything to show for it or an ETA. Code contributions welcome!
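The scope mechanism referred to above can be sketched as follows. tfgan applies the discriminator function twice, once to real and once to generated data, inside the same variable_scope with reuse=True, so both calls must end up sharing one set of weights. tf.layers creates its variables through tf.get_variable, which honors reuse; a Keras layer instance owns its variables directly, so reuse=True has no effect on it and weight sharing only happens by calling the same layer object again, which tfgan's function-based API does not do. A minimal illustration (using tf.compat.v1 so it runs on TF 2.x; the scope name 'Discriminator' and the layer sizes are made up for this sketch, not taken from tfgan):

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # run in TF1-style graph mode

x = tf.placeholder(tf.float32, [None, 4])

# tf.layers creates its variables via tf.get_variable, so the second
# call, made under reuse=True, silently picks up the weights created
# by the first call instead of making new ones.
with tf.variable_scope('Discriminator'):
    logits_real = tf.layers.dense(x, 1, name='fc')
with tf.variable_scope('Discriminator', reuse=True):
    logits_fake = tf.layers.dense(x, 1, name='fc')

disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                              scope='Discriminator')
# Two calls, but still only one kernel and one bias: the weights are shared.
print([v.name for v in disc_vars])

# A Keras layer never consults tf.get_variable, so the reuse flag above
# would be ignored: each new Dense instance always builds fresh,
# unshared weights, and the two discriminator applications diverge.
```

This is why swapping tf.layers for tf.keras inside the helper functions, with everything else unchanged, breaks training even though the graph still builds.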

Upvotes: 1
