Aditya Vartak

Reputation: 380

How to convert the following TF 1.x code to TF 2.0 (with minimal changes to existing code)

I am migrating code from TensorFlow 1.x to TensorFlow 2.0. I used the conversion script provided with TensorFlow 2.0 and it worked well. However, the script cannot convert code that uses the tf.contrib module. I want to make the following code TensorFlow 2.0 compatible.


import tensorflow as tf


def dropout(input_tensor, dropout_prob):
  """Perform dropout.

  Args:
    input_tensor: float Tensor.
    dropout_prob: Python float. The probability of dropping out a value (NOT of
      *keeping* a dimension as in `tf.nn.dropout`).

  Returns:
    A version of `input_tensor` with dropout applied.
  """
  if dropout_prob is None or dropout_prob == 0.0:
    return input_tensor

  output = tf.nn.dropout(input_tensor, 1 - (1.0 - dropout_prob))  # after conversion: rate = 1 - keep_prob, i.e. dropout_prob
  return output

def layer_norm(input_tensor, name=None):
  """Run layer normalization on the last dimension of the tensor."""
  return tf.contrib.layers.layer_norm(
      inputs=input_tensor, begin_norm_axis=-1, begin_params_axis=-1, scope=name)


def layer_norm_and_dropout(input_tensor, dropout_prob, name=None):
  """Runs layer normalization followed by dropout."""
  output_tensor = layer_norm(input_tensor, name)
  output_tensor = dropout(output_tensor, dropout_prob)
  return output_tensor


Error I encounter:

1) Using member tf.contrib.layers.layer_norm in deprecated module tf.contrib

My search on the internet led me to this GitHub issue.

However, it is still not clear how to migrate.

Thanks in advance.

Upvotes: 2

Views: 974

Answers (1)

Zhao Chen

Reputation: 88

For layer normalization, migrating to Keras layers works for me and gives me similar fine-tuned model performance.

import tensorflow as tf


def dropout(input_tensor, dropout_prob):
    """Perform dropout.

    Args:
      input_tensor: float Tensor.
      dropout_prob: Python float. The probability of dropping out a value (NOT of
        *keeping* a dimension as in `tf.nn.dropout`).

    Returns:
      A version of `input_tensor` with dropout applied.
    """
    if dropout_prob is None or dropout_prob == 0.0:
        return input_tensor

    output = tf.nn.dropout(input_tensor, dropout_prob)  # TF 2.x (tested on 2.10): second arg is the drop rate, not keep_prob
    return output


def layer_norm(input_tensor, name=None):
    """Run layer normalization on the last dimension of the tensor."""
    input_layer_norm = tf.keras.layers.LayerNormalization(
        axis=-1, name=name, epsilon=1e-12, dtype=tf.float32)
    return input_layer_norm(input_tensor)


def layer_norm_and_dropout(input_tensor, dropout_prob, name=None):
    """Runs layer normalization followed by dropout."""
    output_tensor = layer_norm(input_tensor, name)
    # output_tensor = tf.keras.layers.Dropout(rate=dropout_prob)(output_tensor)
    output_tensor = dropout(output_tensor, dropout_prob)
    return output_tensor
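
For a quick sanity check, here is a minimal usage sketch (assuming TF 2.x; the shape, rate and layer name are arbitrary):

x = tf.random.normal([2, 4, 8])  # (batch, seq_len, hidden)
y = layer_norm_and_dropout(x, dropout_prob=0.1, name="ln")
print(y.shape)  # (2, 4, 8) -- both ops preserve the input shape

Note that layer_norm above builds a fresh LayerNormalization layer (with its own gamma/beta weights) on every call, so it should be called once per tensor during model construction rather than repeatedly on the same inputs.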

The caveat is that in TF 2.x, tf.nn.dropout takes the dropout probability rather than the keep probability as in TF 1.x; otherwise you would be dropping out 90% of the layer outputs for BERT's default 10% drop rate. For further details, you may refer to the official transformer encoder: https://github.com/tensorflow/models/blob/master/official/nlp/modeling/networks/transformer_encoder.py
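
To make the inversion concrete, here is a small sketch (assuming TF 2.x) of what each argument convention does at BERT's default 10% drop rate:

x = tf.ones([1, 100])
dropout_prob = 0.1  # BERT default drop rate

correct = tf.nn.dropout(x, rate=dropout_prob)         # zeroes ~10 of 100 values
inverted = tf.nn.dropout(x, rate=1.0 - dropout_prob)  # zeroes ~90 of 100 values

print(int(tf.math.count_nonzero(correct)))   # roughly 90 survivors
print(int(tf.math.count_nonzero(inverted)))  # roughly 10 survivors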

Upvotes: 2
