Lemon

Reputation: 1394

language modeling in tensorflow - how to tie embedding and softmax weights

As suggested by recent language modeling papers, I want to use weight tying in my RNN language model. That is, I want to share the weights between the embedding and softmax layer. However, I am not sure how this can be done in TensorFlow.

My network receives inputs of shape (batch_size, sequence_length). The embedding matrix has shape (vocab_size, embedding_size) and is created as follows (I am using pre-trained word2vec embeddings):

        with tf.variable_scope('embedding'):
            self.embedding_matrix = tf.Variable(tf.constant(0.0, shape=[self.vocab_size, self.embd_size]), trainable=False, name='embedding')
            self.embed_placeholder = tf.placeholder(tf.float32, [self.vocab_size, self.embd_size])
            self.embed_init = self.embedding_matrix.assign(self.embed_placeholder)
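
The pre-trained word2vec vectors are fed in once after variable initialization through the placeholder and assign op above, roughly like this (a sketch; `word2vec_vectors` and `model` are just illustrative names for the loaded NumPy matrix and the object holding the graph):

        # Sketch (assumed usage): feed the pre-trained word2vec matrix once
        # after variable initialization. `word2vec_vectors` is a NumPy array
        # of shape (vocab_size, embedding_size).
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(model.embed_init,
                     feed_dict={model.embed_placeholder: word2vec_vectors})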

The logits are computed as follows:

            output, self.final_state = tf.nn.dynamic_rnn(
                cell,
                inputs=self.inputs,
                initial_state=self.init_state)

            self.output_flat = tf.reshape(output, [-1, cell.output_size])
            softmax_w = tf.get_variable("softmax_w", [self.n_hidden, self.vocab_size], dtype=tf.float32)

            softmax_b = tf.get_variable("softmax_b", [self.vocab_size], dtype=tf.float32)
            logits = tf.nn.xw_plus_b(self.output_flat, softmax_w, softmax_b)
            # Reshape logits to be a 3-D tensor
            self.logits = tf.reshape(logits, [self.batch_size, self.seq_length, self.vocab_size])
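
Here, self.inputs is the embedded version of the integer inputs (dynamic_rnn expects a 3-D tensor); roughly, with illustrative names:

            # Sketch (names illustrative): the (batch_size, sequence_length)
            # integer IDs are looked up in the embedding matrix, giving a
            # (batch_size, sequence_length, embedding_size) tensor for the RNN.
            self.inputs = tf.nn.embedding_lookup(self.embedding_matrix, self.input_ids)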

My questions are:

  1. The matrix that has to be replaced with the embedding weights is softmax_w, correct?
  2. softmax_w has shape (n_hidden, vocab_size). How does that fit the size of the embedding matrix? Or do I have to ensure that n_hidden = embedding_size?
  3. How can I reuse the embedding weights in TensorFlow? I know that I have to use reuse=True in the variable_scope.

Upvotes: 1

Views: 1160

Answers (1)

Lemon

Reputation: 1394

I have figured out how to implement weight sharing correctly:

        with tf.variable_scope('embedding'):
            self.embedding_matrix = tf.get_variable("embedding", shape=[self.vocab_size, self.n_hidden], dtype=tf.float32, initializer=self.initializer)

            [...]

            # tie input embedding weights to output embedding weights
            with tf.variable_scope("embedding", reuse=True):
                self.softmax_w = tf.transpose(tf.get_variable('embedding'))

            # Set the output bias vector to zero, as outlined in the paper
            softmax_b = tf.zeros(shape=[self.vocab_size], dtype=tf.float32, name="softmax_b")
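
The tied matrix then simply replaces the old softmax_w in the logits computation from the question (this works because n_hidden is equal to the embedding size, which is why the embedding is created with shape [vocab_size, n_hidden] above). For example:

            # With tied weights, the output projection is the transposed
            # embedding; the bias is the all-zero vector defined above.
            logits = tf.nn.xw_plus_b(self.output_flat, self.softmax_w, softmax_b)
            self.logits = tf.reshape(logits, [self.batch_size, self.seq_length, self.vocab_size])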

Upvotes: 1
