Reputation: 71
I am implementing a custom loss function in Keras. The model is an autoencoder. The first layer is an Embedding layer, which embeds an input of size (batch_size, sentence_length) into (batch_size, sentence_length, embedding_dimension). The model then compresses the embedding into a vector of a certain dimension, and finally must reconstruct the embedding (batch_size, sentence_length, embedding_dimension).
But the embedding layer is trainable, and the loss must use the weights of the embedding layer (I have to sum over all word embeddings of my vocabulary).
For example, suppose I want to train on the toy example "the cat": the sentence_length is 2, the embedding_dimension is 10 and the vocabulary size is 50, so the embedding matrix has shape (50, 10). The Embedding layer's output X has shape (1, 2, 10). It then passes through the model and the output X_hat also has shape (1, 2, 10). The model must be trained to maximize the probability that the vector X_hat[0] representing 'the' is the most similar to the vector X[0] representing 'the' in the Embedding layer, and the same for 'cat'. But the loss is such that I have to compute the cosine similarity between X and X_hat, normalized by the sum of the cosine similarities between X_hat and every one of the 50 embeddings in the embedding matrix (since the vocabulary size is 50), i.e. the rows of the embedding layer's weight matrix.
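In symbols (my own notation for the description above, so take the exact form with a grain of salt), the quantity I want to maximize for each word position t is roughly:

p(w_t) = \frac{\cos(\hat{X}_t, X_t)}{\sum_{v=1}^{50} \cos(\hat{X}_t, E_v)}

where E_v is row v of the (50, 10) embedding matrix, i.e. the embedding of vocabulary word v.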
But how can I access the weights of the embedding layer at each iteration of the training process?
Thank you!
Upvotes: 6
Views: 2038
Reputation: 71
It seems a bit crazy, but it seems to work: instead of creating a custom loss function that I would pass to model.compile, the network computes the loss (Eq. 1 from arxiv.org/pdf/1708.04729.pdf) in a function that I call with a Lambda layer:
loss = Lambda(lambda x: similarity(x[0], x[1], x[2]))([X_hat, X, embedding_matrix])
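The similarity function itself is not shown above; below is a minimal sketch of what such a function could look like with the Keras backend. This is my own sketch, not the exact function from the answer: it assumes embedding_matrix is the Embedding layer's weight tensor (e.g. embedding_layer.embeddings), and it uses a softmax-style (exponential) normalization over the vocabulary cosine similarities to keep the log well behaved; the plain cosine normalization described in the question is the same idea.

from keras import backend as K

def similarity(x_hat, x, embeddings):
    # x_hat, x: (batch, sentence_length, embedding_dimension)
    # embeddings: the Embedding layer's weight tensor, shape (vocab_size, embedding_dimension),
    # e.g. embedding_layer.embeddings, so it follows the weights during training
    x_hat_n = K.l2_normalize(x_hat, axis=-1)
    x_n = K.l2_normalize(x, axis=-1)
    emb_n = K.l2_normalize(embeddings, axis=-1)
    # cosine similarity with the true embedding at each position: (batch, sentence_length)
    cos_true = K.sum(x_hat_n * x_n, axis=-1)
    # cosine similarity with every vocabulary embedding: (batch, sentence_length, vocab_size)
    cos_all = K.dot(x_hat_n, K.transpose(emb_n))
    # softmax-style normalization over the vocabulary, then negative log probability,
    # summed over the sentence -> one scalar per sample, with 0 as the optimum
    log_prob = cos_true - K.logsumexp(cos_all, axis=-1)
    return -K.sum(log_prob, axis=-1)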
The network then has two outputs, X_hat and loss, but I give X_hat a loss weight of 0 and loss all of the weight:
model = Model(input_sequence, [X_hat, loss])
model.compile(loss=mean_squared_error,
              optimizer=optimizer,
              loss_weights=[0., 1.])
When I train the model:
for i in range(epochs):
    for j in range(num_data):
        input_embedding = model.layers[1].get_weights()[0][[data[j:j+1]]]
        y = [input_embedding, 0]  # the embedding of the input
        model.fit(data[j:j+1], y, batch_size=1, ...)
That way, the model is trained to push loss toward 0, and when I want the trained model's prediction I use the first output, which is the reconstruction X_hat.
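For example (a minimal usage sketch, with data as in the training loop above):
X_hat_pred, _ = model.predict(data[j:j+1], batch_size=1)  # keep the reconstruction, ignore the loss output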
Upvotes: 1