andra

Reputation: 23

How to use my own sentence embeddings in Keras?

I am new to Keras, and I created my own tf-idf sentence embeddings with shape (no_sentences, embedding_dim). I am trying to feed this matrix into an LSTM layer. My network looks something like this:

q1_tfidf = Input(name='q1_tfidf', shape=(max_sent, 300))
q2_tfidf = Input(name='q2_tfidf', shape=(max_sent, 300))

q1_tfidf = LSTM(100)(q1_tfidf)
q2_tfidf = LSTM(100)(q2_tfidf)
distance2 = Lambda(preprocessing.exponent_neg_manhattan_distance, output_shape=preprocessing.get_shape)(
        [q1_tfidf, q2_tfidf])

I'm struggling with how the matrix should be shaped. I am getting this error:

ValueError: Error when checking input: expected q1_tfidf to have 3 dimensions, but got array with shape (384348, 300)

I already checked this post: Sentence Embedding Keras, but I still can't figure it out. It seems like I'm missing something obvious.

Any idea how to do this?

Upvotes: 2

Views: 1147

Answers (1)

ixeption

Reputation: 2050

OK, as far as I understand, you want to predict the difference between two sentences. What about reusing the LSTM layer (the language model should be the same), learning a single sentence embedding, and using it twice:

from keras.layers import Input, LSTM, concatenate
from keras.models import Model

q1_tfidf = Input(name='q1_tfidf', shape=(max_sent, 300))
q2_tfidf = Input(name='q2_tfidf', shape=(max_sent, 300))

# one shared LSTM, so both questions are encoded by the same language model
lstm = LSTM(100)

lstm_out_q1 = lstm(q1_tfidf)
lstm_out_q2 = lstm(q2_tfidf)

predict = concatenate([lstm_out_q1, lstm_out_q2])
model = Model(inputs=[q1_tfidf, q2_tfidf], outputs=predict)
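
To sanity-check the wiring, here is a rough usage sketch; the toy arrays and the sample count are assumptions, the point is only that each input has to be 3-D with shape (samples, max_sent, 300), which is what your ValueError is complaining about:

import numpy as np

# Toy data only, to illustrate the expected 3-D input shape (samples, max_sent, 300);
# your real tf-idf matrix has to be grouped into max_sent rows per question.
n_samples = 32
q1_data = np.random.rand(n_samples, max_sent, 300)
q2_data = np.random.rand(n_samples, max_sent, 300)

# each output row is the concatenation of the two 100-dim LSTM encodings
pair_embeddings = model.predict([q1_data, q2_data])  # shape: (32, 200)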

You could also plug your custom distance in as an additional Lambda layer instead of the concatenation, but then the model outputs the distance itself, so the output shape has to change accordingly.
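
Here is a minimal sketch of that variant. Your preprocessing.exponent_neg_manhattan_distance and preprocessing.get_shape are not shown in the question, so the function below is an assumption based on the usual siamese-LSTM formulation (exp of the negative L1 distance between the two encodings):

from keras import backend as K
from keras.layers import Input, LSTM, Lambda
from keras.models import Model

def exponent_neg_manhattan_distance(tensors):
    # similarity in (0, 1]: exp(-L1 distance) between the two sentence encodings
    left, right = tensors
    return K.exp(-K.sum(K.abs(left - right), axis=1, keepdims=True))

q1_tfidf = Input(name='q1_tfidf', shape=(max_sent, 300))
q2_tfidf = Input(name='q2_tfidf', shape=(max_sent, 300))

lstm = LSTM(100)  # shared encoder, as above
lstm_out_q1 = lstm(q1_tfidf)
lstm_out_q2 = lstm(q2_tfidf)

# the Lambda layer replaces the concatenation: the model outputs a similarity score directly
distance = Lambda(exponent_neg_manhattan_distance,
                  output_shape=lambda shapes: (shapes[0][0], 1))([lstm_out_q1, lstm_out_q2])

model = Model(inputs=[q1_tfidf, q2_tfidf], outputs=distance)
model.compile(optimizer='adam', loss='mean_squared_error')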

Upvotes: 1
