Reputation: 418
Consider the situation:
token_ids = [17, 189, 981, 1000, 11, 42, 109, 26, 3377, 261]
word_ids = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
where I need to compute the sum of the embeddings of the token_ids, reduced over each word_id, like so:
output = [ (emb[17] + emb[189] + emb[981] + emb[1000]),
           (emb[11] + emb[42] + emb[109]),
           (emb[26] + emb[3377] + emb[261]) ]
where emb is any embedding matrix.
I can write this in Python using a for-loop like so:
def sum_per_word(emb, token_ids, word_ids):
    prev = 0
    sum_all = []
    total = 0  # running sum for the current word (avoids shadowing the builtin `sum`)
    for i in range(len(word_ids)):
        if word_ids[i] == prev:
            total += emb[token_ids[i]]
        else:
            sum_all.append(total)
            total = emb[token_ids[i]]
            prev = word_ids[i]
        if i == len(word_ids) - 1:  # append the last word's running sum
            sum_all.append(total)
    return sum_all
But I want to do it in TensorFlow efficiently (vectorized if possible). Can anybody please give suggestions on how to go about doing this?
Upvotes: 0
Views: 39
Reputation: 6176
You need tf.segment_sum, which computes the sum along segments of a tensor.
import tensorflow as tf

token_ids = tf.constant([17, 189, 981, 1000, 11, 42, 109, 26, 3377, 261], tf.int32)
word_ids = tf.constant([0, 0, 0, 0, 1, 1, 1, 2, 2, 2], tf.int32)

emb_matrix = tf.ones(shape=(4000, 3))
emb = tf.nn.embedding_lookup(emb_matrix, token_ids)  # one embedding row per token, shape (10, 3)
result = tf.segment_sum(emb, word_ids)               # sum the rows that share a word_id

with tf.Session() as sess:
    print(sess.run(result))
[[4. 4. 4.]
[3. 3. 3.]
[3. 3. 3.]]
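For what it's worth, here is a minimal sketch of the same idea under TensorFlow 2.x (assuming eager execution, so no Session; the op lives under tf.math there):

import tensorflow as tf

token_ids = tf.constant([17, 189, 981, 1000, 11, 42, 109, 26, 3377, 261], tf.int32)
word_ids = tf.constant([0, 0, 0, 0, 1, 1, 1, 2, 2, 2], tf.int32)

emb_matrix = tf.ones(shape=(4000, 3))
emb = tf.nn.embedding_lookup(emb_matrix, token_ids)
result = tf.math.segment_sum(emb, word_ids)  # word_ids must be sorted in increasing order
print(result.numpy())

If the word_ids are not sorted, tf.math.unsorted_segment_sum can be used instead; it takes an explicit num_segments argument.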
Upvotes: 1