Luca

Reputation: 841

Weighted sampling from multidimensional tensor

I need to perform a weighted sampling of a multidimensional tensor.

I have a tensor A of shape [X,Y] and a distribution of probabilities B of shape [X]. I need to sample N elements from A according to the distribution B.

B represents the distribution of the subtensors. The sampling inside each subtensor is uniform.

There is some padding in A, so I have to take this into account. The information of what is a padding is contained in a mask.

e.g.

A      = [[1,   2,   3,   X,   X,  X],
          [10,  20,  30,  X,   X,  X],
          [100, 200, 300, 400, 500, 600]]
A_mask = [[T,   T,   T,   F,   F,  F],
          [T,   T,   T,   F,   F,  F],
          [T,   T,   T,   T,   T,  T]]
B = [0.5, 0.4, 0.1]

# a possible output, with N = 10
output = [1, 1, 2, 2, 3, 10, 20, 30, 30, 200]

I'm able to retrieve, for each of the N samples, the index of the subtensor it should be drawn from (counting occurrences then gives how many elements to take from each subtensor):

tf.multinomial(tf.log(probability_distribution[None]), N)[0]

# a possible output of that function, with N = 10, is:
[0, 0, 0, 0, 0, 1, 1, 1, 1, 2]

For each of these indices, I must perform a uniform sampling within that subtensor.

I'm able to compute the number of valid (non-padding) elements of each subtensor, which gives the maxvalue for the uniform draw:

subtensor_sizes = tf.reduce_sum(tf.cast(A_mask, tf.int32), axis=1)

# it would return: [3, 3, 6]

At this point, for each subtensor index returned by the multinomial function I should draw a uniform integer in [0, maxvalue) (or, equivalently, count the occurrences and sample T elements from a subtensor that appears T times in the multinomial output).
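A sketch of the first option, as far as I can tell (sampled_rows and N are placeholder names for the multinomial output above and the number of samples), would scale a [0, 1) uniform draw by each chosen subtensor's size:

sizes_per_sample = tf.gather(subtensor_sizes, sampled_rows)         # valid length of each chosen subtensor
u = tf.random_uniform([N])                                          # N floats in [0, 1)
col_idx = tf.to_int32(tf.floor(u * tf.to_float(sizes_per_sample)))  # uniform column index in [0, size)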

I'm not sure how to proceed from here; how can this be done?

Upvotes: 1

Views: 311

Answers (1)

P-Gn

Reputation: 24581

So you have a tensor A containing sequences of different lengths, and a distribution B giving the probability of picking each sequence. You want to sample values from those sequences accordingly.

You can proceed as follows:

import tensorflow as tf

A = tf.constant(
    [[1,   2,   3,   -1,  -1,  -1],
     [10,  20,  30,  -1,  -1,  -1],
     [100, 200, 300, 400, 500, 600]])
A_mask = tf.constant(
    [[True,   True,   True,   False,   False,  False],
     [True,   True,   True,   False,   False,  False],
     [True,   True,   True,   True,   True,  True]])
B = tf.constant([0.5, 0.4, 0.1])
subtensor_sizes = tf.reduce_sum(tf.cast(A_mask, tf.int32), axis=1)

# draw 10 subtensor (row) indices according to distribution B
output = tf.to_int32(tf.multinomial(tf.log(B[None]), 10)[0])
# get corresponding sample size
output_sizes = tf.gather(subtensor_sizes, output)
# generate a random index in each range
random_idxs = tf.map_fn(
  lambda x: tf.random_uniform((), maxval=x, dtype=tf.int32), output_sizes)
# construct nd-index for tf.gather
random_ndxs = tf.concat([output[:, None], random_idxs[:, None]], axis=-1)
# get sample values
random_samples = tf.gather_nd(A, random_ndxs)
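
If you want to see concrete values, you can evaluate random_samples in a session (this is TF 1.x graph-mode code); the output changes on every run, with most samples coming from the first two rows:

with tf.Session() as sess:
    print(sess.run(random_samples))  # array of 10 values drawn from the valid parts of A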

Upvotes: 1
