user7243382

Reputation:

TensorFlow: Regarding TensorFlow functions

I'm new to tensorflow. I have the following problem:

Input: a list of floats (or a dynamic array; in Python, a list is the datatype to be used). Output: a 2-D array of size len(input) × len(input).

Example 1:

Input:

[1.0, 2.0, 3.0]

Output:

[[0.09003057, 0.24472847, 0.66524096], 
 [0.26894142, 0.73105858, 0.0       ], 
 [1.0,        0.0,        0.0       ]]

I tried to create the function using a while loop, calculating each row independently and concatenating the rows, but my instructor asked me to explore other ways.

Can you suggest an idea on how to approach this problem?

Upvotes: 5

Views: 456

Answers (2)

saetch_g

Reputation: 1505

This is probably a bit late for your class, but hopefully it will help someone.

If your goal is simply to output a len(input) x len(input) array, you can matrix-multiply a 1 x len(input) tensor with your input array after expanding the input's dimensions to len(input) x 1:

import tensorflow as tf

input = [1.0, 2.0, 3.0]  # example input from the question (fed to feed_dict below)

input_ = tf.placeholder(tf.float32, [len(input)])
input_shape = input_.get_shape().as_list()
tfvar = tf.Variable(tf.random_normal([1, input_shape[0]], mean=0.0,
                                     stddev=.01, dtype=tf.float32))

def function(input_):
    x = tf.expand_dims(input_, axis=1)  # shape: len(input) x 1
    return tf.matmul(x, tfvar)          # (len(input) x 1) @ (1 x len(input)) -> square matrix

This function should generalize to any 1D input_ tensor and produce a square len(input_)xlen(input_) tensor.
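
For example, evaluating the (untrained) output with the placeholder defined above might look like this (a minimal usage sketch; the variable name out is just for illustration):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(function(input_), feed_dict={input_: [1.0, 2.0, 3.0]})
    print(out.shape)  # (3, 3): one row per input element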

If your goal is to train a tensorflow variable to produce the provided output exactly, you can then train tfvar with a loss function and optimizer:

desired_output = tf.constant([[0.09003057, 0.24472847, 0.66524096], 
                              [0.26894142, 0.73105858, 0.0       ], 
                              [1.0,        0.0,        0.0       ]],
                              dtype=tf.float32)

actual_output = function(input_)
loss = tf.reduce_mean(tf.square(actual_output-desired_output))
optimizer = tf.train.AdamOptimizer().minimize(loss)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # a single optimization step; loop this call to actually fit the target
    cost, opt = sess.run([loss, optimizer], feed_dict={input_: input})

Note: if you want a more robust training setup, add a bias, a non-linearity, and more layers; a sketch follows below.
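
For instance, a hedged sketch of adding a bias and a ReLU non-linearity (the names bias and function_deep are illustrative, not from the original answer):

bias = tf.Variable(tf.zeros([input_shape[0], input_shape[0]]))

def function_deep(input_):
    x = tf.expand_dims(input_, axis=1)             # len(input) x 1
    return tf.nn.relu(tf.matmul(x, tfvar) + bias)  # affine layer plus non-linearity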

Upvotes: 0

Alexey Romanov

Reputation: 256

You can achieve this with the following approach:

  1. Tile the input array to create a square matrix in which every row repeats the input data
  2. Create a mask that consists of ones in the upper-left triangle
  3. Apply softmax using the mask. Note that we cannot use tf.nn.softmax here, because it would assign small non-zero probabilities to the masked positions as well

Here is TensorFlow (v0.12.1) code that does this:

import tensorflow as tf

def create_softmax(x):
    x_len = int(x.get_shape()[0])

    # create a tiled array
    # [1, 2, 3]
    # =>
    # [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
    x_tiled = tf.tile(tf.expand_dims(x, 0), [x_len, 1])

    # build the mask for element-wise multiplication
    mask = tf.ones_like(x_tiled)            # array of the same shape, filled with 1s
    mask = tf.matrix_band_part(mask, 0, -1) # zero everything except the upper triangular part
    mask = tf.reverse(mask, [False, True])  # reverse along the column (second) dimension

    # compute the masked softmax
    exp = tf.exp(x_tiled) * mask
    sum_exp = tf.reshape(tf.reduce_sum(exp, reduction_indices=1), (-1, 1))

    x_softmax = exp / sum_exp

    return x_softmax
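
As a quick sanity check against the question's example (a hedged usage sketch, using the same session API as the code above):

x = tf.constant([1.0, 2.0, 3.0])
with tf.Session() as sess:
    print(sess.run(create_softmax(x)))
# approximately:
# [[0.09003057 0.24472847 0.66524096]
#  [0.26894142 0.73105858 0.        ]
#  [1.         0.         0.        ]]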

Upvotes: 4
