Reputation: 471
Using TensorFlow, I am trying to compute the loss for each guess relative to a known set of targets.
given:
targets = [[.1,.2,.3],[.3,.2,.1],[.5,.3,.5],[.5,.5,.5],[.6,.8,.9]]
guesses = [[.5,.5,.5],[.3,.3,.4],[.5,.6,.4]]
I want to return:
[0.0, 0.0499, 0.02]
I can find the values by going through one guess at a time with:
for i in range(len(guesses)):
    tf.reduce_min(tf.reduce_sum(tf.square(targets - guesses[i]), 1))
is there a tensorflow function which will more efficiently calculate the values?
Upvotes: 0
Views: 193
Reputation: 676
Something like:
import numpy as np
import tensorflow as tf

targets = np.array([[.1,.2,.3],[.3,.2,.1],[.5,.3,.5],[.5,.5,.5],[.6,.8,.9]])
guesses = np.array([[.5,.5,.5],[.3,.3,.4],[.5,.6,.4]])

# Reshape targets to (5, 1, 3) so it broadcasts against guesses (3, 3),
# producing all pairwise differences in a single op.
targets = tf.reshape(targets, (5, 1, 3))
goal = tf.reduce_min(tf.reduce_sum(tf.square(targets - guesses), 2), 0)

sess = tf.Session()  # TF 1.x session API
o = sess.run(goal)
print(o)
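The same broadcasting trick works in plain NumPy, which is a quick way to sanity-check the expected result without starting a session:

```python
import numpy as np

targets = np.array([[.1,.2,.3],[.3,.2,.1],[.5,.3,.5],[.5,.5,.5],[.6,.8,.9]])
guesses = np.array([[.5,.5,.5],[.3,.3,.4],[.5,.6,.4]])

# (5, 1, 3) - (3, 3) broadcasts to (5, 3, 3): every target paired with every guess.
diffs = targets[:, None, :] - guesses
# Sum squared differences per pair, then take the minimum over targets.
losses = np.square(diffs).sum(axis=2).min(axis=0)
print(losses)  # approximately [0., 0.05, 0.02]
```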
Upvotes: 1
Reputation: 1637
There are approximate ways to run this computation. The two classical ones are spectral clustering and K-means clustering. They address two different problems: 1) your vectors have large dimension, and 2) you have a large number of targets. They can be combined, and generalized by using neural networks. Both should be expressible in TensorFlow.
In spectral clustering, you find a low-dimensional approximation of the input vectors, then run the full, exhaustive search there.
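As an illustrative sketch of that idea, swapping in a PCA projection (via SVD) as the dimensionality reduction rather than a true spectral embedding:

```python
import numpy as np

def project(vectors, dim):
    # Center the data, then keep the top `dim` principal directions via SVD.
    mean = vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(vectors - mean, full_matrices=False)
    basis = vt[:dim].T          # (original_dim, dim)
    return (vectors - mean) @ basis, mean, basis

rng = np.random.default_rng(0)
targets = rng.random((1000, 50))   # made-up data: many high-dimensional targets
guesses = rng.random((10, 50))

low_t, mean, basis = project(targets, dim=5)
low_g = (guesses - mean) @ basis   # project guesses into the same 5-d space

# Exhaustive search in the 5-d space instead of the 50-d one.
d = np.square(low_t[:, None, :] - low_g).sum(axis=2)
nearest = d.argmin(axis=0)         # approximate nearest target index per guess
```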
In K-means clustering, you find a smaller number of targets (called centroids) which are "representative" of clusters of targets. You run an exhaustive search over the centroids, then another search over only the targets assigned to the winning centroid, ignoring all other targets. So if you have 100 centroids, you reduce the computation by a factor of roughly 100. If you think of the problem as a fully connected bipartite graph, this is tantamount to adding a layer with a tree structure.
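A minimal sketch of the K-means approach in plain NumPy (the function names are my own; a production version would use a library implementation such as scikit-learn's KMeans):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    # Simple Lloyd's algorithm: alternate assignment and centroid update.
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.square(points[:, None, :] - centroids).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

def approx_min_loss(targets, guesses, k=2):
    centroids, labels = kmeans(targets, k)
    out = []
    for g in guesses:
        # Search the centroids first, then only the targets in the winning cluster.
        j = np.square(centroids - g).sum(axis=1).argmin()
        cluster = targets[labels == j]
        out.append(np.square(cluster - g).sum(axis=1).min())
    return np.array(out)

targets = np.array([[.1,.2,.3],[.3,.2,.1],[.5,.3,.5],[.5,.5,.5],[.6,.8,.9]])
guesses = np.array([[.5,.5,.5],[.3,.3,.4],[.5,.6,.4]])
print(approx_min_loss(targets, guesses, k=2))
```

Because each guess is only compared against one cluster, the result is an upper bound on the true minimum loss; with enough well-separated clusters it usually matches it.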
Note: in your problem above, you can replace the loop over guesses with a single tensor operation.
Upvotes: 0