Reputation: 423
I was wondering if there is an easy way to calculate the dot product of two vectors (i.e. 1-d tensors) and return a scalar value in tensorflow.
Given two vectors X=(x1,...,xn) and Y=(y1,...,yn), the dot product is dot(X,Y) = x1 * y1 + ... + xn * yn
I know that it is possible to achieve this by first broadcasting the vectors X and Y to a 2-d tensor and then using tf.matmul. However, the result is a matrix, and I am after a scalar.
Is there an operator like tf.matmul that is specific to vectors?
Upvotes: 30
Views: 75901
Reputation: 11
Use tf.reduce_sum(tf.multiply(x, y)) if you want the dot product of two vectors.
To be clear, tf.matmul(x, tf.transpose(y)) does not give you the dot product, even if you sum all the elements of the resulting matrix afterward.
I'm only mentioning this because matmul comes up so often in the other answers when it has nothing to do with the question being asked. I'd just leave a comment, but I don't have the rep to do that.
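For intuition, the same identity can be checked in plain NumPy (used here only because its semantics mirror the TensorFlow ops): summing the outer product of two column vectors gives sum(x) * sum(y), not the dot product.

```python
import numpy as np

x = np.array([[1.], [2.], [3.]])  # column vector, shape (3, 1)
y = np.array([[4.], [5.], [6.]])

dot = np.sum(x * y)             # element-wise product, then sum -> true dot product
outer_summed = np.sum(x @ y.T)  # outer product (3, 3), then sum of all entries

print(dot)           # 32.0  (1*4 + 2*5 + 3*6)
print(outer_summed)  # 90.0  (= sum(x) * sum(y) = 6 * 15)
```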
Upvotes: 0
Reputation: 4918
One of the easiest ways to calculate the dot product between two tensors (a vector is a 1-D tensor) is to use tf.tensordot:
a = tf.placeholder(tf.float32, shape=(5))
b = tf.placeholder(tf.float32, shape=(5))
dot_a_b = tf.tensordot(a, b, 1)

with tf.Session() as sess:
    print(dot_a_b.eval(feed_dict={a: [1, 2, 3, 4, 5], b: [6, 7, 8, 9, 10]}))
    # result: 130.0
Upvotes: 33
Reputation: 11
Let us assume that you have two column vectors:
u = tf.constant([[2.], [3.]])
v = tf.constant([[5.], [7.]])
If you want a 1x1 matrix, you can use
tf.einsum('ij,ik->jk', u, v)
If you are interested in a scalar, you can use
tf.einsum('ij,ik->', u, v)
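The two einsum signatures can be sanity-checked with NumPy, whose np.einsum follows the same subscript conventions (values below assume the u and v from this answer):

```python
import numpy as np

u = np.array([[2.], [3.]])  # column vectors, shape (2, 1)
v = np.array([[5.], [7.]])

as_matrix = np.einsum('ij,ik->jk', u, v)  # keeps the singleton axes -> shape (1, 1)
as_scalar = np.einsum('ij,ik->', u, v)    # sums over every axis -> 0-d result

print(as_matrix)  # [[31.]]  (2*5 + 3*7)
print(as_scalar)  # 31.0
```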
Upvotes: 1
Reputation: 21
ab = tf.reduce_sum(a*b)
Take a simple example as follows:
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([2, 3, 4])
print(a.get_shape())
print(b.get_shape())

c = a * b
ab = tf.reduce_sum(c)

with tf.Session() as sess:
    print(c.eval())
    print(ab.eval())

# output
# (3,)
# (3,)
# [2 6 12]
# 20
Upvotes: 2
Reputation: 3358
In addition to tf.reduce_sum(tf.multiply(x, y)), you can also do tf.matmul(x, tf.reshape(y, [-1, 1])).
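One caveat worth noting: assuming x is a 1×n row matrix, the matmul-with-reshape variant returns a 1×1 matrix rather than a true scalar. A NumPy sketch of the same shapes:

```python
import numpy as np

x = np.array([[1., 2., 3.]])   # row matrix, shape (1, 3)
y = np.array([4., 5., 6.])     # plain vector, shape (3,)

result = x @ y.reshape(-1, 1)  # reshape y into a column, then matrix-multiply

print(result)        # [[32.]]
print(result.shape)  # (1, 1) -- a 1x1 matrix, not a scalar
```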
Upvotes: 27
Reputation: 5201
With the newer docs, you may be able to just set the transpose option to true for either the first or the second argument of the matrix product:
tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)
leading to:
tf.matmul(a, b, transpose_a=True, transpose_b=False)
tf.matmul(a, b, transpose_a=False, transpose_b=True)
Upvotes: 1
Reputation: 41
import tensorflow as tf

x = tf.Variable([1, -2, 3], name='x')
y = tf.Variable([-1, 2, -3], name='y')
dot_product = tf.reduce_sum(tf.multiply(x, y))

sess = tf.InteractiveSession()
init_op = tf.global_variables_initializer()
sess.run(init_op)
dot_product.eval()
Out[46]: -14
Here, x and y are both vectors. We can take the element-wise product and then use tf.reduce_sum to sum the elements of the resulting vector. This solution is easy to read and does not require any reshaping.
Interestingly, there does not seem to be a built-in dot product operator in the docs.
Note that you can easily check intermediate steps:
In [48]: tf.multiply(x, y).eval()
Out[48]: array([-1, -4, -9], dtype=int32)
Upvotes: 4
Reputation: 1033
You can use tf.matmul together with tf.transpose:
tf.matmul(x, tf.transpose(y))
or
tf.matmul(tf.transpose(x),y)
depending on the dimensions of x and y
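Which transpose you need depends on the shapes: for n×1 column matrices, transpose the first operand; transposing the second instead gives the n×n outer product. A NumPy sketch of the same distinction (example values are my own):

```python
import numpy as np

x = np.array([[2.], [3.]])  # column vectors, shape (2, 1)
y = np.array([[5.], [7.]])

inner = np.matmul(x.T, y)  # (1, 2) @ (2, 1) -> 1x1 inner product
outer = np.matmul(x, y.T)  # (2, 1) @ (1, 2) -> 2x2 outer product, NOT a dot product

print(inner)        # [[31.]]  (2*5 + 3*7)
print(outer.shape)  # (2, 2)
```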
Upvotes: 20
Reputation: 20950
In newer versions (since 0.12, I believe), you should be able to do
tf.einsum('i,i->', x, y)
(Before that, the reduction to a scalar seemed not to be allowed/possible.)
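The 'i,i->' signature multiplies matching entries and sums over the single index, producing a true scalar; NumPy's einsum accepts the same signature, so the behavior can be checked there:

```python
import numpy as np

x = np.array([1., -2., 3.])
y = np.array([-1., 2., -3.])

# 'i,i->' contracts the shared index i all the way down to a 0-d scalar
print(np.einsum('i,i->', x, y))  # -14.0  (-1 - 4 - 9)
```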
Upvotes: 3