jared-nelsen

Reputation: 1231

How does tf.losses.absolute_difference() work?

I am working with some simple vectors and want to do some math with them. Specifically, I want to compute the sum of the element-wise absolute differences between two vectors.

Example:

a = [1,2,3]
b = [4,5,6]

evaluates to:

abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2]) = 3 + 3 + 3 = 9
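For reference, the same computation in plain NumPy (NumPy is just for illustration here, not part of my TensorFlow code) would be:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Sum of element-wise absolute differences
print(np.abs(a - b).sum())  # 9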

When I went looking for a way to do this with TensorFlow, I found the tf.losses.absolute_difference() function. However, when I started experimenting with it, I got results that I don't quite understand.

d = [1,1]
e = [2,3]

import tensorflow as tf

t = tf.losses.absolute_difference(e, d)

with tf.Session() as sess:

  print(t.eval())

Here t evaluates to 1.5. I would have expected 3.

d = [1,1]
e = [2,2]

Here t evaluates to 1.0. I would have expected 2.

What function is tf.losses.absolute_difference() actually computing here?

Upvotes: 1

Views: 396

Answers (1)

OverLordGoldDragon

Reputation: 19806

From the source code, tf.losses.absolute_difference computes the weighted absolute difference of its inputs. By default, weights=1.0 and is broadcast to the input shape, so the result is the mean absolute difference:

absolute_difference(a, b) = (weights .* abs(a - b)) / sum(weights)        # .* = dot product
                          = ([1, 1] .* abs([2, 3] - [1, 1])) / (1 + 1)    # lists for simplicity
                          = ([1, 1] .* [1, 2]) / 2
                          = (1 + 2) / 2
                          = 1.5
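So the default behavior averages over the weights. If you want the plain sum from your example instead, you can change the reduction or compute it directly. A minimal sketch, assuming the TF 1.x graph-mode API (tf.losses.Reduction.SUM is available there):

import tensorflow as tf

d = [1., 1.]
e = [2., 3.]

# Default reduction divides the weighted sum by the sum of (nonzero) weights -> mean
mean_loss = tf.losses.absolute_difference(e, d)

# reduction=tf.losses.Reduction.SUM skips that division -> plain sum
sum_loss = tf.losses.absolute_difference(e, d, reduction=tf.losses.Reduction.SUM)

# Equivalent direct computation of the sum of element-wise absolute differences
manual_sum = tf.reduce_sum(tf.abs(tf.subtract(e, d)))

with tf.Session() as sess:

  print(sess.run([mean_loss, sum_loss, manual_sum]))  # [1.5, 3.0, 3.0]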

Upvotes: 1
