Rahul Vishwakarma

Reputation: 1456

TensorFlow's reduce_mean() gives an approximate value (not an exact value)

I am writing a loss function in which I use tf.reduce_mean(), but it returns an approximate value.

My code is as follows:

import tensorflow as tf

real = [[1.0], [0.3]]
pred = [[0.8], [0.2]]

loss_object2 = tf.keras.losses.mean_squared_error
def loss_function(real, pred):
    loss_ = loss_object2(real, pred)
    print(loss_)
    return tf.reduce_mean(loss_)

loss_function(real, pred)

which gives the following output:

tf.Tensor([0.04 0.01], shape=(2,), dtype=float32)
<tf.Tensor: shape=(), dtype=float32, numpy=0.024999999>

This should simply return 0.025. Why is it returning 0.024999999?

Upvotes: 0

Views: 133

Answers (1)

Eric Postpischil

Reputation: 223389

Clause 3.2 of the IEEE 754-2008 Standard for Floating-Point Arithmetic says “Floating-point arithmetic is a systematic approximation of real arithmetic…”

Floating-point arithmetic is designed to approximate real arithmetic. One should not expect exact results in the absence of a thorough understanding of floating-point formats and arithmetic rules.
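
For example (a quick check outside TensorFlow, assuming NumPy is installed), you can print the exact values that float32 actually stores for .04 and .01 by routing them through Python's Decimal, which shows the exact stored value rather than a rounded display string:

from decimal import Decimal
import numpy as np

# Decimal(float(...)) expands the stored float32 value exactly.
for x in (0.04, 0.01):
    print(x, "->", Decimal(float(np.float32(x))))

Neither line prints exactly .04 or .01.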

In the IEEE 754 binary32 format used for float32, the representable value closest to .04 is 0.039999999105930328369140625 (5368709•2^−27). The representable value closest to .01 is 0.00999999977648258209228515625 (5368709•2^−29). When these are added and divided by two using IEEE 754 rules, the result is 0.024999998509883880615234375 (3355443•2^−27).
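
That result can be reproduced with a short sketch in plain float32 arithmetic (again assuming NumPy; for two elements, tf.reduce_mean should amount to the same add-then-divide):

from decimal import Decimal
import numpy as np

a = np.float32(0.04)            # stores 0.039999999105930328369140625
b = np.float32(0.01)            # stores 0.00999999977648258209228515625
mean = (a + b) / np.float32(2)  # each operation rounded to float32
print(Decimal(float(mean)))     # 0.024999998509883880615234375

That exact value is what TensorFlow then displays, rounded to a few digits, as 0.024999999.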

Upvotes: 2
