Euler_Salter

Reputation: 3561

tensorflow - is this equivalent to mse?

I am very new to TensorFlow. I noticed that TensorFlow provides tf.losses.mean_squared_error, which implements the mean squared error loss function.

Before using it, I played around with TF and I wrote

tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(y, y_))))

However, this gives different results even though it looks like the same formula to me. What is going wrong?

Are the two formulations different? (and what about tf.nn.l2_loss?)

Also, I am building an MLP and using an MSE loss as the objective for tf.train.GradientDescentOptimizer(0.5).minimize(mse). Can this function (mse = tf.losses.mean_squared_error(y, y_)) also be used (in a regression problem) as the "accuracy" on the test set, via sess.run(mse, feed_dict = {x: X_test, y: y_test})? Or what is the difference?
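For concreteness, here is a minimal sketch of what I mean, with a toy linear model standing in for my MLP (the shapes and data are made up purely for illustration):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 3])   # inputs
y = tf.placeholder(tf.float32, [None, 1])   # targets
w = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([1]))
y_ = tf.matmul(x, w) + b                    # model output (stand-in for the MLP)

mse = tf.losses.mean_squared_error(y, y_)   # labels first, then predictions
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(mse)

X_train = np.random.rand(100, 3).astype(np.float32)
y_train = X_train.sum(axis=1, keepdims=True)
X_test = np.random.rand(20, 3).astype(np.float32)
y_test = X_test.sum(axis=1, keepdims=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_step, feed_dict={x: X_train, y: y_train})
    print(sess.run(mse, feed_dict={x: X_test, y: y_test}))  # MSE on the test set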

Upvotes: 0

Views: 2308

Answers (1)

Anis

Reputation: 3094

That is because you sum before taking the mean: tf.reduce_sum collapses all the squared errors into a single number, so the outer tf.reduce_mean has nothing left to average and you get the sum of squared errors rather than their mean. Change tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(y, y_)))) to tf.reduce_mean(tf.square(tf.subtract(y, y_))).

import tensorflow as tf
import numpy as np

# Placeholders so the values can be fed directly via feed_dict.
x = tf.placeholder(tf.float32, [1, 5])
y = tf.placeholder(tf.float32, [1, 5])

t = tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(y, x))))  # sum of squared errors
t2 = tf.losses.mean_squared_error(x, y)                          # built-in MSE
t3 = tf.reduce_mean(tf.square(tf.subtract(y, x)))                # hand-written MSE

sess = tf.Session()
feed = {x: np.ones((1, 5)), y: np.zeros((1, 5))}
sess.run(t, feed)   # 5.0
sess.run(t2, feed)  # 1.0
sess.run(t3, feed)  # 1.0
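As for tf.nn.l2_loss: it computes half the sum of squares of its single argument, output = sum(t ** 2) / 2, with no averaging, so on the same error tensor it would return 2.5 rather than 1. Continuing the session and feed from the snippet above:

t4 = tf.nn.l2_loss(tf.subtract(y, x))  # half the sum of squared errors
sess.run(t4, feed)  # 2.5

And regarding the test-set question: mse is just a node in the graph, so sess.run(mse, feed_dict={x: X_test, y: y_test}) evaluates the same mean squared error on the test data, which is a standard evaluation metric for regression.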

Upvotes: 3
