I'm trying to get familiar with the TensorFlow framework from this site by playing around with linear regression (LR). The source code for LR can be found here, under the name 03_linear_regression_sol.py.
Generally, the model for LR is defined as Y_predicted = X * w + b, where:

- w and b are parameters (tf.Variable)
- Y_predicted and X are training data (placeholder)

For w and b, the sample code defines them as follows:
```python
w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')
```
I changed these two lines of code slightly, as follows:
```python
w = tf.get_variable('weights', [], dtype=tf.float32)
b = tf.get_variable('bias', [], dtype=tf.float32)
```
With this experiment, I get a different total_loss/n_samples for the two versions. More specifically, the original version gives a deterministic result every time: 1539.0050282141283. The modified version, however, gives nondeterministic results across runs; for example, total_loss/n_samples could be 1531.3039793868859, 1526.3752814714044, etc.
What is the difference between tf.Variable() and tf.get_variable()?
tf.Variable accepts an initial value upon creation (a constant); this explains the deterministic results when you use it.
tf.get_variable is slightly different: it has an initializer argument, which defaults to None and is interpreted like this:
> If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a glorot_uniform_initializer will be used.
Since you didn't pass an initializer, your variables got uniform random initial values, hence the different results on each run.
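For instance, passing an explicit constant initializer makes tf.get_variable behave like the original tf.Variable(0.0, ...) version (a minimal sketch assuming the TF 1.x API):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Default: no initializer is passed, so glorot_uniform_initializer
# kicks in and each run starts from a different random value.
w_random = tf.get_variable('weights_random', [], dtype=tf.float32)

# Deterministic: an explicit constant initializer reproduces the
# behavior of tf.Variable(0.0, name='weights').
w = tf.get_variable('weights', [], dtype=tf.float32,
                    initializer=tf.constant_initializer(0.0))
b = tf.get_variable('bias', [], dtype=tf.float32,
                    initializer=tf.constant_initializer(0.0))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([w_random, w, b]))  # w and b are always 0.0
```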