Reputation: 587
For tf.random_uniform and similar random ops I understand that "the random ops are stateful, and create new random values each time they are evaluated," and therefore I get different values when calling session.run() twice:
import tensorflow as tf

norm = tf.random_normal([2, 3])

# Each time we run these ops, different results are generated
sess = tf.Session()
print(sess.run(norm))
print(sess.run(norm))
My question is: if my graph refers to a random op twice, is it guaranteed that the two "calls" will see the same value inside a single run()? E.g.
rnd_source = tf.random_normal(...)
x1 = rnd_source + 0.
x2 = rnd_source * 1.
sess.run([x1, x2])
If it is not guaranteed that x1 and x2 will have the same value, is there an easy way to store the random value in a tensor (not a tf.Variable) to ensure that the random op is evaluated only once? If it is guaranteed that x1 will have the same value as x2, is there a way to force re-evaluation of the random op inside a single run to get new random values?
Upvotes: 2
Views: 564
Reputation: 5373
Yes, it is guaranteed: within a single sess.run() call, each op in the graph is evaluated at most once, so every tensor that depends on the same random op sees the same value. You have already done that without realizing it; just keep a reference to the random op's output and build everything else from it:
rnd_source = tf.random_normal((1,))
m = rnd_source  # just another Python name for the same graph node
Now, at every run, m evaluates to a single draw from the normal distribution, and every other node you build on top of it sees that same draw:
In [27]: for i in range(10):
...: a, b, c, d, e = sess.run( [m*1, m+0, m+1, m+2, m+3 ] )
...: print(a, b, c, d, e)
[-2.1935725] [-2.1935725] [-1.1935725] [-0.19357252] [0.8064275]
[-0.5607107] [-0.5607107] [0.43928927] [1.4392893] [2.4392893]
[0.17031813] [0.17031813] [1.1703181] [2.1703181] [3.1703181]
[0.05647242] [0.05647242] [1.0564724] [2.0564723] [3.0564723]
[-0.2119268] [-0.2119268] [0.7880732] [1.7880732] [2.7880733]
[-0.07041783] [-0.07041783] [0.9295822] [1.9295821] [2.929582]
[-0.9486307] [-0.9486307] [0.05136931] [1.0513693] [2.0513692]
[1.3629643] [1.3629643] [2.3629642] [3.3629642] [4.362964]
[1.6997207] [1.6997207] [2.6997209] [3.6997209] [4.699721]
[1.480969] [1.480969] [2.480969] [3.480969] [4.480969]
Now, every time you go through your training loop you will get a new value from the distribution, but the rest of the graph, built from m, will see one consistent value within each run; a sketch of such a loop follows.
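For instance, a minimal sketch of such a loop, continuing the session above (the variable w, the loss, and the optimizer here are invented for illustration; they are not part of the original question):

# Hypothetical toy training loop built on top of m.
w = tf.Variable(0.0)
loss = tf.reduce_mean((w - m) ** 2)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess.run(tf.global_variables_initializer())
for _ in range(5):
    # train_op and loss are fetched in the same run, so the gradient and
    # the reported loss see the same single draw of m; the next iteration
    # triggers a fresh draw.
    _, current_loss = sess.run([train_op, loss])
    print(current_loss)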
To clarify further, let's add new nodes:
In [28]: n = m+0
In [29]: o = m+1
Now,
In [31]: for i in range(10):
...: a, b = sess.run([n, o])
...: print(a, b)
...:
[0.32054538] [1.3205454]
[-0.6587958] [0.34120423]
[-0.8067821] [0.19321787]
[-0.29313084] [0.7068691]
[-1.1867933] [-0.18679333]
[1.4355402] [2.4355402]
[0.45581594] [1.4558159]
[-1.9583491] [-0.9583491]
[-1.2682568] [-0.26825678]
[1.534502] [2.534502]
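As for the second part of the question, forcing new random values inside a single run: one approach, sketched below with the session above, is simply to create a separate random op for each independent draw you need, since every call to tf.random_normal adds its own stateful node to the graph:

r1 = tf.random_normal((1,))
r2 = tf.random_normal((1,))  # a distinct op, not an alias of r1

# Even within a single run, r1 and r2 are evaluated as separate ops,
# so they (almost surely) yield different values.
a, b = sess.run([r1, r2])
print(a, b)

In contrast, r1 + 0 and r1 * 1 fetched together would still share the single draw from r1, as demonstrated above.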
Upvotes: 1