Reputation: 1887
I have played around a little with TensorFlow:
import tensorflow as tf
x = tf.Variable([1.0, 2.0])
initializer = tf.global_variables_initializer()
sess = tf.Session()
sess.run(initializer)
x
<tf.Variable 'Variable:0' shape=(2,) dtype=float32_ref>
y = 2 * x
y
<tf.Tensor 'mul:0' shape=(2,) dtype=float32>
z = y + 1
z
<tf.Tensor 'add:0' shape=(2,) dtype=float32>
v = sess.run(x)
print(v)
[ 1.  2.]
v1 = sess.run(z)
print(v1)
[ 3.  5.]
I have three variables, x, y, and z. Is it possible to show all the defined variables with one command from the prompt? If I try what Jonas suggested:
new = tf.trainable_variables()
print(new)
[<tf.Variable 'Variable:0' shape=(2,) dtype=float32_ref>]
Upvotes: 3
Views: 10636
Reputation: 2363
Maybe I'm misunderstanding the question, but what's wrong with this?
print(sess.run([x, y, z]))
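For completeness: sess.run accepts a list of fetches and returns their values in the same order. A TF-free sketch of that behaviour, using the values from the question (x = [1.0, 2.0], y = 2 * x, z = y + 1):

```python
# Plain-Python sketch (no TensorFlow) of what sess.run([x, y, z]) returns:
# each fetch is evaluated and the values come back in the same order.
x = [1.0, 2.0]
y = [2 * v for v in x]      # mirrors y = 2 * x
z = [v + 1 for v in y]      # mirrors z = y + 1
print([x, y, z])            # → [[1.0, 2.0], [2.0, 4.0], [3.0, 5.0]]
```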
Upvotes: 1
Reputation: 5844
tf.trainable_variables() returns all the trainable variables in your graph, which in your case is only x. When you write y = 2 * x, this implicitly defines a constant node mul/x and reads the original variable through a Variable/read node.
If you run the following code:
x = tf.Variable(1)
y = 2 * x
z = y + 1
for v in tf.get_default_graph().as_graph_def().node:
    print(v.name)
You will get the following output:
Variable/initial_value
Variable
Variable/Assign
Variable/read
mul/x
mul
add/y
add
These are all the nodes in your graph. You can use this list to filter out whatever information you need. Specific to your case, I wouldn't call y and z variables.
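As a sketch of that filtering idea (plain Python over the node names listed above, rather than a live graph), you could keep only the nodes that belong to the variable:

```python
# Node names as printed above; in a real session you would read them from
# tf.get_default_graph().as_graph_def().node instead of a hard-coded list.
node_names = [
    "Variable/initial_value", "Variable", "Variable/Assign",
    "Variable/read", "mul/x", "mul", "add/y", "add",
]

# Keep only the nodes belonging to the Variable op.
variable_nodes = [n for n in node_names if n.startswith("Variable")]
print(variable_nodes)
# → ['Variable/initial_value', 'Variable', 'Variable/Assign', 'Variable/read']
```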
Note that this reads everything from a graph, not from a session. If you'd like to get it from a particular session, you'd need to get hold of that session and use sess.graph.
As a last note, the example above used v.name, but each graph node actually has more attributes, such as name, op, input, device, and attr. Refer to the API for more information.
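To illustrate (with a hypothetical stand-in object, not the real NodeDef protobuf), here is roughly what those per-node attributes look like for the mul node above:

```python
from collections import namedtuple

# Hypothetical stand-in for a graph NodeDef; the real one is a protobuf
# message carrying (among others) name, op, input, device and attr fields.
NodeDef = namedtuple("NodeDef", ["name", "op", "input", "device"])

# The mul node multiplies the implicit constant mul/x by the variable's
# read node, so its inputs are those two node names.
node = NodeDef(name="mul", op="Mul",
               input=["mul/x", "Variable/read"], device="")
print(node.op, node.input)  # → Mul ['mul/x', 'Variable/read']
```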
Upvotes: 6