Reputation: 440
In the TensorFlow MNIST beginners tutorial, the relevant code excerpts are:
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
sess.run(init)
#-----training loop starts here-----
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Is it possible to access/retrieve the values of the cross_entropy error, the weights, and the biases from inside the loop? I want to plot the error, and possibly a histogram of the weights.
Thanks!
Upvotes: 1
Views: 1958
Reputation: 714
As someone has already said, TensorBoard is the tool for that purpose.
Here is how to use it.
First, let's define a function that logs the min, max, mean, and standard deviation of a given tensor.
def variable_summaries(var, name):
    # Attach mean/stddev/max/min scalar summaries and a histogram to a tensor.
    with tf.name_scope("summaries"):
        mean = tf.reduce_mean(var)
        tf.scalar_summary('mean/' + name, mean)
        with tf.name_scope('stddev'):
            # reduce_mean (not reduce_sum) gives the actual standard deviation
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.scalar_summary('stddev/' + name, stddev)
        tf.scalar_summary('max/' + name, tf.reduce_max(var))
        tf.scalar_summary('min/' + name, tf.reduce_min(var))
        tf.histogram_summary(name, var)
Then, create the summary operations after you build the graph, as shown below. This code saves the weights and biases of the first layer, along with the cross-entropy, under the "mnist_tf_log" directory.
variable_summaries(W_fc1, "W_fc1")
variable_summaries(b_fc1, "b_fc1")
tf.scalar_summary("cross_entropy", cross_entropy)
summary_op = tf.merge_all_summaries()
summary_writer = tf.train.SummaryWriter("mnist_tf_log", graph_def=sess.graph)
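The snippet above assumes W_fc1 and b_fc1 are the weight and bias Variables of your first layer; in the beginners tutorial model they correspond to the single W and b. A minimal sketch of that setup, using the names assumed here:
# Assumed setup from the beginners tutorial; W_fc1/b_fc1 stand in for the tutorial's W and b.
x = tf.placeholder(tf.float32, [None, 784])
W_fc1 = tf.Variable(tf.zeros([784, 10]))  # weights of the single softmax layer
b_fc1 = tf.Variable(tf.zeros([10]))       # biases of the single softmax layer
y = tf.nn.softmax(tf.matmul(x, W_fc1) + b_fc1)
y_ = tf.placeholder(tf.float32, [None, 10])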
Now you're all set. You can log the data by fetching summary_op in sess.run and passing the result to summary_writer.
Here is an example for logging every 10 training steps.
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    if i % 10 == 0:
        # Run the training op and the summary op together, then write the summaries.
        _, summary_str = sess.run([train_step, summary_op], feed_dict={x: batch_xs, y_: batch_ys})
        summary_writer.add_summary(summary_str, i)
        summary_writer.flush()
    else:
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
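Fetching summary_op in the same sess.run call as train_step reuses the same batch and the same graph execution, so the logging adds very little overhead.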
Execute TensorBoard after you run the code.
python /path/to/tensorboard/tensorboard.py --logdir=mnist_tf_log
Then you can see the result by opening http://localhost:6006 with your web browser.
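If you just want the raw numbers inside the loop (for example, to plot the error curve yourself with matplotlib instead of TensorBoard), you can also add the tensors to the fetch list of sess.run. A minimal sketch, assuming the same W_fc1/b_fc1 Variables as above:
# Fetch the loss and variable values alongside the training op.
errors = []
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, err, w_val, b_val = sess.run(
        [train_step, cross_entropy, W_fc1, b_fc1],
        feed_dict={x: batch_xs, y_: batch_ys})
    errors.append(err)  # scalar loss for this batch
    # w_val and b_val are NumPy arrays you can histogram directly.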
Upvotes: 4
Reputation: 5771
TensorBoard is made exactly for that: https://www.tensorflow.org/versions/r0.7/how_tos/summaries_and_tensorboard/index.html
Upvotes: 0