Integration

Reputation: 347

TensorFlow MNIST for beginners: need some understanding of the evaluation step

I went through the beginners example in TensorFlow up to the part that evaluates the trained model. Here is what it says:

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

I don't follow this code. Where is the trained 'model'? Or is it tf.reduce_mean(....) that is checking the trained model?

Upvotes: 0

Views: 559

Answers (1)

mathetes

Reputation: 12077

As "Guy Coder" says, maybe you should check other online resources or MOOCs before starting with tensorflow.

But anyway, maybe you will get a clearer picture with this...

There are two parts to training a model in TensorFlow.

  1. First you declare the structure of your model, with its different layers and variables. TensorFlow builds a graph out of that, but no computation has happened yet.
  2. Then you ask TensorFlow to "run" and optimize the model. Here you tell TensorFlow that you want to minimize the cross entropy, or whatever loss function you define, so you provide the input data and labels the graph needs to compute it (see the sketch after this list).
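For illustration, here is a minimal sketch of those two phases, essentially the softmax model from the beginners tutorial (the variable names and the 784/10 shapes are just the MNIST image and label sizes):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Phase 1: declare the graph -- nothing is computed yet
x  = tf.placeholder(tf.float32, [None, 784])   # input images
y_ = tf.placeholder(tf.float32, [None, 10])    # true labels
W  = tf.Variable(tf.zeros([784, 10]))
b  = tf.Variable(tf.zeros([10]))
y  = tf.nn.softmax(tf.matmul(x, W) + b)        # the model's prediction

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Phase 2: run the graph -- this is where W and b actually get trained
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})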

After this you end up with a trained model. Maybe you will want to save the model and reuse it later, but that's another story.

So, during training, or after it is finished, you can call

print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

This tells TensorFlow to calculate the accuracy using the graph with the current values of the variables (maybe you are in the middle of training). And you are feeding this accuracy function the test images and labels: TensorFlow runs the x values through the model to produce predictions, and the accuracy is the result of how well those predictions match the labels y_.

The connection to your trained model comes from the correct_prediction function, which compares your model's prediction y with the correct output y_.
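In the tutorial that comparison is defined roughly like this (the argmax picks the most likely digit for each image); the second line is exactly the one from your question:

# one boolean per test image: True where the predicted digit (argmax of y)
# matches the true digit (argmax of y_)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

# cast the booleans to floats and average them: the fraction predicted correctly
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))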

Hope this helps

EDIT

I will answer based on your comments, but be aware that your question is very poorly explained... as pointed out by S_kar

To save a model you do it like this:

import os
import tensorflow as tf

# model declared before this line
with tf.Session() as sess:
    # Merge all the summaries and write them out to /tmp/tf
    merged = tf.merge_all_summaries()
    writer = tf.train.SummaryWriter("/tmp/tf", sess.graph_def)
    tf.initialize_all_variables().run()

    # the Saver writes the checkpoint files that let you reload the variables later
    saver = tf.train.Saver()

    """
    train the model...
    """

    print("Model successfully trained")

    # now save the model in a subdirectory called "model"
    checkpoint_path = os.getcwd() + "/model/model.ckpt"
    saver.save(sess, checkpoint_path)
    print("Model saved")

To restore the model, look into this question.
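Roughly, restoring is the mirror image of saving: rebuild the same graph, then load the saved variable values instead of initializing them. A minimal sketch, reusing the checkpoint path from above:

import os
import tensorflow as tf

# the same model must be declared before this line
with tf.Session() as sess:
    saver = tf.train.Saver()

    # load the variable values saved earlier instead of re-initializing them
    checkpoint_path = os.getcwd() + "/model/model.ckpt"
    saver.restore(sess, checkpoint_path)
    print("Model restored")

    # the session can now be used for evaluation, e.g. the accuracy call above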

Upvotes: 4
