Karan Chauhan

Reputation: 13

Tensorflow evaluation

I am running a TensorFlow model and trying to understand its performance, but I am not sure about some of the metrics in the results. I used a linear classifier via tf.estimator.LinearClassifier. The code and results are below:

The model is:

import tensorflow as tf

def build_estimator(model_dir, model_type):
    wide_columns, deep_columns = build_model_columns()
    # Force CPU-only execution by hiding all GPU devices from the session.
    run_config = tf.estimator.RunConfig().replace(
        session_config=tf.ConfigProto(device_count={'GPU': 0}))

    if model_type == 'wide':
        return tf.estimator.LinearClassifier(
            model_dir=model_dir,
            feature_columns=wide_columns,
            config=run_config)

And the training/evaluation loop that calls model.evaluate is:

for n in range(FLAGS.train_epochs // FLAGS.epochs_per_eval):
    # Train for epochs_per_eval epochs, then evaluate on the test data.
    model.train(input_fn=lambda: input_fn(
        FLAGS.train_data, FLAGS.epochs_per_eval, True, FLAGS.batch_size))

    results = model.evaluate(input_fn=lambda: input_fn(
        FLAGS.test_data, 1, False, FLAGS.batch_size))
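
For reference, evaluate() on an Estimator returns a plain Python dict mapping metric names to values, so a minimal sketch for inspecting the metrics (assuming the loop above) is:

# 'results' maps metric names (accuracy, auc, auc_precision_recall,
# loss, ...) to scalar values; print one metric per line.
for key in sorted(results):
    print('%s: %s' % (key, results[key]))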

I want to know what the metric auc_precision_recall: 0.998951 reported by model.evaluate means. Is it AUC, precision, recall, or some combination of these?

The full results are shown in the attached screenshot.

Upvotes: 0

Views: 150

Answers (1)

iga

Reputation: 3633

auc_precision_recall is the area under the precision-recall curve. AUC stands for "area under the curve". There are plenty of references online for these concepts. Here is one: http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
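
To make this concrete, here is a minimal sketch of computing the same quantity with scikit-learn; the labels and scores below are made-up values for illustration only:

from sklearn.metrics import auc, precision_recall_curve

# Hypothetical ground-truth labels and predicted positive-class scores.
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

# precision_recall_curve traces precision and recall across score thresholds;
# auc() then integrates the area under that curve.
precision, recall, _ = precision_recall_curve(y_true, y_scores)
print(auc(recall, precision))  # a single number in [0, 1]

TensorFlow reports the same idea as auc_precision_recall, approximating the curve over a fixed number of thresholds, so the two values can differ slightly.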

Upvotes: 1
