Jonathan DEKHTIAR

Reputation: 3536

Tensorflow Contrib Metrics always return 0.0

I tried to use the contrib metrics for the first time and didn't manage to make them work.

Here are the metrics I tried to use, and how I implemented them:

y_pred_labels = y[:, 1]
y_true_labels = tf.cast(y_[:, 1], tf.int32)

with tf.name_scope('auc'):    
    auc_score, update_op_auc = tf.contrib.metrics.streaming_auc(
        predictions=y_pred_labels, 
        labels=y_true_labels
    )
    tf.summary.scalar('auc', auc_score)

with tf.name_scope('accuracy_contrib'):  
    accuracy_contrib, update_op_acc = tf.contrib.metrics.streaming_accuracy(
        predictions=y_pred_labels, 
        labels=y_true_labels
    )
    tf.summary.scalar('accuracy_contrib', accuracy_contrib)

with tf.name_scope('error_contrib'):
    error_contrib, update_op_error = tf.contrib.metrics.streaming_mean_absolute_error(
        predictions=y_pred_labels, 
        labels=y_[:, 1] ## Needs to use float32 and not int32
    )
    tf.summary.scalar('error_contrib', error_contrib)

This code executes without errors, and during execution I obtain the following output:

########################################
Accuracy at step 1000: 0.633333 # Computed by another metric not shown above
Accuracy Contrib at step 1000: (0.0, 0.0)
AUC Score at step 1000: (0.0, 0.0)
Error Contrib at step 1000: (0.0, 0.0)
########################################

Here is the format of the input data:

y_pred_labels = [0.1, 0.5, 0.6, 0.8, 0.9, 0.1, ...] # Represents a binary probability
y_true_labels = [1, 0, 1, 1, 1, 0, 0, ...] # Represents the true class {0 or 1}
y_[:, 1]      = [1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, ...] # Same as y_true_labels, formatted as float32

From the official documentation, I understand this can be normal behavior under certain conditions ... However, I still can't obtain the values of my metrics.


Secondly, I have noticed that two of the metrics are called streaming_accuracy and streaming_auc. How do they behave differently from "non-streaming" accuracy or AUC metrics? And is there any way to make them "non-streaming" if necessary?

Upvotes: 1

Views: 1342

Answers (1)

LeckieNi

Reputation: 466

I encountered the same problem just now and found out:

You need to run the update ops, such as sess.run(update_op_auc), alongside the metric operations, such as sess.run(auc_score). The metric's internal accumulators only change when the update op runs; fetching the metric tensor alone just reads them, which is why you keep seeing 0.0.
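This also answers the "streaming" question. The mechanics can be sketched in plain Python (the StreamingAccuracy class below is a toy illustration of how these metrics accumulate state, not part of the TensorFlow API):

```python
# Toy model of a TF "streaming" metric: accumulator variables plus an
# update step. Reading the value never touches the accumulators --
# which is why the metric stays at 0.0 if you only ever fetch
# auc_score without also running update_op_auc.

class StreamingAccuracy:
    def __init__(self):
        self.correct = 0  # analogous to the metric's internal "total" variable
        self.count = 0    # analogous to the internal "count" variable

    def update(self, predictions, labels, threshold=0.5):
        """Analogous to sess.run(update_op_acc): folds one batch in."""
        for p, y in zip(predictions, labels):
            self.correct += int((p >= threshold) == bool(y))
            self.count += 1
        return self.value()  # update ops also return the running value

    def value(self):
        """Analogous to sess.run(accuracy_contrib): read-only."""
        return self.correct / self.count if self.count else 0.0

metric = StreamingAccuracy()
print(metric.value())                       # 0.0 -- nothing accumulated yet

metric.update([0.1, 0.9, 0.6], [0, 1, 0])   # batch 1
metric.update([0.8, 0.2], [1, 0])           # batch 2
print(metric.value())                       # 0.8 -- accuracy over BOTH batches
```

So "streaming" means the value is accumulated over every batch seen since the metric's local variables were initialized, e.g. by fetching sess.run([accuracy_contrib, update_op_acc]) each step after running tf.local_variables_initializer(). To get a "non-streaming" (per-evaluation) value, re-initialize those local variables before each evaluation so the accumulation starts from zero.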

Upvotes: 4
