Rishik Mani

Reputation: 498

Problem comparing a tensor with binary values

I have a problem where there is an image and a given question, and the answer is of the form [False, True]. During training I want to check whether each prediction is right or wrong.

import tensorflow as tf

def build_loss(logits, labels):
    # Cross-entropy loss
    loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)

    # Classification accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return tf.reduce_mean(loss), accuracy

logits = C(self.img, self.q, scope='Classifier')
self.all_preds = tf.nn.softmax(logits)
self.loss, self.accuracy = build_loss(logits, self.a)

Assuming the true answer is [0, 0] and the predicted answer is [0, 1], tf.argmax(labels, 1) returns 0 (ties resolve to the first index) while tf.argmax(logits, 1) returns 1 — the comparison only checks whether the argmax indices match, not whether each element of the prediction is right, so it breaks down for multi-hot or all-zero labels. Is there a way to avoid this ugly comparison? I tried replacing tf.argmax with tf.reduce_max, but then the accuracy always comes out as 0.
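For illustration, the argmax pitfall can be reproduced with plain NumPy (NumPy stands in for TensorFlow here, and the array values are hypothetical):

```python
import numpy as np

labels = np.array([[0, 0]])        # true answer: both False
logits = np.array([[0.2, 0.9]])    # model scores the second class higher

# argmax of an all-zero label row is 0 (ties resolve to the first index),
# so the comparison measures "same argmax index", not element-wise correctness
print(np.argmax(labels, axis=1))   # [0]
print(np.argmax(logits, axis=1))   # [1]

# element-wise comparison after thresholding the sigmoid probabilities:
# a row only counts as correct if every element matches the label
preds = (1.0 / (1.0 + np.exp(-logits)) > 0.5).astype(int)
print((preds == labels).all(axis=1))  # [False]
```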

Or is there any other way to change the model so that, rather than having 2 neurons in the output layer, I could get it done with 1 neuron? Currently I am using tf.nn.softmax to find the predictions and tf.nn.sigmoid_cross_entropy_with_logits as my loss function.

Upvotes: 0

Views: 62

Answers (1)

Abhishek Verma

Reputation: 1729

Yes, you can use a single neuron in the output layer with a sigmoid activation, e.g. in Keras:

model.add(Dense(1, activation='sigmoid'))
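With a single output, the loss and accuracy simplify: there is one logit per example, the label is 0 or 1, and a prediction is correct when the thresholded probability matches the label. Here is a minimal NumPy sketch of that setup (NumPy stands in for TensorFlow; the helper name, logits, and the 0.5 threshold are assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_loss_and_accuracy(logits, labels):
    """Single-output version: one logit per example, labels in {0, 1}."""
    p = sigmoid(logits)
    # numerically this is what sigmoid_cross_entropy_with_logits computes
    loss = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    # compare the thresholded probability directly to the 0/1 label,
    # so no argmax comparison is needed at all
    correct = (p > 0.5).astype(int) == labels
    return loss.mean(), correct.mean()

logits = np.array([2.0, -1.0, 0.5])   # hypothetical model outputs
labels = np.array([1, 0, 0])
loss, acc = binary_loss_and_accuracy(logits, labels)
print(acc)  # 2 of 3 predictions match the labels
```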

Upvotes: 0

Related Questions