While trying to implement Intersection over Union (IoU) as a Keras metric, I've run into a Python/Keras error that I can't seem to place. In a separate file I define the following metric:
import numpy as np

def computeIoU(y_pred_batch, y_true_batch):
    print(y_true_batch.shape[0])
    # average the per-image IoU over the batch
    return np.mean(np.asarray([imageIoU(y_pred_batch[i], y_true_batch[i])
                               for i in range(y_true_batch.shape[0])]))

def imageIoU(y_pred, y_true):
    # collapse the one-hot class axis to per-pixel class labels
    y_pred = np.argmax(y_pred, axis=2)
    y_true = np.argmax(y_true, axis=2)
    inter = 0
    union = 0
    # imRows, imCols and num_classes are defined elsewhere in the file
    for x in range(imCols):
        for y in range(imRows):
            for i in range(num_classes):
                inter += (y_pred[y][x] == y_true[y][x] == i)
                union += (y_pred[y][x] == i or y_true[y][x] == i)
    print(inter)
    print(union)
    return float(inter) / union
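For reference, the nested loops can be sanity-checked against a vectorized NumPy version of the same computation (a minimal sketch; imageIoU_vectorized is just an illustrative name, not part of the original code):

# Hypothetical vectorized equivalent of imageIoU: a pixel where both label
# maps agree contributes 1 to the intersection; for the union, each class i
# counts the pixels where either map predicts i.
def imageIoU_vectorized(y_pred, y_true, num_classes):
    y_pred = np.argmax(y_pred, axis=2)
    y_true = np.argmax(y_true, axis=2)
    inter = np.sum(y_pred == y_true)
    union = sum(np.sum((y_pred == i) | (y_true == i)) for i in range(num_classes))
    return float(inter) / union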
In the main file I import the function and use the metric as follows:
fcn32_model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy', computeIoU])
The error that is thrown is
TypeError: __int__ should return int object
After reimplementing the above algorithm with the Keras/TensorFlow syntax suggested in answers here and on another question, the code was changed to:
import tensorflow as tf
from keras import backend as K

def iou(y_pred_batch, y_true_batch):
    intersection = tf.zeros(())
    union = tf.zeros(())
    y_pred_batch = K.argmax(y_pred_batch, axis=-1)
    y_true_batch = K.argmax(y_true_batch, axis=-1)
    for i in range(num_classes):
        # use the dynamic shape so the unknown batch dimension is handled
        iTensor = tf.to_int64(tf.fill(tf.shape(y_pred_batch), i))
        intersection = tf.add(intersection, tf.to_float(tf.count_nonzero(
            tf.logical_and(K.equal(y_true_batch, y_pred_batch), K.equal(y_true_batch, iTensor)))))
        union = tf.add(union, tf.to_float(tf.count_nonzero(
            tf.logical_or(K.equal(y_true_batch, iTensor), K.equal(y_pred_batch, iTensor)))))
    return intersection / union
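To sanity-check the symbolic version outside of training, it can be evaluated on small constant tensors (a minimal sketch assuming the TF1-era Keras backend; the shapes and random data are placeholders):

# Hypothetical smoke test: evaluate the metric on random one-hot data.
# Note that Keras invokes metrics as metric(y_true, y_pred); this iou happens
# to be symmetric in its two arguments, so the order does not change the result.
batch, rows, cols, num_classes = 2, 4, 4, 3
y_true = np.eye(num_classes)[np.random.randint(num_classes, size=(batch, rows, cols))]
y_pred = np.random.rand(batch, rows, cols, num_classes)
print(K.eval(iou(K.constant(y_pred), K.constant(y_true))))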
Upvotes: 1
Views: 274
The issue seems to be that you are trying to compute the metric with plain Python integers, whereas a Keras metric has to be expressed in backend tensor operations (Keras variables) so it can run inside the graph. Expressed that way, IoU can be computed as:
from keras import backend as K

intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
union_sum = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)
# adding K.epsilon() guards against division by zero for empty masks
IOU = intersection / (union_sum - intersection + K.epsilon())
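Wrapped as a function with the (y_true, y_pred) signature that Keras expects, this can be passed to compile the same way as in your question (a sketch; iou_metric is an illustrative name):

# Hypothetical wrapper: Keras calls metrics with the signature (y_true, y_pred)
def iou_metric(y_true, y_pred):
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    union_sum = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)
    return intersection / (union_sum - intersection + K.epsilon())

fcn32_model.compile(optimizer=sgd, loss='categorical_crossentropy',
                    metrics=['accuracy', iou_metric])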
Upvotes: 2