user836026

Reputation: 11360

Implementing Dice Loss

I would like to implement Dice loss like this, from dice_loss_for_keras.py:

from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1):
    """
    Dice = (2*|X & Y|)/ (|X|+ |Y|)
         =  2*sum(|A*B|)/(sum(A^2)+sum(B^2))
    ref: https://arxiv.org/pdf/1606.04797v1.pdf
    """
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    return (2. * intersection + smooth) / (K.sum(K.square(y_true), axis=-1) + K.sum(K.square(y_pred), axis=-1) + smooth)

The only problem is that, in most UNET implementations, y_true has 1 channel while y_pred has 3 channels (for example, for 3 classes), because y_pred is represented as one-hot vectors. Is there a way to convert y_true to the same shape as y_pred (or vice versa), or to make the output of the UNET a single channel like y_true?

Upvotes: 1

Views: 223

Answers (1)

Shai

Reputation: 114896

You need to convert y_true to a one-hot representation in order to apply per-class Dice loss. It seems TensorFlow's tf.one_hot function does this for you.

Once y_true has the same shape as y_pred, you can use your code to compute the Dice score for each class separately, and then combine the per-class scores into the final scalar loss.
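A minimal sketch of that approach, assuming TensorFlow is available, integer class labels in y_true's single channel, and softmax probabilities in y_pred (the function name `dice_loss` and the shapes are assumptions, not from the original code):

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, num_classes=3, smooth=1.):
    """Hypothetical multi-class Dice loss sketch.

    Assumed shapes:
      y_true: (batch, H, W, 1), integer class labels
      y_pred: (batch, H, W, num_classes), softmax probabilities
    """
    # Convert the single-channel label map to one-hot, matching y_pred's shape
    y_true_1hot = tf.one_hot(tf.cast(y_true[..., 0], tf.int32), depth=num_classes)
    y_true_1hot = tf.cast(y_true_1hot, y_pred.dtype)

    # Per-class Dice: sum over batch and spatial axes, keeping the class axis
    axes = (0, 1, 2)
    intersection = tf.reduce_sum(y_true_1hot * y_pred, axis=axes)
    denom = (tf.reduce_sum(tf.square(y_true_1hot), axis=axes)
             + tf.reduce_sum(tf.square(y_pred), axis=axes))
    dice_per_class = (2. * intersection + smooth) / (denom + smooth)

    # Average the per-class scores and turn the score into a loss
    return 1. - tf.reduce_mean(dice_per_class)
```

Summing over the batch and spatial axes (rather than the channel axis, as in the original snippet) yields one Dice score per class, which is then averaged; a weighted combination over classes would also work if some classes matter more.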

Upvotes: 1
