Reputation: 85
I am experimenting with optimising a neural network using a loss function that is equivalent to the concordance index (c-index). The loss function I would like to use is
\sum_{i=0}^{N} \sum_{j=i+1}^{N} \sigma\big( (y_i - y_j)(y'_i - y'_j) \big)
where y' is the vector of predictions and y is the vector of labels in a batch of size N, and \sigma is the sigmoid function. I would like to be able to implement this in TensorFlow but I can't find a way to express the two-index sum.
I have tried rearranging the equation into a different form that can be expressed in terms of TensorFlow and Keras primitives, but with no success. I am using Keras, so either a Keras or a TensorFlow implementation would be usable.
The Python code is

    from itertools import permutations, combinations
    import numpy as np

    a = np.arange(4)
    a = a * 100

    def loss_ci(y_true, y_pred):
        summ = 0.
        total = 0
        for i in range(len(y_true)):
            for j in range(i + 1, len(y_true)):
                summ += 1 / (1 + np.exp(-(y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j])))
                total += 1
        return summ / total

    print("y_true\t\ty_pred\t\tc-index\tloss")
    for c in permutations(a, 3):
        for d in combinations(a, 3):
            # ci() computes the concordance index (defined elsewhere)
            print(c, d, "\t{:.4f}".format(ci(c, d)), "\t{:.4f}".format(loss_ci(c, d)))
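For what it's worth, the double sum in the loop above can also be written without explicit Python loops using NumPy broadcasting. This is a sketch, not part of the original question: the pairwise matrix counts each unordered pair twice and includes the i == j diagonal, so the diagonal is zeroed and the sum halved before normalising by the pair count.

```python
import numpy as np

def loss_ci_vec(y_true, y_pred):
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(y_pred, dtype=float)
    # S[i, j] = sigmoid((y_i - y_j)(p_i - p_j)) for all pairs at once
    S = 1.0 / (1.0 + np.exp(-(y[:, None] - y[None, :]) * (p[:, None] - p[None, :])))
    np.fill_diagonal(S, 0.0)           # drop i == j terms (each is sigmoid(0) = 0.5)
    n = len(y)
    total = n * (n - 1) / 2            # number of unordered pairs
    return S.sum() / 2 / total         # each pair appears twice in the matrix
```

With perfectly concordant inputs such as `[0, 100, 200]` vs `[0, 100, 200]` this returns a value close to 1, and close to 0 for perfectly anti-concordant inputs, matching the looped `loss_ci` up to floating-point error.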
Upvotes: 0
Views: 634
Reputation: 17201
The loss can be calculated using TensorFlow as shown in the code below:
    from itertools import permutations, combinations
    import numpy as np
    import tensorflow as tf

    a = np.arange(4)
    a = a * 100

    def loss_ci(y_true, y_pred):
        summ = 0.
        for i in range(len(y_true)):
            for j in range(i + 1, len(y_true)):
                summ += 1 / (1 + np.exp(-(y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j])))
        return summ

    def tf_loss_ci(y_true, y_pred):
        Y = tf.constant(y_true)
        _Y = tf.constant(y_pred)
        # Pairwise differences via broadcasting: S[i, j] = sigmoid((y_i - y_j)(y'_i - y'_j))
        S = tf.sigmoid(tf.multiply(Y[tf.newaxis, :] - Y[:, tf.newaxis],
                                   _Y[tf.newaxis, :] - _Y[:, tf.newaxis]))
        # Zero the diagonal (i == j terms) and halve, since the matrix counts each pair twice
        S = tf.reduce_sum(tf.matrix_set_diag(S, tf.zeros_like(Y))) / 2
        sess = tf.InteractiveSession()  # TensorFlow 1.x session API
        tf.global_variables_initializer().run()
        return S.eval()

    print("y_true\t\ty_pred\t\ttensorloss\tloss")
    for c in permutations(a, 3):
        for d in combinations(a, 3):
            print(c, d,
                  "\t{:.4f}".format(tf_loss_ci(np.asarray(c, np.float32), np.array(d, np.float32))),
                  "\t{:.4f}".format(loss_ci(c, d)))
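The answer above targets TensorFlow 1.x (`tf.matrix_set_diag`, `InteractiveSession`, `eval()`). A sketch of the same computation for TensorFlow 2.x, which runs eagerly and renames the diagonal op to `tf.linalg.set_diag` — this is an assumption about a later API, not part of the original answer:

```python
import tensorflow as tf

def tf2_loss_ci(y_true, y_pred):
    Y = tf.constant(y_true, dtype=tf.float32)
    _Y = tf.constant(y_pred, dtype=tf.float32)
    # Pairwise products of differences via broadcasting, as in the TF 1.x version
    S = tf.sigmoid((Y[tf.newaxis, :] - Y[:, tf.newaxis]) *
                   (_Y[tf.newaxis, :] - _Y[:, tf.newaxis]))
    # Zero the diagonal (i == j) and halve, since each pair appears twice in the matrix
    S = tf.reduce_sum(tf.linalg.set_diag(S, tf.zeros_like(Y))) / 2
    return float(S)  # eager execution: no session needed
```

Note that, like the TF 1.x version, this returns the raw sum over pairs; divide by the pair count if you want the normalised form from the question.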
Upvotes: 2