Reputation: 131
I have a function that receives two tensors, converts them to NumPy arrays, performs some operation, converts the result back to a tensor, and returns it. I pass this function as a custom metric to Keras's model.compile, and I get errors from it there. However, the function works fine when I use it standalone, i.e. feeding it two tensors and then inspecting the returned value.
I have tried doing the initialization inside the function, but nothing solves the issue.
def _cohen_kappa(y_true, y_pred):
    y_pred2 = K.argmax(y_pred, axis=-1)
    y_true2 = K.argmax(y_true, axis=-1)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(y_true2)
        sess.run(y_pred2)
        y_true_ar = y_true2.eval()
        y_pred_ar = y_pred2.eval()
        kappa_score_ar = cohen_kappa_score(y_true_ar, y_pred_ar, weights='linear')
        kappa_score_ar_tf = tf.convert_to_tensor(kappa_score_ar, dtype=tf.float32)
        sess.run(kappa_score_ar_tf)
    return kappa_score_ar_tf
# I feed this as a custom metric
model.compile(optimizer=optimizers.SGD(lr=0.001, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['categorical_crossentropy', 'mae', _cohen_kappa])
The error message is:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dense_21_target' with dtype float and shape [?,?]
[[node dense_21_target (defined at C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\keras\backend\tensorflow_backend.py:517) = Placeholder[dtype=DT_FLOAT, shape=[?,?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
The function works when I execute it independently:
y_true = tf.Variable([[1,0,0],[0,0,1],[0,1,0],[1,0,0],[0,0,1],[0,1,0],[1,0,0],[0,0,1],[0,1,0],[1,0,0],[0,0,1],[0,1,0]])
y_pred = tf.Variable([[1,0,0],[0,1,0],[0,0,1],[1,0,0],[0,1,0],[0,0,1],[1,0,0],[0,1,0],[0,0,1],[1,0,0],[0,1,0],[0,0,1]])
return_value = _cohen_kappa(y_true, y_pred)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    temp = return_value.eval()
    print(temp)
Upvotes: 0
Views: 48
Reputation: 16856
Converting the tensors to NumPy arrays and then converting the result back to a tensor breaks the computation graph, so backpropagation cannot happen through the metric. You have to use tensor ops rather than NumPy operations for the computation graph to backpropagate.
If you are not using it for loss calculation but only as a metric, then please check this similar question:
How can I specify a loss function to be quadratic weighted kappa in Keras?
Upvotes: 1