Reputation: 850
I've had a look at the Keras metrics documentation and couldn't find an equivalent to scikit-learn's average_precision_score metric (which, I believe, is the same as the area under the precision-recall curve, AUPRC). It isn't the same as average_precision_at_k, unless someone can correct me on that.
Upvotes: 1
Views: 1668
Reputation: 71
Late answer, but I was facing the same issue recently.
You can use the AUC metric with the curve parameter. Something like:
AUC(curve='PR')
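A minimal sketch of how this metric can be used on its own, assuming TensorFlow 2.x where Keras is available as tf.keras (the labels and scores below are made-up toy values):

```python
import tensorflow as tf

# AUC with curve='PR' approximates the area under the
# precision-recall curve via thresholded Riemann sums.
auprc = tf.keras.metrics.AUC(curve='PR', name='auprc')

# Accumulate a small batch of true labels and predicted scores.
auprc.update_state([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
value = float(auprc.result())
print(value)  # a value between 0 and 1
```

The same metric object can instead be passed in the metrics list of model.compile, in which case Keras updates it batch by batch during training and evaluation. Note it is an approximation whose resolution is controlled by the num_thresholds argument (200 by default).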
Upvotes: 5
Reputation: 1261
You can implement custom metrics in Keras and pass them at the compilation step (https://keras.io/metrics/). The function needs to take (y_true, y_pred) as arguments and return a single tensor value.
Here is an attempt at such a metric for Keras (note that, as written, it actually computes precision at a 0.5 threshold, not the area under the precision-recall curve):
import keras.backend as K

def average_precision(y_true, y_pred):
    # Binarize predictions at a 0.5 threshold via round,
    # then count true positives and all predicted positives.
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    # Precision = TP / (TP + FP); epsilon avoids division by zero.
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision
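For comparison, scikit-learn's average_precision_score summarizes the PR curve as the sum of precision values weighted by the increase in recall at each threshold, AP = Σ (R_n − R_{n−1}) · P_n. A NumPy sketch of that definition, assuming binary labels and distinct scores (the function name and toy inputs here are illustrative, not from the answer above):

```python
import numpy as np

def average_precision_np(y_true, y_score):
    # Sort samples by descending predicted score.
    order = np.argsort(-np.asarray(y_score, dtype=float))
    y_true = np.asarray(y_true, dtype=float)[order]
    # Cumulative true positives as the threshold sweeps down.
    tp = np.cumsum(y_true)
    precision = tp / np.arange(1, len(y_true) + 1)
    recall = tp / y_true.sum()
    # AP = sum of precision weighted by the step in recall.
    prev_recall = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - prev_recall) * precision))

print(average_precision_np([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.8333...
```

This matches average_precision_score on inputs with distinct scores and makes clear why the Keras-backend function above, which never sweeps thresholds, cannot reproduce it.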
Upvotes: 1