Reputation: 15847
In TensorFlow 2.0, there's the class tf.keras.metrics.AUC. It can easily be added to the list of metrics of the compile method as follows.
# Example taken from the documentation
model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.AUC()])
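For context, a self-contained version of that one-liner might look like this; the toy model and random data are illustrative, not from the documentation.

import numpy as np
import tensorflow as tf

# Hypothetical single-output model, just to make the snippet runnable.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(8,)),
])
model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.AUC()])
model.fit(np.random.rand(32, 8), np.random.randint(2, size=(32, 1)), epochs=1)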
However, in my case, the output of my neural network is an NxM tensor, where N is the batch size and M is the number of separate outputs. I would like to compute the AUC metric for each of these M outputs separately (across all N instances of the batch). So, there should be M AUC metrics, each of them computed with N observations. I tried to create a custom metric, but I am facing some issues. The following is my first attempt.
def get_custom_auc(output):
    auc = tf.metrics.AUC()

    @tf.function
    def custom_auc(y_true, y_pred):
        y_true = y_true[:, output]
        y_pred = y_pred[:, output]
        auc.update_state(y_true, y_pred)
        return auc.result()

    custom_auc.__name__ = "custom_auc_" + str(output)
    return custom_auc
The need to rename custom_auc.__name__ is described in the following post: Is it possible to have a metric that returns an array (or tensor) rather than a number?. However, this implementation raises an error.
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (strided_slice_1:0) = ] [3.14020467 3.06779885 2.86414027...] [y (Cast_1/x:0) = ] [0] [[{{node metrics/custom_auc_2/StatefulPartitionedCall/assert_greater_equal/Assert/AssertGuard/else/_161/Assert}}]] [Op:__inference_keras_scratch_graph_5149]
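I suspect this assertion fires because tf.keras.metrics.AUC expects predictions in the range [0, 1], while values like 3.14 look like raw scores. A sketch of the same attempt with the predictions squashed first, assuming the last layer of the network is linear (which may not hold for every model):

def get_custom_auc(output):
    auc = tf.metrics.AUC()

    @tf.function
    def custom_auc(y_true, y_pred):
        y_true = y_true[:, output]
        # Assumption: y_pred holds raw scores, so map them into [0, 1].
        y_pred = tf.sigmoid(y_pred[:, output])
        auc.update_state(y_true, y_pred)
        return auc.result()

    custom_auc.__name__ = "custom_auc_" + str(output)
    return custom_auc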
I have also tried to create the AUC object inside custom_auc, but this is not possible because I am using @tf.function, so I get the error ValueError: tf.function-decorated function tried to create variables on non-first call. Even if I remove the @tf.function (which I may need because I may use some if-else statements inside the implementation), I get another error
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable _AnonymousVar33 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar33/N10tensorflow3VarE does not exist. [[node metrics/custom_auc_0/add/ReadVariableOp (defined at /train.py:173) ]] [Op:__inference_keras_scratch_graph_5174]
Note that, currently, I am adding these AUC metrics, one for each of the M outputs, as described in this answer. Furthermore, I cannot simply return the auc object, because apparently Keras expects the output of the custom metric to be a tensor and not an AUC object. So, if you do that, you get the following error.
TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function ...custom_auc at 0x1862e6680>, found return value of type <AUC object>, which is not a Tensor.
I've also tried to implement a custom metric class as follows.
class CustomAUC(tf.metrics.Metric):

    def __init__(self, num_outputs, name="custom_auc", **kwargs):
        super(CustomAUC, self).__init__(name=name, **kwargs)
        assert num_outputs >= 1
        self.num_outputs = num_outputs
        self.aucs = [tf.metrics.AUC() for _ in range(self.num_outputs)]

    def update_state(self, y_true, y_pred, sample_weight=None):
        for output in range(self.num_outputs):
            y_true1 = y_true[:, output]
            y_pred1 = y_pred[:, output]
            self.aucs[output].update_state(y_true1, y_pred1)

    def result(self):
        return [auc.result() for auc in self.aucs]
However, I am currently getting the error
ValueError: Shapes (200,) and () are incompatible
This error seems to be related to reset_states, so maybe I should also override this method. In fact, if I override reset_states with the following implementation
def reset_states(self):
    for auc in self.aucs:
        auc.reset_states()
I don't get this error anymore, but I get another error
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (strided_slice_1:0) = ] [-1.38822043 1.24234951 -0.254447281...] [y (Cast_1/x:0) = ] [0] [[{{node metrics/custom_auc/PartitionedFunctionCall/assert_greater_equal/Assert/AssertGuard/else/_98/Assert}}]] [Op:__inference_keras_scratch_graph_5248]
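As with the first attempt, the negative values (-1.38...) suggest the predictions are raw scores again. A hedged tweak to update_state in the class above, under the same assumption that the last layer is linear:

def update_state(self, y_true, y_pred, sample_weight=None):
    # Assumption: y_pred holds raw scores, so map them into [0, 1]
    # before feeding the per-output AUC metrics.
    y_pred = tf.sigmoid(y_pred)
    for output in range(self.num_outputs):
        self.aucs[output].update_state(y_true[:, output], y_pred[:, output])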
So, how do I implement this custom AUC metric, one for each of the M outputs of the network? Basically, I want to do something similar to the solution described in this answer, but with the AUC metric.
I have also opened a related issue on TensorFlow's GitHub issue tracker.
Upvotes: 4
Views: 1755
Reputation: 11
I have a similar problem to yours. I have a model with 3 outputs and I want to compute a custom metric (ConfusionMatricMetric) for the 3 outputs (each of which has a different number of classes). I used the solution described here: https://keras.io/guides/customizing_what_happens_in_fit/ ("Going lower-level" section). My problem now is that I can't train the model because of
ValueError: tf.function-decorated function tried to create variables on non-first call.
Then I used tf.config.run_functions_eagerly(True) and now the model trains, very slowly, but it can be saved.
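For reference, a rough sketch of that lower-level pattern from the guide; the toy model, the metric names, and the per-output slicing are illustrative, and it assumes TF >= 2.2, where train_step can be overridden.

import tensorflow as tf

class MultiOutputModel(tf.keras.Model):

    def __init__(self, num_outputs, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(num_outputs, activation="sigmoid")
        # One AUC per output, created once here so tf.function never
        # tries to create variables inside train_step.
        self.aucs = [tf.keras.metrics.AUC(name="auc_" + str(i))
                     for i in range(num_outputs)]

    @property
    def metrics(self):
        # Lets Keras reset the per-output AUCs between epochs.
        return self.aucs

    def call(self, inputs):
        return self.dense(inputs)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Update each AUC on its own output column.
        for i, auc in enumerate(self.aucs):
            auc.update_state(y[:, i], y_pred[:, i])
        return {"loss": loss, **{a.name: a.result() for a in self.aucs}}

Compiled the usual way (model.compile(optimizer="sgd", loss="mse")) and trained with model.fit, this keeps one AUC per output without the variable-creation error, as far as I can tell.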
P.S. I also used tf.keras.metrics.KLDivergence() instead of my custom metric and reproduced the same experiment with the same results as above: trained & saved (tf.saved_model.save).
Upvotes: 1