Reputation: 1306
In TensorFlow 2.0, I'm trying to build a model that classifies objects into two categories: positives and negatives.
I want to use the tf.keras.metrics.FalsePositives and tf.keras.metrics.FalseNegatives metrics to see how the model improves with every epoch. Both of these metrics have assertions stipulating that predictions must be >= 0 and predictions must be <= 1.
The problem is that an untrained model can generate an arbitrary number as a prediction. But even a trained model can sometimes produce an output slightly above 1 or slightly below 0.
Is there any way to disable these assertions?
Alternatively, is there a suitable activation function that forces the model outputs into the [0, 1] range without causing any problems with the learning rate?
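For reference, a minimal setup along these lines (the layer sizes and input shape are made up purely for illustration) runs into the assertion, because the metrics see the raw, unbounded model outputs:

```python
import tensorflow as tf

# Illustrative only: a linear output layer can produce values outside [0, 1].
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),  # no activation -> unbounded predictions
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    # These metrics assert that every prediction lies in [0, 1],
    # so training fails as soon as the model emits a value outside that range.
    metrics=[tf.keras.metrics.FalsePositives(),
             tf.keras.metrics.FalseNegatives()],
)
```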
Upvotes: 0
Views: 362
Reputation: 897
The sigmoid activation function is a suitable choice for the output layer when predictions must lie in the [0, 1] range, since its output is always between 0 and 1.
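A minimal sketch of what that looks like in practice (the architecture is illustrative, not taken from the question):

```python
import tensorflow as tf

# A sigmoid on the final layer squashes outputs into (0, 1),
# so the FalsePositives / FalseNegatives range assertions are always satisfied.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",  # expects probabilities, matching the sigmoid output
    metrics=[tf.keras.metrics.FalsePositives(),
             tf.keras.metrics.FalseNegatives()],
)
```

Pairing a sigmoid output with binary cross-entropy is the standard setup for binary classification, so it should not require any special learning-rate tuning.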
Upvotes: 1