Reputation: 1
Currently I am working on an image error classifier using TensorFlow and the ImageNet-pretrained EfficientNetB0 from Keras Applications. As metrics, I am using false positives (fp), true positives (tp), false negatives (fn), true negatives (tn), and others. The problem I have with metrics like fp, tp, fn and tn is that they are not integer values during training (e.g. tp = 4883.6257); only during validation are they integers. As far as I know, these metrics should always be integers, since they are simply counts of samples (e.g. the number of false-positive predictions). Is there something I am missing about how Keras computes these values during training?
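Evaluated in isolation, the metric does return a whole number, which is what confuses me. A minimal standalone check (a sketch, assuming TF 2.x with eager execution):
import tensorflow as tf

# TruePositives thresholds y_pred at 0.5 by default and accumulates a count
m = tf.keras.metrics.TruePositives()
m.update_state([0, 1, 1, 1], [0.1, 0.9, 0.8, 0.4])
print(m.result().numpy())  # 2.0 -- an integer-valued count, as expected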
As input pipeline, I am using the TensorFlow ImageDataGenerator with its .flow_from_dataframe() method:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# create data generators in order to load the images
datagen_train = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    brightness_range=(0.9, 1.1),
    rescale=1. / 255,
    fill_mode="constant",
    zoom_range=0.3,
    channel_shift_range=100.0)
datagen_val = ImageDataGenerator(rescale=1. / 255)

train_generator = datagen_train.flow_from_dataframe(
    dataframe=balanced_df[:N_Train],
    directory=bitmap_folder_path,
    x_col="filename",
    y_col="particle",
    batch_size=batch_size,
    shuffle=True,
    class_mode="binary",
    target_size=(250, 250),
    color_mode="rgb",
    seed=42)

valid_generator = datagen_val.flow_from_dataframe(
    dataframe=balanced_df[N_Train:],
    directory=bitmap_folder_path,
    x_col="filename",
    y_col="particle",
    batch_size=batch_size,
    shuffle=True,
    class_mode="binary",
    target_size=(250, 250),
    color_mode="rgb",
    seed=42)
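Note: steps_per_epoch_train, which the learning-rate schedule below uses, is not shown in the snippets above. It can be derived from the generator, since Keras iterators implement len() as ceil(num_samples / batch_size) (a sketch):
# one epoch = one full pass over the training generator
steps_per_epoch_train = len(train_generator)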
Setting up the model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import EfficientNetB0

input_shape = (img_height, img_width, 3)  # depth 3 because ImageNet models expect RGB images
model = EfficientNetB0(weights='imagenet', include_top=False, input_shape=input_shape)
# add a global spatial average pooling layer
x = model.output
x = keras.layers.GlobalAveragePooling2D()(x)
# and a fully connected output/classification layer
predictions = keras.layers.Dense(1, activation='sigmoid')(x)
# create the full network so we can train on it
model_B0 = keras.models.Model(inputs=model.input, outputs=predictions)
batch_size = 16
num_epochs = 30
# set up the optimizer similar to the one used in the original paper:
# RMSProp with decay of 0.9 and momentum of 0.9, batch norm momentum of 0.99, and an initial
# learning rate of 0.256 that decays by 0.97 every 2.4 epochs
initial_learning_rate = 1e-5
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=int(2.4 * steps_per_epoch_train),
    decay_rate=0.97,
    staircase=True)
opt_efficientNet = tf.keras.optimizers.RMSprop(learning_rate=lr_schedule,
                                               rho=0.9, momentum=0.9, name="RMSprop")
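A quick sanity check of the schedule (a sketch; LearningRateSchedule objects are callable with a step index):
# with staircase=True the rate drops by 3% once every 2.4 epochs
for epoch in (0, 3, 6):
    step = epoch * steps_per_epoch_train
    print(epoch, float(lr_schedule(step)))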
For better analysis, I've added the following metrics:
METRICS = [
    keras.metrics.TruePositives(name='tp'),
    keras.metrics.FalsePositives(name='fp'),
    keras.metrics.TrueNegatives(name='tn'),
    keras.metrics.FalseNegatives(name='fn'),
    keras.metrics.BinaryAccuracy(name='accuracy'),
    keras.metrics.Precision(name='precision'),
    keras.metrics.Recall(name='recall'),
    keras.metrics.AUC(name='auc'),
]
model_B0.compile(
    loss="binary_crossentropy",
    optimizer=opt_efficientNet,
    metrics=METRICS)
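Training is then started along these lines (a sketch using the names defined above):
history = model_B0.fit(
    train_generator,
    steps_per_epoch=steps_per_epoch_train,
    validation_data=valid_generator,
    epochs=num_epochs)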
Upvotes: 0
Views: 124
Reputation: 4085
I think you should define the thresholds parameter in those metrics. By default, the BinaryAccuracy metric has a threshold of 0.5, which you can adjust according to the accuracy you want to measure.
Example:
keras.metrics.TruePositives(name='tp', thresholds=0.5)
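The same parameter exists on the other confusion-matrix metrics, so the whole list can be made explicit (a sketch; 0.5 mirrors the default):
METRICS = [
    keras.metrics.TruePositives(name='tp', thresholds=0.5),
    keras.metrics.FalsePositives(name='fp', thresholds=0.5),
    keras.metrics.TrueNegatives(name='tn', thresholds=0.5),
    keras.metrics.FalseNegatives(name='fn', thresholds=0.5),
]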
Upvotes: 0