Reputation: 21
When creating a neural network for image classification, I want to get the classification on the one hand and the raw output on the other, so I can determine whether the image actually contains one of the objects I want to classify. If it does not, the raw output should contain very low values for all classes; but if the image really does contain one of those objects, the raw output should have a high value for one of the neurons.
Assuming I have the following code:
import tensorflow as tf

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(80, 80, 3)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(4, activation='softmax'))
How would I get the raw output of the last dense layer?
Upvotes: 0
Views: 567
Reputation:
You can use the functional API and implement your model in the following way:
inputs = tf.keras.Input(shape=(80, 80, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
# no activation here, so this Dense layer outputs the raw logits
logits = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(
    inputs=inputs,
    outputs={
        'logits': logits,
        'predictions': tf.nn.softmax(logits)
    }
)
model.summary()
After that, your model will have two outputs in dictionary format. Beware that you can't simply pass a single loss function like categorical_crossentropy, because Keras would then try to minimize it for both outputs. You need to pass a dictionary to the loss argument of the compile method so that each output gets its own loss. For example:
model.compile(
    optimizer='adam',
    loss={
        # ignore the logits output during training
        'logits': lambda y_true, y_pred: 0.0,
        'predictions': tf.keras.losses.CategoricalCrossentropy()
    })
And your call to fit would look like this:
model.fit(
    x_train,
    {
        'logits': y_train,
        'predictions': y_train
    },
    epochs=10
)
Upvotes: 1