Reputation: 9
I am working on a face authentication project. While the model performs well on the training data, it struggles with unseen data. When I plot the training accuracy against the validation accuracy (and likewise the training loss against the validation loss), I observe the trends below:
[plot 1: model accuracy per epoch]
[plot 2: model loss per epoch]
CNN model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
# Three convolution + max-pooling stages over the RGB input
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(int(self.org_img_height), int(self.org_img_width), 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
# Fully connected head with dropout, ending in a single sigmoid unit for binary output
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
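For context, here is a minimal sketch of how the curves above are typically produced. The optimizer, loss, epoch count, batch size, validation split, and the x_train / y_train arrays are illustrative assumptions, not part of the original code:

import matplotlib.pyplot as plt

# Compile for binary classification and train with a held-out validation split
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    epochs=30,
                    batch_size=32,
                    validation_split=0.2)   # hold out 20% of the data for validation

# Accuracy curves (training vs. validation)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()

# Loss curves (training vs. validation)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()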
Upvotes: -1
Views: 98
Reputation: 19
What makes you think that this model is either overfitting or underfitting? From the first graph, your training accuracy is increasing with each epoch and so is your validation accuracy. Similarly, from the second graph, your training loss is decreasing and so is your validation loss. This means your training is on the right track: neither overfitting nor underfitting.
You reach the overfit condition when your training loss is low and your validation loss is high, which is not the case here. Similarly, you have underfitting when both your training loss and your validation loss are high, which is also not the case here, as depicted in the figure below.
[figure illustrating the overfitting and underfitting loss patterns described above]
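To make that rule of thumb concrete, here is a minimal sketch that inspects the final training and validation losses from the History object returned by model.fit and flags the two situations described above. The threshold values are arbitrary illustrative assumptions, not a standard criterion:

# Illustrative check of the overfit / underfit conditions described above.
# `history` is assumed to be the object returned by model.fit(...).
train_loss = history.history['loss'][-1]      # final training loss
val_loss = history.history['val_loss'][-1]    # final validation loss

if val_loss > 2 * train_loss:                 # validation loss much higher than training loss
    print('Likely overfitting: training loss low, validation loss high')
elif train_loss > 0.5 and val_loss > 0.5:     # both losses still high (threshold is arbitrary)
    print('Likely underfitting: both losses are still high')
else:
    print('Training looks on track: both losses decreasing together')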
Upvotes: 1