Reputation: 23
My validation accuracy reaches 100% and stays there for roughly the last 100 epochs, while the training accuracy ranges from 98% to 99%. I am using a neural network with 2 hidden layers. It is a multi-class classification problem.
The training data has 777,385 samples, and the validation set is 20% of it.
Code:
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU

model_Lrelu_3L3N = Sequential()
# hidden layer 1: 49 input features -> 3 units, LeakyReLU activation
model_Lrelu_3L3N.add(Dense(3, input_dim=49, activation='linear'))
model_Lrelu_3L3N.add(LeakyReLU(alpha=.01))
# hidden layer 2: 3 units (input_dim is only honoured on the first layer, so it is dropped here)
model_Lrelu_3L3N.add(Dense(3, activation='linear'))
model_Lrelu_3L3N.add(LeakyReLU(alpha=.01))
# output layer: 9 classes
model_Lrelu_3L3N.add(Dense(9, activation='softmax'))
model_Lrelu_3L3N.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model_Lrelu_3L3N.fit(xcon_train, Ycon_train, validation_split=0.20, batch_size=100, epochs=800)
Upvotes: 0
Views: 2907
Reputation: 873
Not sure what platform you use, but in general the validation and training data are different. The validation set is in most cases smaller than the training set. During training, your network is updated according to gradients computed on the training set; the performance metric of your choice is then evaluated on the validation set. The reason behind this is to spot overfitting: if your network performs well on the training set but badly on the validation set, that is a sign of overfitting, because the network is unable to perform well on data it was not trained on.
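For example, you can plot the two accuracy curves from the history object you already have and watch whether a gap opens up between them (a minimal sketch; the history key names are 'accuracy'/'val_accuracy' in recent Keras and 'acc'/'val_acc' in older versions):

import matplotlib.pyplot as plt

# compare training vs. validation accuracy per epoch
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()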
About the accuracy: in my opinion the only explanation is that your network is doing a good job and the validation set happens to contain samples that your network classifies well. The training set, on the other hand, is much larger and contains a few samples that were misclassified. Try playing with the size of the validation set.
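One way to play with the validation set is to build it yourself instead of relying on validation_split, which in Keras simply takes the last 20% of the arrays before shuffling. A minimal sketch with scikit-learn, assuming xcon_train and Ycon_train are NumPy arrays and the labels are one-hot encoded:

from sklearn.model_selection import train_test_split

# stratify on the class index so every class is represented in the validation set
x_tr, x_val, y_tr, y_val = train_test_split(
    xcon_train, Ycon_train, test_size=0.20,
    stratify=Ycon_train.argmax(axis=1), random_state=42)

history = model_Lrelu_3L3N.fit(x_tr, y_tr,
                               validation_data=(x_val, y_val),
                               batch_size=100, epochs=800)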
Upvotes: 1