Reputation: 1
I keep getting flat error curves when training a conventional backpropagation neural network. I am using a Keras Sequential model with the Adam optimiser. The network gives about 80% accuracy on both the training and test data. Can anyone explain why the error curves are flat (see attached figure)? Also, is there any way to improve my results?
def build_model():
    model = keras.Sequential()
    # input_shape expects a list/tuple of dimensions, not a bare int
    model.add(layers.Dense(128, activation=tf.nn.relu,
                           input_shape=[len(normed_train_data.keys())]))
    # input_shape is only needed on the first layer
    model.add(layers.Dense(128, activation=tf.nn.relu))
    model.add(layers.Dense(4))
    model.compile(loss='mean_squared_error', optimizer='Adam',
                  metrics=['mae', 'mse', 'accuracy'])
    return model
def plot_history(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch
    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [per]')
    plt.plot(hist['epoch'], hist['mean_absolute_error'], label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_absolute_error'], label='Val Error')
    plt.legend()
    plt.ylim([0, 200])
    plt.show()
And in the main function,
model = build_model()
model.summary()
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
                    validation_split=0.2, verbose=0, callbacks=[PrintDot()])
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plot_history(history)
Error plots:
Error plot with reduced learning rate
Upvotes: 0
Views: 798
Reputation: 43
It is difficult to assess this without more information about your data; can you share a sample? But I'd hazard a guess that your model overfits very quickly. Things that you can try:
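One standard remedy for a model that overfits quickly is early stopping: halt training once the validation loss stops improving for a fixed number of epochs (the "patience"). In Keras this is provided by the built-in `keras.callbacks.EarlyStopping` callback; the logic behind it can be sketched in plain Python (`early_stop_epoch` is a hypothetical helper for illustration, not a Keras API):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training would stop, or None.

    Stops once the validation loss has failed to improve on its best
    value for `patience` consecutive epochs.
    """
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss   # new best: reset the patience counter
            wait = 0
        else:
            wait += 1     # no improvement this epoch
            if wait >= patience:
                return epoch
    return None  # never triggered within the recorded history

# Example: loss improves for two epochs, then drifts up.
early_stop_epoch([1.0, 0.8, 0.81, 0.82, 0.83], patience=3)  # -> 4
```

With the real callback you would pass `callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)]` to `model.fit`, alongside any other callbacks.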
Upvotes: 0