Reputation: 85
I'm very new to deep learning models, and I'm trying to train a multi-label text classification model using an LSTM. I have around 2600 records across 4 categories, and I'm using 80% for training and the rest for validation.
There is nothing complex in the code: I read a csv, tokenize the data and feed it to the model. But after 3-4 epochs the validation loss becomes greater than 1 while the training loss tends to zero. As far as I can tell, this is a case of overfitting. To overcome it I tried different layers and changed the number of units, but the problem remains exactly as it was. If I stop at 1-2 epochs, the predictions come out wrong.
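For context, the tokenization is the standard Keras Tokenizer plus pad_sequences, roughly like this (a sketch; the exact code is in the Colab linked at the end, and X_train/X_test stand for the raw text columns from the csv):
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(oov_token='<OOV>')
tokenizer.fit_on_texts(X_train)          # X_train: raw text column read from the csv
vocab_len = len(tokenizer.word_index)    # feeds MAX_NB_WORDS below

sequences = pad_sequences(tokenizer.texts_to_sequences(X_train),
                          maxlen=50, padding='post')
sequences_test = pad_sequences(tokenizer.texts_to_sequences(X_test),
                               maxlen=50, padding='post')
# y_train / y_test are one-hot vectors over the 4 categories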
Below is my model creation code:-
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Embedding, GRU, LSTM, Dropout, Dense

ACCURACY_THRESHOLD = 0.75

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        print(logs.get('val_accuracy'))
        fname = 'Arabic_Model_' + str(logs.get('val_accuracy')) + '.h5'
        if logs.get('val_accuracy') > ACCURACY_THRESHOLD:
            #print("\nWe have reached %2.2f%% accuracy, so we will stopping training." %(acc_thresh*100))
            #self.model.stop_training = True
            self.model.save(fname)
            #from google.colab import files
            #files.download(fname)

# The maximum number of words to be used (most frequent).
MAX_NB_WORDS = vocab_len
# Max number of words in each complaint.
MAX_SEQUENCE_LENGTH = 50
# This is fixed.
EMBEDDING_DIM = 100

callbacks = myCallback()
def create_model(vocabulary_size, seq_len):
    model = models.Sequential()
    model.add(Embedding(input_dim=MAX_NB_WORDS + 1, output_dim=EMBEDDING_DIM,
                        input_length=seq_len, mask_zero=True))
    model.add(GRU(units=64, return_sequences=True))
    model.add(Dropout(0.4))
    model.add(LSTM(units=50))
    #model.add(LSTM(100))
    #model.add(Dropout(0.4))
    #Bidirectional(tf.keras.layers.LSTM(embedding_dim))
    #model.add(Bidirectional(LSTM(128)))
    model.add(Dense(50, activation='relu'))
    #model.add(Dense(200, activation='relu'))
    model.add(Dense(4, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.summary()
    return model

model = create_model(MAX_NB_WORDS, MAX_SEQUENCE_LENGTH)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_4 (Embedding) (None, 50, 100) 2018600
_________________________________________________________________
gru_2 (GRU) (None, 50, 64) 31680
_________________________________________________________________
dropout_10 (Dropout) (None, 50, 64) 0
_________________________________________________________________
lstm_6 (LSTM) (None, 14) 4424
_________________________________________________________________
dense_7 (Dense) (None, 50) 750
_________________________________________________________________
dropout_11 (Dropout) (None, 50) 0
_________________________________________________________________
dense_8 (Dense) (None, 4) 204
=================================================================
Total params: 2,055,658
Trainable params: 2,055,658
Non-trainable params: 0
_________________________________________________________________
model.fit(sequences, y_train, validation_data=(sequences_test, y_test),
          epochs=25, batch_size=5, verbose=1,
          callbacks=[callbacks])
It would be very helpful if I could get a sure-fire way to overcome the overfitting. You can refer to the Colab below to see the complete code:-
https://colab.research.google.com/drive/13N94kBKkHIX2TR5B_lETyuH1QTC5VuRf?usp=sharing
Edit: I am now using a pre-trained embedding layer which I created with gensim, but the accuracy has now decreased. Also, my record size is now 4643.
Attaching the code below:- in this, 'English_dict.p' is the dictionary I created using gensim.
from pickle import load
from numpy import zeros

embeddings_index = load(open('English_dict.p', 'rb'))
vocab_size = len(embeddings_index) + 1

# Build the weight matrix for the Embedding layer from the gensim vectors.
embedding_model = zeros((vocab_size, 100))
for word, i in tokenizer.word_index.items():   # tokenizer fitted on the training texts
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_model[i] = embedding_vector

# input_dim must match the first dimension of the weight matrix (vocab_size)
model.add(Embedding(input_dim=vocab_size, output_dim=EMBEDDING_DIM,
                    weights=[embedding_model], trainable=False,
                    input_length=seq_len, mask_zero=True))
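For reference, 'English_dict.p' was produced with gensim roughly along these lines (a sketch assuming gensim 4.x; vector_size, window and min_count are assumptions, and tokenized_texts stands for the list of token lists built from the csv):
from gensim.models import Word2Vec
from pickle import dump

w2v = Word2Vec(sentences=tokenized_texts, vector_size=100,
               window=5, min_count=1, workers=4)
# word -> 100-dim vector dictionary, saved and later loaded as embeddings_index
embeddings_index = {word: w2v.wv[word] for word in w2v.wv.index_to_key}
dump(embeddings_index, open('English_dict.p', 'wb'))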
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, 50, 100) 2746300
_________________________________________________________________
gru_2 (GRU) (None, 50, 64) 31680
_________________________________________________________________
dropout_2 (Dropout) (None, 50, 64) 0
_________________________________________________________________
lstm_2 (LSTM) (None, 128) 98816
_________________________________________________________________
dense_3 (Dense) (None, 50) 6450
_________________________________________________________________
dense_4 (Dense) (None, 4) 204
=================================================================
Total params: 2,883,450
Trainable params: 137,150
Non-trainable params: 2,746,300
_________________________________________________________________
Let me know if I am doing anything wrong. You can refer to the Colab above for the complete code.
Upvotes: 0
Views: 1726
Reputation: 2316
Yes, it is classic overfitting. Why it is happening: the neural network has more than 2 million trainable parameters (2,055,658), while you have only about 2600 records (and only 80% of those are used for training). The network is too big, so instead of generalizing it memorizes the training data.
How to solve:
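One direction consistent with the diagnosis above is to shrink the network (and add some regularization) so that the number of trainable weights is much closer to the number of training examples. A minimal sketch, where the exact layer sizes are assumptions, not a definitive architecture:
from tensorflow.keras import models, layers, regularizers

def create_small_model(vocabulary_size, seq_len):
    model = models.Sequential()
    # much smaller embedding and recurrent layer than the ~2M-parameter original
    model.add(layers.Embedding(input_dim=vocabulary_size + 1, output_dim=32,
                               input_length=seq_len, mask_zero=True))
    model.add(layers.LSTM(16, dropout=0.3, recurrent_dropout=0.3))
    model.add(layers.Dense(4, activation='softmax',
                           kernel_regularizer=regularizers.l2(1e-3)))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
Early stopping on the validation loss and the frozen pre-trained embeddings from your edit push in the same direction, since far fewer weights then have to be fit from ~2600 records.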
Upvotes: 2