Reputation: 21
I was trying to train my model, and while it trains, multiple accuracy values are printed and then overwritten as the epoch progresses.
I see 3 ETAs within a single epoch when there should only be one.
Also, the training accuracy is not increasing. I have trained for 15 epochs and the training accuracy stays exactly the same.
import tensorflow as tf

model = tf.keras.models.Sequential([
    # Block 1: two 3x3 convolutions, then downsample and regularize
    tf.keras.layers.Conv2D(64, kernel_size=(3, 3), input_shape=X_train.shape[1:], activation='relu'),
    tf.keras.layers.Conv2D(64, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    tf.keras.layers.Dropout(0.2),
    # Block 2
    tf.keras.layers.Conv2D(64, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.Conv2D(64, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    tf.keras.layers.Dropout(0.2),
    # Block 3
    tf.keras.layers.Conv2D(128, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.Conv2D(128, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    # Classifier head
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(8 * 128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(8 * 128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(7, activation='softmax')
])

model.compile(optimizer='SGD', loss=tf.keras.losses.categorical_crossentropy, metrics=['accuracy'])
model.fit(X_train, y_train, epochs=2, verbose=1, shuffle=True, batch_size=64, validation_split=0.05)
Upvotes: 0
Views: 45
Reputation: 940
i. You're seeing multiple accuracy values because multiple batches are being processed; they are merged into a single per-epoch value later. Don't worry about it, it's a normal process. If you still want to learn more about it, take a look at TensorFlow multiprocessing and backpropagation.
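If the overlapping progress output itself bothers you (it often looks garbled in consoles and notebooks), a minimal workaround, assuming the same model and data as in the question, is to lower the verbosity so Keras prints one summary line per epoch instead of a live progress bar with an ETA:

# Same call as in the question, but verbose=2 prints one line per epoch
# instead of a continuously updated progress bar / ETA.
model.fit(X_train, y_train, epochs=2, verbose=2, shuffle=True, batch_size=64, validation_split=0.05)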
ii. It depends on your data. To increase accuracy you can try the following:
Pre-process your data (e.g. normalize the pixel values).
Try using data augmentation (see the sketch after this list).
Maybe your data needs a more complex architecture; try adding a few more layers. I would try removing the dropout layers first.
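For example, a minimal augmentation sketch using Keras' ImageDataGenerator (the specific transforms and values below are illustrative assumptions, not tuned for your data):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings; adjust them for your images.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # simple pre-processing: scale pixel values to [0, 1]
    rotation_range=10,        # small random rotations
    width_shift_range=0.1,    # small random horizontal shifts
    height_shift_range=0.1,   # small random vertical shifts
    horizontal_flip=True)     # random left/right flips

# flow() yields augmented batches from the in-memory arrays.
model.fit(datagen.flow(X_train, y_train, batch_size=64), epochs=15)

Note that if you rescale here, you should apply the same rescaling to any data you later evaluate or predict on.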
Please also try training the same architecture with a higher learning rate for 15-20 epochs. If you still see no difference, try the methods mentioned above.
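For instance, a minimal sketch with an explicit SGD learning rate (0.05 is just an illustrative value; the plain 'SGD' string in your compile call uses the default of 0.01):

# Recompile the same model with a higher, explicit learning rate.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, verbose=1, shuffle=True, batch_size=64, validation_split=0.05)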
Upvotes: 1