KoB81

Reputation: 47

Keras training accuracy only changes a little, and after a few epochs it is always the same

I am trying to train a model that detects whether a person has fallen. I have a dataset containing accelerometer and gyroscope x, y, z data. It covers multiple types of human activity as well as multiple types of falls. I reduced the labels to fall / not fall, so I don't care what kind of activity it actually is. When I try to train the model, the val_accuracy and the accuracy barely change. And what is weird is that at the first epoch the accuracy is already really high. What can I do, or am I missing something? Maybe my dataset isn't good for training?

Here is my model:

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# early_stopping is assumed to be defined elsewhere, e.g.:
# early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

model4 = Sequential()
optimizer = keras.optimizers.Adam()  # note: unused; compile() below creates its own Adam
model4.add(LSTM(units=100, input_shape=(1000, 6), kernel_initializer='normal'))
model4.add(Dense(2, activation='sigmoid'))
model4.compile(loss=keras.losses.CategoricalCrossentropy(),
               optimizer=keras.optimizers.Adam(learning_rate=1e-4),
               metrics=['accuracy'])

history = model4.fit(train_set, train_labels_int, epochs=100, batch_size=100,
                     validation_split=0.1, callbacks=[early_stopping])
_, accuracy = model4.evaluate(train_set, train_labels_int, batch_size=200)

print(accuracy)

And here are the epochs:

Epoch 1/100
81/81 [==============================] - 4s 52ms/step - loss: 0.6623 - accuracy: 0.6632 - val_loss: 0.6074 - val_accuracy: 0.7833
Epoch 2/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4808 - accuracy: 0.8209 - val_loss: 0.4284 - val_accuracy: 0.8489
Epoch 3/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4429 - accuracy: 0.8391 - val_loss: 0.4268 - val_accuracy: 0.8489
Epoch 4/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4412 - accuracy: 0.8391 - val_loss: 0.4270 - val_accuracy: 0.8489
Epoch 5/100
81/81 [==============================] - 4s 48ms/step - loss: 0.4409 - accuracy: 0.8391 - val_loss: 0.4259 - val_accuracy: 0.8489
Epoch 6/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4403 - accuracy: 0.8391 - val_loss: 0.4253 - val_accuracy: 0.8489
Epoch 7/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4402 - accuracy: 0.8391 - val_loss: 0.4258 - val_accuracy: 0.8489
Epoch 8/100
81/81 [==============================] - 4s 48ms/step - loss: 0.4399 - accuracy: 0.8391 - val_loss: 0.4256 - val_accuracy: 0.8489
Epoch 9/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4397 - accuracy: 0.8391 - val_loss: 0.4267 - val_accuracy: 0.8489
Epoch 10/100
81/81 [==============================] - 4s 47ms/step - loss: 0.4394 - accuracy: 0.8391 - val_loss: 0.4276 - val_accuracy: 0.8489
Epoch 11/100
81/81 [==============================] - 4s 48ms/step - loss: 0.4392 - accuracy: 0.8391 - val_loss: 0.4265 - val_accuracy: 0.8489
45/45 [==============================] - 1s 20ms/step - loss: 0.4370 - accuracy: 0.8401
0.8401111364364624

I tried multiple models, but they always end up with val_accuracy = 0.8489. The dataset has shape (9000, 1000, 6): 9000 events, 1000 time steps (5 seconds of recording), and 6 values per time step (3 acceleration and 3 gyroscope). I am normalizing the data with StandardScaler, but without normalizing the result is the same.
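For reference, this is roughly how I apply the scaling (StandardScaler only accepts 2D input, so the data has to be flattened and reshaped back; variable names are just illustrative):

from sklearn.preprocessing import StandardScaler

# StandardScaler expects 2D input: flatten the time axis, fit
# per-channel statistics, then restore the original 3D shape.
n_events, n_steps, n_channels = train_set.shape   # (9000, 1000, 6)
scaler = StandardScaler()
flat = train_set.reshape(-1, n_channels)          # (9000 * 1000, 6)
train_set = scaler.fit_transform(flat).reshape(n_events, n_steps, n_channels)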

Upvotes: 0

Views: 86

Answers (1)

Yannick Funk

Reputation: 1581

The phenomenon you are running into is called underfitting. This happens when the amount or quality of your training data is insufficient, or when your network architecture is too small or otherwise not capable of learning the problem.

Try normalizing your input data and experiment with different network architectures, learning rates and activation functions.
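For example, you could try a somewhat larger stacked LSTM with a softmax output, which pairs naturally with CategoricalCrossentropy (this is only a sketch; the layer sizes, dropout rate and learning rate are starting points to tune, not taken from your code):

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

model = Sequential([
    # return_sequences=True keeps the full sequence so a second LSTM can be stacked
    LSTM(128, return_sequences=True, input_shape=(1000, 6)),
    Dropout(0.2),
    LSTM(64),
    Dense(2, activation='softmax'),  # softmax matches CategoricalCrossentropy
])
model.compile(loss=keras.losses.CategoricalCrossentropy(),
              optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              metrics=['accuracy'])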

Upvotes: 1
