I am trying to implement gesture classification with CNNs, using BVH files containing joint data points as input. I have verified that the data import is correct and that the recognition pipeline itself works well; I have already tested it on other datasets with the same data format.
However, training the network on this specific dataset produces a huge test loss, and I have no idea why. Below is an example of my results:
Epoch 119/500
7/7 [==============================] - 1s 178ms/step - loss: 0.4896 - accuracy: 0.9977 - val_loss: 1.1887 - val_accuracy: 0.7182
Epoch 120/500
7/7 [==============================] - 1s 185ms/step - loss: 0.4886 - accuracy: 1.0000 - val_loss: 1.1803 - val_accuracy: 0.7182
Epoch 121/500
7/7 [==============================] - 1s 178ms/step - loss: 0.4876 - accuracy: 0.9977 - val_loss: 1.1760 - val_accuracy: 0.7182
Final Test Results: Accuracy = 65.18%, Loss = 165731.6406
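Since the validation loss stays around 1.18 while the test loss explodes, one thing I looked at is whether the test inputs end up on a completely different scale than the training inputs. A minimal sanity check along these lines (with stand-in arrays instead of my real data):

```python
import numpy as np

def summarize(arr):
    # Basic statistics of an input array; if preprocessing matched,
    # train and test values should land in the same range.
    arr = np.asarray(arr, dtype=np.float64)
    return {
        "min": float(arr.min()),
        "max": float(arr.max()),
        "mean": float(arr.mean()),
        "std": float(arr.std()),
        "nans": int(np.isnan(arr).sum()),
    }

# Stand-in data shaped like (samples, max_frames, features);
# the second array mimics a test set that was never normalized.
rng = np.random.default_rng(0)
X_train_demo = rng.normal(0.0, 1.0, size=(100, 50, 63))
X_test_demo = rng.normal(0.0, 100.0, size=(20, 50, 63))

print(summarize(X_train_demo))
print(summarize(X_test_demo))
```

A large gap in these statistics (or any NaNs) between train and test would explain a loss that is orders of magnitude larger at evaluation time.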
Here is the relevant part of my code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, LeakyReLU, BatchNormalization,
                                     MaxPooling1D, GlobalMaxPooling1D,
                                     Dropout, Dense)
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import train_test_split

model = Sequential([
Conv1D(64, kernel_size=3, kernel_regularizer=l2(0.01), input_shape=(max_frames,
X_train.shape[2])),
LeakyReLU(alpha=0.01),
BatchNormalization(),
MaxPooling1D(pool_size=2),
Dropout(0.3),
Conv1D(128, kernel_size=3, kernel_regularizer=l2(0.01)),
LeakyReLU(alpha=0.01),
BatchNormalization(),
MaxPooling1D(pool_size=2),
Dropout(0.3),
Conv1D(32, kernel_size=3, kernel_regularizer=l2(0.01)), # Added additional Conv1D layer
LeakyReLU(alpha=0.01),
BatchNormalization(),
Dropout(0.3),
GlobalMaxPooling1D(),
Dense(64),
LeakyReLU(alpha=0.01),
BatchNormalization(),
Dropout(0.3),
Dense(classes, activation='softmax')])
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, random_state=42, stratify=y_train.argmax(axis=1))
optimizer = Adam(learning_rate=0.0001, clipvalue=1.0)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
early_stopping = EarlyStopping(monitor='val_loss', patience=30, restore_best_weights=True, verbose=0)
final_history = model.fit(
X_train, y_train,
validation_data=(X_val, y_val),
epochs=500,
batch_size=64,
callbacks=[early_stopping],
verbose=1)
test_loss, test_accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f"\nFinal Test Results: Accuracy = {test_accuracy * 100:.2f}%, Loss = {test_loss:.4f}")
Any ideas on what could be wrong? I tried many things, like checking my data again, changing parameters, optimizers, learning rates, etc., but nothing helped.
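Regarding the data checks: the pattern I believe is correct is computing normalization statistics from the training split only and then reusing them on validation and test data. A minimal NumPy sketch of that idea (array shapes and names are placeholders, not my real pipeline):

```python
import numpy as np

def fit_stats(X_train):
    # Per-feature mean/std over all samples and frames,
    # computed from the TRAINING split only.
    mean = X_train.mean(axis=(0, 1), keepdims=True)
    std = X_train.std(axis=(0, 1), keepdims=True) + 1e-8
    return mean, std

def normalize(X, mean, std):
    # Apply the training statistics; never refit on val/test data.
    return (X - mean) / std

rng = np.random.default_rng(1)
X_tr = rng.normal(5.0, 2.0, size=(80, 50, 63))  # stand-in training data
X_te = rng.normal(5.0, 2.0, size=(20, 50, 63))  # stand-in test data

mean, std = fit_stats(X_tr)
X_tr_s = normalize(X_tr, mean, std)
X_te_s = normalize(X_te, mean, std)
print(round(float(X_tr_s.std()), 2))  # ~1.0
```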