DeeKay

Reputation: 1

Neural network | The problem of fitting the model to the data | MSE value too high

I'm new to neural networks (and this is actually my first time using Python). I've created a simple neural network whose task is to predict inflation from historical data on inflation, average salary, the unemployment rate, and the number of unemployed. Everything worked fine until I decided to use cross-validation to split the training cases into subsets.

The problem is that the MSE is high in the first and last folds. Here is a snippet from the terminal:

Fold 1
1/1 [==============================] - 0s 197ms/step - loss: 306.7101 - accuracy: 0.0000e+00
Error: Mean Squared Error = 306.7101135253906, The neural network model does not cope well with data fitting. The predictions are definitely wrong. Rerun the model because training did not run accurately.    
Fold 2
1/1 [==============================] - 0s 16ms/step - loss: 2.6362 - accuracy: 0.0000e+00
Success: Mean Squared Error = 2.636242389678955, The neural network model does a good job of fitting the data.
Fold 3
1/1 [==============================] - 0s 15ms/step - loss: 1.9960 - accuracy: 0.0000e+00
Success: Mean Squared Error = 1.9960471391677856, The neural network model does a good job of fitting the data.
Fold 4
1/1 [==============================] - 0s 15ms/step - loss: 13.9704 - accuracy: 0.1667
Success: Mean Squared Error = 13.970401763916016, The neural network model does a good job of fitting the data.
Fold 5
1/1 [==============================] - 0s 16ms/step - loss: 130.8911 - accuracy: 0.0000e+00
Error: Mean Squared Error = 130.89105224609375, The neural network model does not cope well with data fitting. The predictions are definitely wrong. Rerun the model because training did not run accurately. 

I tried changing batch_size and epochs, but it didn't help. I also changed the structure of the neural network (the number of layers and neurons, the activation functions, the learning_rate), which only increased the MSE in every fold. I also varied the number of folds, which still produced a high MSE in the first and (usually) last folds.

Below is a code snippet responsible for dividing the learning cases into subsets and for building the model:

# Imports used by this snippet
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adam

# Define the number of folds for cross-validation
n_folds = 5

# Split the data into training and testing sets using KFold
kf = KFold(n_splits=n_folds)
fold = 1

# Define the model
model = Sequential()
model.add(Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(32, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dropout(0.2))
model.add(Dense(16, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(4, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(1, activation='linear'))

# Compile model with a lower learning rate
adam = Adam(learning_rate=0.001)
model.compile(loss='mean_squared_error', optimizer=adam, metrics=['accuracy'])

for train_indices, test_indices in kf.split(inputs):
    print(f"Fold {fold}")
    fold += 1
    
    # Split data into training and testing sets
    train_inputs = inputs[train_indices]
    train_targets = targets[train_indices]
    test_inputs = inputs[test_indices]
    test_targets = targets[test_indices]
    
    # Train the model on the training data
    history = model.fit(
        train_inputs,
        train_targets,
        epochs=400,
        batch_size=40,
        verbose=0
    )
    
    # Evaluate the model on the test set
    test_loss = model.evaluate(test_inputs, test_targets)
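One detail worth noting about the snippet above: in Keras, a model created once outside the loop keeps its trained weights from fold to fold, so every fold after the first continues training on already-fitted weights rather than starting fresh. A minimal sketch of rebuilding the model inside each fold (the `build_model` helper and the synthetic stand-in data are mine, not from the original code):

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam

# Synthetic stand-in for the real inputs/targets (54 samples, 4 features)
rng = np.random.default_rng(0)
inputs = rng.normal(size=(54, 4))
targets = rng.normal(size=(54,))

def build_model():
    # Return a fresh, untrained model so folds are independent
    model = Sequential([
        Input(shape=(4,)),
        Dense(16, activation='relu'),
        Dense(1, activation='linear'),
    ])
    model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.001))
    return model

mses = []
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(inputs), start=1):
    model = build_model()  # re-initialize weights for every fold
    model.fit(inputs[train_idx], targets[train_idx],
              epochs=10, batch_size=8, verbose=0)
    mse = model.evaluate(inputs[test_idx], targets[test_idx], verbose=0)
    mses.append(mse)
    print(f"Fold {fold}: MSE = {mse:.4f}")
```

With weight carry-over, fold 1 is the only fold evaluated on a model that has never seen any of the data before, which could partly explain why its MSE stands out.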

I wonder whether the problem is on the data side. I have the impression that the model is overfitting. There are only about 54 input/output pairs, which may be quite small. Unfortunately, I have no way to increase the amount of data, because it is historical data that simply has not been collected in larger quantities. Is there a way to increase model accuracy without interfering with the data?
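One accuracy lever that does not change the data itself, and is not visible in the snippet above, is standardizing the features per fold, fitting the scaler on the training split only so no test-fold statistics leak in. A sketch, using placeholder data in place of the real `inputs` array:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

# Stand-in for the real features (salary, unemployment, etc. live on
# very different numeric scales, which raw MSE training is sensitive to)
rng = np.random.default_rng(1)
inputs = rng.normal(loc=100.0, scale=25.0, size=(54, 4))

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, test_idx in kf.split(inputs):
    scaler = StandardScaler().fit(inputs[train_idx])  # fit on training fold only
    train_scaled = scaler.transform(inputs[train_idx])
    test_scaled = scaler.transform(inputs[test_idx])
    # train_scaled now has roughly zero mean and unit variance per feature
```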

Previously, the data was divided into training, validation, and test sets in 70/15/15 proportions. Back then, the MSE was around 20, which is still a lot, but not as much.
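For reference, a 70/15/15 split like the one described can be reproduced with two calls to scikit-learn's train_test_split (placeholder data again; the exact counts differ slightly because of rounding on 54 samples):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in for the real dataset (54 samples, 4 features)
rng = np.random.default_rng(2)
inputs = rng.normal(size=(54, 4))
targets = rng.normal(size=(54,))

# First carve out ~70% for training, then split the remaining ~30% in half
X_train, X_rest, y_train, y_rest = train_test_split(
    inputs, targets, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42)
```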


If it helps to understand the problem:

Entire neural network code: https://pastebin.com/G6cAkPA6

JSON with data: https://pastebin.com/B5fEYjTF

Upvotes: 0

Views: 209

Answers (0)
