Anthony Arena

Reputation: 347

Accuracy of 1.0 while Training Loss and Validation Loss still decreasing

I have created an LSTM RNN to predict whether someone is driving or not based on GPS coordinates. Here is a sample of the data (note: x, y, z are 3D coordinates converted from lat/lon):

                        x         y         z         trip_id  mode_cat  weekday  period_of_day
datetime            id
2011-08-27 06:13:01 20  0.650429  0.043524  0.758319  1        1         1        0
2011-08-27 06:13:02 20  0.650418  0.043487  0.758330  1        1         1        0
2011-08-27 06:13:03 20  0.650421  0.043490  0.758328  1        1         1        0
2011-08-27 06:13:04 20  0.650427  0.043506  0.758322  1        1         1        0
2011-08-27 06:13:05 20  0.650438  0.043516  0.758312  1        1         1        0

When I train my network, training_loss and validation_loss both decrease, but accuracy reaches 1.0 on the first epoch. I made sure that my training and testing data are not the same. Here is how I split them:

# Last ~20% of trip_ids go to the test set (trips are numbered in time order)
t_num_test = df["trip_id"].iloc[-1]*4//5
train_test_df = df.loc[df["trip_id"] <= t_num_test].copy(deep=True)
test_test_df = df.loc[df["trip_id"] > t_num_test].copy(deep=True)
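
To verify the split, here is a quick check that no trip_id ends up in both sets:

# Confirm the split by trip_id is clean: no trip in both train and test
train_ids = set(train_test_df["trip_id"])
test_ids = set(test_test_df["trip_id"])
assert train_ids.isdisjoint(test_ids), "trip_id appears in both splits!"
print(len(train_ids), "train trips,", len(test_ids), "test trips")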

# Training set: index rows by (datetime, id); trip_id, weekday and
# period_of_day are kept as input features, mode_cat is the label
features_train = train_test_df[["x","y","z","datetime","id","trip_id","mode_cat","weekday","period_of_day"]]
features_train.set_index(["datetime","id"], inplace=True)
dataset_train_x = features_train[["x","y","z","trip_id","weekday","period_of_day"]].values
dataset_train_y = features_train[["mode_cat"]].values

# Same preparation for the test set
features_test = test_test_df[["x","y","z","datetime","id","trip_id","mode_cat","weekday","period_of_day"]]
features_test.set_index(["datetime","id"], inplace=True)
dataset_test_x = features_test[["x","y","z","trip_id","weekday","period_of_day"]].values
dataset_test_y = features_test[["mode_cat"]].values

And here is how I have built my network:

# One LSTM unit, dropout, then a sigmoid output for binary classification
single_step_model = tf.keras.models.Sequential()
single_step_model.add(tf.keras.layers.LSTM(1,
                                           input_shape=x_train_single.shape[-2:]))
single_step_model.add(tf.keras.layers.Dropout(0.2))
single_step_model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

single_step_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                          loss='binary_crossentropy',
                          metrics=['accuracy'])
.
.
.
single_step_history = single_step_model.fit(train_data_single, epochs=epochs,
                                            steps_per_epoch=evaluation_interval,
                                            validation_data=test_data_single,
                                            validation_steps=60)
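
For reference, the elided code builds the windowed arrays (x_train_single and friends) and the tf.data pipelines along these lines; the window length of 60 and batch size of 256 below are illustrative, not my exact values:

import numpy as np
import tensorflow as tf

def to_windows(features, labels, history_size):
    # Slice the flat per-second rows into overlapping windows of shape
    # (history_size, n_features); each window is labelled by the next row
    x, y = [], []
    for i in range(history_size, len(features)):
        x.append(features[i - history_size:i])
        y.append(labels[i])
    return np.array(x), np.array(y)

history_size = 60  # illustrative: one minute of 1 Hz samples
x_train_single, y_train_single = to_windows(dataset_train_x, dataset_train_y, history_size)
x_test_single, y_test_single = to_windows(dataset_test_x, dataset_test_y, history_size)

# Batched, repeating tf.data pipelines, to be used with steps_per_epoch
train_data_single = (tf.data.Dataset
                     .from_tensor_slices((x_train_single, y_train_single))
                     .cache().shuffle(10000).batch(256).repeat())
test_data_single = (tf.data.Dataset
                    .from_tensor_slices((x_test_single, y_test_single))
                    .batch(256).repeat())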

And here is the graph displaying training loss, validation loss, and accuracy:

What could be causing this outcome? If it matters, I'm using approximately 500,000 data points with approximately 8,000 unique trip_ids.

Please advise

EDIT: # of Driving/Not Driving (Mode_cat: 1/0)

Upvotes: 0

Views: 273

Answers (1)

sam

Reputation: 1896

Hope this helps!

A few cases I could think of:

  1. Your dataset is biased. It could be that most of the input data is skewed. Check the % of each mode_cat value (are all of them 1, or most of them?) - see the snippet after this list.

  2. Your X values could contain a feature/column that y is a direct function of (like y_val = m * x_col2 + x_col3?) - the same snippet includes a rough check for this.

  3. Accuracy is good to learn with, but try something like an F1 score or a confusion matrix as well - see the example after the links below.
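
You can check points 1 and 2 with a couple of lines of pandas (using the df from your question):

# Point 1: class balance - what fraction of rows are driving (1) vs not (0)?
print(df["mode_cat"].value_counts(normalize=True))

# Point 2: rough leakage check - correlation of each input column with the
# label; a value near +/-1.0 suggests the label is a function of that column
cols = ["x", "y", "z", "trip_id", "weekday", "period_of_day"]
print(df[cols].corrwith(df["mode_cat"]))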

Links:

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html#sklearn.metrics.confusion_matrix
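
For example, with your model (assuming the x_test_single/y_test_single arrays from your fit pipeline):

from sklearn.metrics import confusion_matrix, f1_score

# Threshold the sigmoid outputs at 0.5 to get hard 0/1 predictions
y_prob = single_step_model.predict(x_test_single)
y_pred = (y_prob > 0.5).astype(int).ravel()
y_true = y_test_single.ravel()

print("F1:", f1_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))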

Upvotes: 1
