snxmx

Reputation: 65

General question about time series forecasting

I have a general question about time series forecasting in machine learning. It's not about coding yet, and I'm just trying to understand how I should build the model.

Below is some code I have related to my model:

import tensorflow as tf

def build_model(my_learning_rate, feature_layer):
  # Simple feed-forward regression model: feature layer -> hidden Dense -> single output.
  model = tf.keras.models.Sequential()
  model.add(feature_layer)
  model.add(tf.keras.layers.Dense(units=64, activation="relu"))
  model.add(tf.keras.layers.Dense(units=1))
  model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=my_learning_rate),
                loss="mean_squared_error",
                metrics=[tf.keras.metrics.RootMeanSquaredError()])
  return model

Here is my feature layer:

<tf.Tensor: shape=(3000, 31), dtype=float32, numpy=
array([[0., 0., 1., ..., 0., 0., 0.],
       [0., 0., 1., ..., 0., 0., 0.],
       [0., 0., 1., ..., 0., 0., 0.],
       ...,
       [0., 0., 1., ..., 1., 0., 0.],
       [0., 0., 1., ..., 0., 1., 0.],
       [0., 0., 1., ..., 0., 0., 1.]], dtype=float32)>

The time series forecasting technique I learned recently is totally different from how I have been building the model. The technique involves time windows that use past values (my labels!) as features and the next value as the label, and it also involves RNNs and LSTMs.
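To make sure I understand the windowing idea, here is a rough sketch of what I think it means (the window size and array names are just placeholders I chose for illustration, not part of my actual code):

    import numpy as np

    def make_windows(series, n_steps):
        """Turn a 1-D series into (past-values, next-value) pairs."""
        X, y = [], []
        for i in range(len(series) - n_steps):
            X.append(series[i:i + n_steps])   # the previous n_steps values become features
            y.append(series[i + n_steps])     # the very next value becomes the label
        return np.array(X), np.array(y)

    # e.g. series = [1, 2, 3, 4, 5], n_steps = 3  ->  X = [[1,2,3], [2,3,4]], y = [4, 5]
    X, y = make_windows(np.arange(1, 6, dtype="float32"), n_steps=3)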

Are the way I built the model and the time series forecasting technique fundamentally different, and will they generate different outcomes? Is the way I have been modeling this reasonable, or should I switch to the proper time series forecasting approach?

Upvotes: 1

Views: 338

Answers (1)

Victor Sim

Reputation: 368

Yes. LSTM and recurrent layers are usually used for time series, because the data from previous timestamps is essential for building a model that makes accurate and precise predictions. For example, when I build time series models, I usually use time-distributed 1-dimensional convolutional layers feeding into an LSTM. Code below:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv1D, Flatten, LSTM, BatchNormalization, Dense

model = Sequential()
# Apply the same Conv1D to every sub-sequence of each input window.
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=1, activation='relu'),
                          input_shape=(None, n_steps, n_features)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(100, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1))  # single forecast value

If you want to implement this yourself, you must reshape the original X array into windows of n_steps (timestamps) and n_features (the number of features in the data).
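A rough sketch of that reshape and of fitting the model above, assuming X has already been split into windows; the shapes, the sub-sequence split, and the dummy random data are only for illustration:

    import numpy as np

    # Dummy data: 32 windows, each with 8 past timestamps of 31 features.
    n_samples, n_timestamps, n_features = 32, 8, 31
    X = np.random.rand(n_samples, n_timestamps, n_features).astype("float32")
    y = np.random.rand(n_samples, 1).astype("float32")

    # The TimeDistributed(Conv1D) wrapper above expects an extra "sub-sequence" axis,
    # i.e. (samples, subsequences, n_steps, n_features). Here each window of 8
    # timestamps is split into 2 sub-sequences of 4 steps each.
    n_subseq, n_steps = 2, 4
    X = X.reshape(n_samples, n_subseq, n_steps, n_features)

    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=8)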

Hope this helps!

Upvotes: 1
