whtitefall

Reputation: 671

LSTM predict one result at a time

I am trying to predict a single result from my LSTM model.

My model has n_features = 32 and time_step = 100, with the following code:

  model = tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(time_step, n_features)),
      tf.keras.layers.LSTM(64),
      tf.keras.layers.Dense(1),
  ])

I trained my model using a generator:

generator = TimeseriesGenerator(x_feature, y_target, length=time_step, batch_size=128)

When I try to make predictions with my model on a test dataset of shape (2, 32), which has 2 rows and 32 features,

(I'm planning to get 2 predictions from my model)

I get the following error:

ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. 
Full shape received: [None, 32]

I understand this is because my test dataset has shape [None, 32], but how can I reshape it so that it has shape (100, 32)?

I tried to reshape it using:

x_feature.reshape(-1, 100, 36)
model.predict(x_feature)

However, it shows:

ValueError: cannot reshape array of size 64 into shape (100,36)

How can I deal with this reshape problem, when my model's input shape is (100, 36) but my test dataset has shape (2, 36)?

Thank you!

Upvotes: 0

Views: 1129

Answers (1)

Bashir Kazimi

Reputation: 1377

A Keras model with LSTM layers always expects inputs of shape (batch_size, time_steps, n_features). Training works because you train on multiple examples, i.e., with a batch dimension. When you predict on a single example, however, you must add the batch dimension yourself. Say your single example x has shape (time_steps, n_features); you should use:

import numpy
x = numpy.expand_dims(x, 0)

which converts x to shape (1, time_steps, n_features), so the model treats it as a batch of size 1. Now if you call

output = model.predict(x)

your output will be an array with one element, so output[0] is the prediction for your original x.
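Putting the above together, a minimal runnable sketch (with random data standing in for your real features) looks like this:

```python
import numpy as np
import tensorflow as tf

time_step, n_features = 100, 32

# Same architecture as in the question.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(time_step, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])

x = np.random.rand(time_step, n_features)  # a single example, shape (100, 32)
x = np.expand_dims(x, 0)                   # add the batch dimension -> (1, 100, 32)

output = model.predict(x)
print(output.shape)  # (1, 1); output[0] is the prediction for x
```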

The error you mentioned occurs because you are not adding the batch dimension, so the model complains that it received an input with ndim=2 instead of 3. The modification suggested above resolves that error; however, it still will not solve your actual problem, because you trained the model with time_steps of 100 and 32 features, so it will only work if each test example also has 100 time steps.

Upvotes: 1
