Reputation: 815
I am building an encoder-decoder sequence-to-sequence model using LSTMs on a time series. I am following some tutorials, and I am confused about why we need to feed the previous step's prediction as an input to the next step when we are already passing the previous hidden state, which effectively is the prediction from that step. As in the picture below, why do we need to pass Xt+1 as the input to the second decoder cell when we are already passing ht+1?
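To make the question concrete, here is a minimal PyTorch-style sketch of what I understand one decoder step to look like (the names `decoder_cell`, `output_proj`, and `y_prev`, and the feeding of the previous prediction back in, are my own illustration of the pattern from the tutorials, not their actual code):

```python
import torch
import torch.nn as nn

# Illustrative sketch of an autoregressive decoder step (not the tutorial's code).
hidden_size, output_size = 32, 1

decoder_cell = nn.LSTMCell(input_size=output_size, hidden_size=hidden_size)
output_proj = nn.Linear(hidden_size, output_size)  # maps hidden state h_t to a prediction x_t

# State handed over from the encoder (or from the previous decoder step).
h, c = torch.zeros(1, hidden_size), torch.zeros(1, hidden_size)
y_prev = torch.zeros(1, output_size)  # e.g. a start token / last observed value

predictions = []
for _ in range(5):  # decode 5 future steps
    # The previous prediction y_prev is the explicit input (the Xt+1 in the picture);
    # h and c carry the recurrent state (the ht+1). Both are passed at every step.
    h, c = decoder_cell(y_prev, (h, c))
    y_prev = output_proj(h)  # project the hidden state down to an actual prediction
    predictions.append(y_prev)
```

In this sketch the hidden state and the prediction are clearly different tensors, but I don't see why the cell needs the projected prediction as input when the hidden state is already flowing in.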
Thanks,
Upvotes: 0
Views: 6