Zane

Reputation: 101

Keras: Embedding layer + LSTM: Time Dimension

This might be too stupid to ask ... but ...

When using an LSTM after the initial Embedding layer in Keras (for example, in the Keras LSTM-IMDB tutorial code), how does the Embedding layer know that there is a time dimension? In other words, how does the Embedding layer know the length of each sequence in the training data set? How does the Embedding layer know I am training on sentences, not on individual words? Does it simply infer this during the training process?

Upvotes: 2

Views: 1623

Answers (1)

Marcin Możejko

Reputation: 40516

The Embedding layer is usually either the first or the second layer of your model. If it's the first (usually when you use the Sequential API), then you need to specify its input shape, which is either (seq_len,) or (None,). If it's the second layer (usually when you use the Functional API), then the first layer is an Input layer, and you also need to specify a shape for it. When the shape is (None,), the sequence length is inferred from the batch of data fed to the model.
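To make the shape flow concrete, here is a minimal NumPy sketch (with hypothetical sizes) of what an embedding lookup does. The point is that the layer itself has no notion of "sentence" versus "word": it is a per-token lookup table, and the time dimension comes entirely from the shape of the input you declare and feed in.

```python
import numpy as np

# Hypothetical sizes for illustration.
vocab_size, embed_dim = 1000, 8
embedding_matrix = np.random.rand(vocab_size, embed_dim)

# A batch of 4 "sentences", each a sequence of 10 token ids —
# shape (batch, seq_len). This is what input_shape=(10,) declares.
batch = np.random.randint(0, vocab_size, size=(4, 10))

# The lookup is applied to every token id independently; the sequence
# axis is simply carried through:
# (batch, seq_len) -> (batch, seq_len, embed_dim).
embedded = embedding_matrix[batch]
print(embedded.shape)  # (4, 10, 8)
```

An LSTM placed after the Embedding layer then treats axis 1 of this 3-D output as the time axis, which is why nothing beyond the input shape needs to be "learned" about sequence length.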

Upvotes: 1

Related Questions