Splash

Reputation: 127

Why Keras Embedding layer's input_dim = vocab_size + 1

In this code snippet from the TensorFlow tutorial Basic text classification,

model = tf.keras.Sequential([
  layers.Embedding(max_features + 1, embedding_dim),
  layers.Dropout(0.2),
  layers.GlobalAveragePooling1D(),
  layers.Dropout(0.2),
  layers.Dense(1)])

As far as I understand, max_features is the size of the vocabulary (with index 0 reserved for padding and index 1 for OOV tokens).
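For reference, here is a minimal sketch (my own, not from the tutorial) of how TextVectorization assigns those indices; max_tokens plays the role of max_features, and the two-sentence corpus is made up:

import tensorflow as tf

# Tiny made-up corpus, just to show how TextVectorization assigns indices.
vectorize_layer = tf.keras.layers.TextVectorization(
    max_tokens=5,        # plays the role of max_features
    output_mode='int')
vectorize_layer.adapt(["the cat sat", "the dog sat"])

print(vectorize_layer.get_vocabulary())
# e.g. ['', '[UNK]', 'the', 'sat', 'cat'] -- index 0 is padding, index 1 is OOV,
# so the largest index the layer can ever emit is max_tokens - 1.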

Also, I've experimented with setting layers.Embedding(max_features, embedding_dim), and the tutorial still runs through successfully (screenshot below).

So why do we need input_dim=max_features + 1 here?

[screenshot: model training output]

Upvotes: 3

Views: 2497

Answers (3)

MasterOne Piece

Reputation: 461

Vocabulary Size = Maximum Integer Index + 1

Example:
a[0] = 'item 1'
a[1] = 'item 2'
a[2] = 'item 3'
...
Maximum Integer Index = 2
Vocabulary Size = 3
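A minimal runnable sketch of this rule (my own illustration; the layer sizes are arbitrary):

import numpy as np
import tensorflow as tf

# Indices 0, 1, 2 exist, so input_dim must be at least 3 (maximum index + 1).
emb = tf.keras.layers.Embedding(input_dim=3, output_dim=4)

print(emb(np.array([0, 1, 2])).shape)  # (3, 4) -- every valid index works
# emb(np.array([3]))  # out of range for input_dim=3: may raise an error on
#                     # CPU or silently return zeros on GPU, so avoid it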

Upvotes: 1

grim_trigger

Reputation: 169

The example is very misleading, and arguably wrong, even though the example code doesn't actually fail in that execution context.

The embedding layer's input dimension, per the Embedding layer documentation, is the maximum integer index + 1, not the vocabulary size + 1, which is what the author of the example you cite used.

[screenshot: Embedding layer documentation]

In my toy example below, you can see how the 0-based integer index works out:

[screenshots: toy example output]
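A runnable sketch of the same idea, assuming a tiny 4-token vocabulary (the sizes here are made up):

import numpy as np
import tensorflow as tf

# With 4 tokens the maximum 0-based index is 3, so
# input_dim = maximum index + 1 = 4 = vocabulary size (no extra "+ 1").
vocab_size = 4
emb = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=2)

ids = np.array([[0, 1, 2, 3]])  # every index the vocabulary can produce
print(emb(ids).shape)           # (1, 4, 2)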

Frankly, it looks like the writer just got lucky because the tutorial uses the Sequential model type and never needs to serialize the model. In that special case, the example code happens to work.

Upvotes: 1

Jifu Zhao

Reputation: 21

I have the same question. I think they made a mistake here. They may originally have intended this for an RNN with padding, where 0 is not part of the vocabulary; in that case, the words occupy indices 1 through max_features, so max_features + 1 is the input dimension.
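A minimal sketch of that padding convention (my own example, assuming word indices start at 1 and 0 is reserved for padding):

from tensorflow.keras.preprocessing.sequence import pad_sequences

# If real words occupy indices 1..max_features and 0 is reserved for padding,
# the Embedding layer must accept max_features + 1 distinct indices.
seqs = [[1, 5, 2], [3, 4]]
print(pad_sequences(seqs, maxlen=4))
# [[0 1 5 2]
#  [0 0 3 4]]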

Upvotes: 0
