Reputation: 1679
I am trying to build a text classification model in tensorflow, following one of Francois Chollet's tutorials from his book. I am trying to first create an embedding layer, but it keeps breaking at this stage.
My logic is as follows:
Start with a list of text strings as X and a list of integers as y.
Tokenize, vectorize, and pad the text data to the longest sequence length.
Convert each integer label into a one-hot encoded array.
Can someone explain what I am getting wrong here? I thought I understood how to instantiate an Embedding layer, but have I misunderstood how it works?
Here is my code:
import pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Embedding, Dense

# read in raw data
df = pd.read_csv('text_dataset.csv')
samples = df.data.tolist() # list of strings of text
labels = df.sentiment.to_list() # list of integers
# tokenize and vectorize text data to prepare for embedding
tokenizer = Tokenizer()
tokenizer.fit_on_texts(samples)
sequences = tokenizer.texts_to_sequences(samples)
word_index = tokenizer.word_index
print(f'Found {len(word_index)} unique tokens.')
# setting variables
vocab_size = len(word_index) # 1499
# Input_dim: This is the size of the vocabulary in the text data.
input_dim = vocab_size # 1499
# This is the size of the vector space in which words will be embedded.
output_dim = 32 # recommended by tf
# This is the length of input sequences
max_sequence_length = len(max(sequences, key=len)) # 295
# train/test index split variable
training_samples = round(len(samples)*.8)
# data = pad_sequences(sequences, maxlen=max_sequence_length) # shape (499, 295)
# pad_sequences pads to the longest sequence by default when maxlen is not given
data = pad_sequences(sequences)
# preprocess labels into one hot encoded array of 3 classes ([1., 0., 0.])
labels = to_categorical(labels, num_classes=3, dtype='float32') # shape (499, 3)
# Create test/train data (80% train, 20% test)
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_test = data[training_samples:]
y_test = labels[training_samples:]
model = Sequential()
model.add(Embedding(input_dim, output_dim, input_length=max_sequence_length))
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train,
          y_train,
          epochs=10,
          batch_size=32,
          validation_data=(x_test, y_test))
When I run this, I get this error:
Found 1499 unique tokens.
Model: "sequential_23"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_21 (Embedding) (None, 295, 32) 47968
_________________________________________________________________
dense_6 (Dense) (None, 295, 32) 1056
_________________________________________________________________
dense_7 (Dense) (None, 295, 3) 99
=================================================================
Total params: 49,123
Trainable params: 49,123
Non-trainable params: 0
_________________________________________________________________
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-144-f29ef892e38d> in <module>()
51 epochs=10,
52 batch_size=32,
---> 53 validation_data=(x_test, y_test))
2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
129 ': expected ' + names[i] + ' to have ' +
130 str(len(shape)) + ' dimensions, but got array '
--> 131 'with shape ' + str(data_shape))
132 if not check_batch_axis:
133 data_shape = data_shape[1:]
ValueError: Error when checking target: expected dense_7 to have 3 dimensions, but got array with shape (399, 3)
To troubleshoot, I have been commenting out layers to try to see what's going on. I found that the problem persists all the way down to the first layer, which makes me think I have a poor understanding of the Embedding layer. See below:
model = Sequential()
model.add(Embedding(input_dim, output_dim, input_length=max_sequence_length))
# model.add(Dense(32, activation='relu'))
# model.add(Dense(3, activation='softmax'))
model.summary()
Which results in:
Found 1499 unique tokens.
Model: "sequential_24"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_22 (Embedding) (None, 295, 32) 47968
=================================================================
Total params: 47,968
Trainable params: 47,968
Non-trainable params: 0
_________________________________________________________________
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-150-63d1b96db467> in <module>()
51 epochs=10,
52 batch_size=32,
---> 53 validation_data=(x_test, y_test))
2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
129 ': expected ' + names[i] + ' to have ' +
130 str(len(shape)) + ' dimensions, but got array '
--> 131 'with shape ' + str(data_shape))
132 if not check_batch_axis:
133 data_shape = data_shape[1:]
ValueError: Error when checking target: expected embedding_22 to have 3 dimensions, but got array with shape (399, 3)
Upvotes: 1
Views: 1423
Reputation: 9900
A Dense layer in keras is expected to take a flat input with only 2 dimensions: [BATCH_SIZE, N]. The output of an embedding layer for a sentence has 3 dimensions: [BS, SEN_LENGTH, EMBEDDING_SIZE].
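For example, with the shapes from the question (a quick check; this assumes the question's model has already been built and the data split as above):
print(model.output_shape)  # (None, 295, 3) -- 3 dimensions
print(y_train.shape)       # (399, 3)       -- only 2 dimensions, hence the error when checking the target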
There are 2 options to tackle that:
1. add model.add(Flatten()) before the first dense layer;
2. use a Conv1D layer, e.g. model.add(Conv1D(filters=32, kernel_size=8, activation='relu')) (see the sketch below).
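A minimal sketch of the first option, reusing input_dim, output_dim, and max_sequence_length from the question (just an illustration, not a drop-in from the book):

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(input_dim, output_dim, input_length=max_sequence_length))
model.add(Flatten())                       # (None, 295, 32) -> (None, 9440)
model.add(Dense(32, activation='relu'))    # (None, 32)
model.add(Dense(3, activation='softmax'))  # (None, 3), matches the (399, 3) targets

If you go the Conv1D route instead, you still need something like GlobalMaxPooling1D() or Flatten() between the Conv1D and the Dense layers, because the Conv1D output is still 3-dimensional.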
Upvotes: 1