These are two neural networks that I tried to merge using a concatenate operation. The network should classify IMDB movie reviews as 1 (good) or 0 (bad):
def cnn_lstm_merged():
    embedding_vecor_length = 32

    cnn_model = Sequential()
    cnn_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
    cnn_model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
    cnn_model.add(MaxPooling1D(pool_size=2))
    cnn_model.add(Flatten())

    lstm_model = Sequential()
    lstm_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
    lstm_model.add(LSTM(64, activation='relu'))
    lstm_model.add(Flatten())

    merge = concatenate([lstm_model, cnn_model])
    hidden = Dense(1, activation='sigmoid')(merge)
    #print(model.summary())
    output = hidden.fit(X_train, y_train, epochs=3, batch_size=64)
    return output
But when I run the code, I get this error:
File "/home/pythonist/Desktop/EnsemblingLSTM_CONV/train.py", line 59, in cnn_lstm_merged
lstm_model.add(Flatten())
File "/home/pythonist/deeplearningenv/lib/python3.6/site-packages/keras/engine/sequential.py", line 185, in add
output_tensor = layer(self.outputs[0])
File "/home/pythonist/deeplearningenv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 414, in __call__
self.assert_input_compatibility(inputs)
File "/home/pythonist/deeplearningenv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 327, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer flatten_2: expected min_ndim=3, found ndim=2
[Finished in 4.8s with exit code 1]
How do I merge these two layers? Thank you.
There is no need to use Flatten after the LSTM, as the LSTM (by default) only returns the last state and not a sequence, i.e. the data will have the shape (BS, n_output). The Flatten layer, however, expects a shape of (BS, a, b), which it transforms into (BS, a*b).

So either remove the Flatten layer and work just with the last state, or add return_sequences=True to the LSTM. This makes the LSTM return all outputs and not just the last one, i.e. (BS, T, n_out).
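A quick way to see the difference is to print the output shapes. This is a minimal sketch; the vocabulary size (5000) and sequence length (500) are placeholder values, not taken from your code:

from keras.models import Sequential
from keras.layers import Embedding, LSTM

m1 = Sequential()
m1.add(Embedding(5000, 32, input_length=500))
m1.add(LSTM(64))                         # last state only
print(m1.output_shape)                   # (None, 64) -> 2D, so Flatten raises the error above

m2 = Sequential()
m2.add(Embedding(5000, 32, input_length=500))
m2.add(LSTM(64, return_sequences=True))  # full sequence
print(m2.output_shape)                   # (None, 500, 64) -> 3D, so Flatten works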
Edit: Also, the way you created the final model is wrong. Please take a look at this example; for you, it should be something like this:
merge = Concatenate([lstm_model, cnn_model])
hidden = Dense(1, activation='sigmoid')

conc_model = Sequential()
conc_model.add(merge)
conc_model.add(hidden)
conc_model.compile(...)

output = conc_model.fit(X_train, y_train, epochs=3, batch_size=64)
All in all, it might be better to use the Functional API.
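For reference, a minimal sketch of what this could look like with the Functional API. It reuses the names from your code (top_words, embedding_vecor_length, max_review_length, X_train, y_train); the loss/optimizer choice is just a common default for binary classification, not something your setup prescribes:

from keras.models import Model
from keras.layers import Input, Embedding, Conv1D, MaxPooling1D, Flatten, LSTM, Dense, concatenate

inp = Input(shape=(max_review_length,))

# CNN branch
cnn = Embedding(top_words, embedding_vecor_length)(inp)
cnn = Conv1D(filters=32, kernel_size=3, padding='same', activation='relu')(cnn)
cnn = MaxPooling1D(pool_size=2)(cnn)
cnn = Flatten()(cnn)

# LSTM branch (last state only, already 2D, so no Flatten needed)
lstm = Embedding(top_words, embedding_vecor_length)(inp)
lstm = LSTM(64)(lstm)

merged = concatenate([lstm, cnn])
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=inp, outputs=out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=3, batch_size=64)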
Edit 2: This is the final code
cnn_model = Sequential()
cnn_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
cnn_model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
cnn_model.add(MaxPooling1D(pool_size=2))
cnn_model.add(Flatten())

lstm_model = Sequential()
lstm_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
lstm_model.add(LSTM(64, activation='relu', return_sequences=True))
lstm_model.add(Flatten())
# Instead of the last two lines you can also use
# lstm_model.add(LSTM(64, activation='relu'))
# and then you do not need the Flatten layer; it depends on your actual needs.

merge = Concatenate([lstm_model, cnn_model])
hidden = Dense(1, activation='sigmoid')

conc_model = Sequential()
conc_model.add(merge)
conc_model.add(hidden)