I am doing multiclass classification with an LSTM model. One sample is 20 frames of data, and each frame has 64 infrared signals, so each 20 × 64 matrix is flattened into a 1 × 1280 vector (one sample). The input layer of the LSTM therefore has 1280 nodes.
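The flattening step described above can be sketched as follows (a minimal example with made-up data; the array shapes and the sample count `N` are assumptions, not from the original dataset):

```python
import numpy as np

# Hypothetical data: N samples, each 20 frames of 64 infrared signals.
N = 8
frames = np.random.rand(N, 20, 64)

# Flatten each 20 x 64 matrix into a single 1280-value vector per sample.
X = frames.reshape(N, 20 * 64)
print(X.shape)  # (8, 1280)
```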
Then I need to build the following LSTM model:
The number of nodes in the hidden layer is 640, each hidden-layer node is connected to a fully connected layer with 100 "backward nodes", and a ReLU activation layer follows the fully connected layer. Finally, a softmax activation function normalizes the data to obtain the output. Additionally, the timesteps of the LSTM are set to 16.
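One hedged reading of that specification in Keras is sketched below: the 1280-value sample is reshaped into 16 timesteps of 80 features each (16 × 80 = 1280), so the timesteps are expressed through `input_shape` rather than a separate parameter. The feature split of 80 per step and the 4 output classes are assumptions (the class count is taken from the attempt further down), not stated requirements:

```python
# Sketch only: assumes 16 timesteps x 80 features per sample and 4 classes.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, features = 16, 80                       # 16 * 80 = 1280 values per sample
model = Sequential([
    LSTM(640, input_shape=(timesteps, features)),  # 640-node hidden layer
    Dense(100, activation='relu'),                 # fully connected layer + ReLU
    Dense(4, activation='softmax'),                # normalized class probabilities
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```

Under this reading, the fully connected layer sits after (not inside) the LSTM block, consuming the LSTM's final hidden state.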
Here is my attempt to build this architecture according to the instructions above:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

embedding_vector_length = 16
model_1 = Sequential()
model_1.add(Embedding(len(X_train), embedding_vector_length, input_length=1280))
model_1.add(LSTM(640))
model_1.add(Dense(100, activation='relu'))
model_1.add(Dense(4, activation='softmax'))
model_1.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model_1.summary())
model_1.fit(X_train, y_train, epochs=10, batch_size=10)
I am very confused by the relationship between the LSTM hidden layer and the fully connected layer. According to these instructions, should my fully connected layer be inside the LSTM block? And what are "backward nodes"? Also, where do we indicate the timesteps of the LSTM? Could somebody explain, please? Thank you!