How to implement an LSTM layer with multiple cells in PyTorch?

I intend to implement an LSTM with 2 layers and 256 cells in each layer, and I am trying to understand how to do this with PyTorch's LSTM module. The parameters of torch.nn.LSTM that I can set are input_size, hidden_size, num_layers, bias, batch_first, dropout and bidirectional.

However, how do I have multiple cells in a single layer?


Answers (1)

Sung Kim

In torch.nn.LSTM, hidden_size is the number of cells (units) in each layer, and those cells are unrolled automatically over the sequence length of your input. Please check out this code:

import torch
import torch.nn as nn

# One-hot encodings for the characters h, e, l, o (input_dim = 4)
h = [1, 0, 0, 0]
e = [0, 1, 0, 0]
l = [0, 0, 1, 0]
o = [0, 0, 0, 1]

# One cell RNN input_dim (4) -> output_dim (2). sequence: 5, batch 3
cell = nn.RNN(input_size=4, hidden_size=2, batch_first=True)

# 3 batches 'hello', 'eolll', 'lleel'
# shape = (3, 5, 4)
inputs = torch.Tensor([[h, e, l, l, o],
                       [e, o, l, l, l],
                       [l, l, e, e, l]])
print("input size", inputs.size())  # input size torch.Size([3, 5, 4])

# Initial hidden state: (num_layers, batch, hidden_size)
hidden = torch.zeros(1, 3, 2)

# Propagate input through RNN
# Input: (batch, seq_len, input_size) when batch_first=True
out, hidden = cell(inputs, hidden)
print("out size", out.size())  # out size torch.Size([3, 5, 2])
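For the exact configuration you describe (2 layers, 256 cells per layer), a minimal sketch might look like the following; the input_size of 10 and the batch/sequence dimensions are arbitrary assumptions for illustration:

```python
import torch
import torch.nn as nn

# 2 stacked layers, each with 256 cells; input_size=10 is an assumed feature count
lstm = nn.LSTM(input_size=10, hidden_size=256, num_layers=2, batch_first=True)

x = torch.randn(3, 5, 10)        # (batch, seq_len, input_size)
out, (h_n, c_n) = lstm(x)

print(out.shape)   # torch.Size([3, 5, 256]) - top layer's 256 cells at each step
print(h_n.shape)   # torch.Size([2, 3, 256]) - final hidden state for each of the 2 layers
```

Note that out only exposes the top layer's outputs, while h_n and c_n hold the final hidden and cell states for both layers.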

You can find more examples at https://github.com/hunkim/PyTorchZeroToAll/.

