dearn44

Reputation: 3422

Input dimension error on PyTorch's forward check

I am creating an RNN with PyTorch; it looks like this:

import torch
import torch.nn as nn

class MyRNN(nn.Module):
    def __init__(self, batch_size, n_inputs, n_neurons, n_outputs):
        super(MyRNN, self).__init__()

        self.n_neurons = n_neurons
        self.batch_size = batch_size
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs

        self.basic_rnn = nn.RNN(self.n_inputs, self.n_neurons)

        self.FC = nn.Linear(self.n_neurons, self.n_outputs)

    def init_hidden(self):
        # (num_layers, batch_size, n_neurons)
        return torch.zeros(1, self.batch_size, self.n_neurons)

    def forward(self, X):
        self.batch_size = X.size(0)
        self.hidden = self.init_hidden()

        lstm_out, self.hidden = self.basic_rnn(X, self.hidden)
        out = self.FC(self.hidden)

        return out.view(-1, self.n_outputs)

My input x looks like this:

tensor([[-1.0173e-04, -1.5003e-04, -1.0218e-04, -7.4541e-05, -2.2869e-05,
         -7.7171e-02, -4.4630e-03, -5.0750e-05, -1.7911e-04, -2.8082e-04,
         -9.2992e-06, -1.5608e-05, -3.5471e-05, -4.9127e-05, -3.2883e-01],
        [-1.1193e-04, -1.6928e-04, -1.0218e-04, -7.4541e-05, -2.2869e-05,
         -7.7171e-02, -4.4630e-03, -5.0750e-05, -1.7911e-04, -2.8082e-04,
         -9.2992e-06, -1.5608e-05, -3.5471e-05, -4.9127e-05, -3.2883e-01],
        ...

        [-6.9490e-05, -8.9197e-05, -1.0218e-04, -7.4541e-05, -2.2869e-05,
         -7.7171e-02, -4.4630e-03, -5.0750e-05, -1.7911e-04, -2.8082e-04,
         -9.2992e-06, -1.5608e-05, -3.5471e-05, -4.9127e-05, -3.2883e-01]],
       dtype=torch.float64)

and is a batch of 64 vectors with size 15.

When I try to test this model with:

BATCH_SIZE = 64
N_INPUTS = 15
N_NEURONS = 150
N_OUTPUTS = 1
model = MyRNN(BATCH_SIZE, N_INPUTS, N_NEURONS, N_OUTPUTS)
model(x)

I get the following error:

File "/home/tt/anaconda3/envs/venv/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 126, in check_forward_args
    expected_input_dim, input.dim()))
RuntimeError: input must have 3 dimensions, got 2

How can I fix it?

Upvotes: 2

Views: 1326

Answers (2)

Coolness

Reputation: 1992

See the documentation; the RNN layer expects

input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence.

In your case, each of your vectors of size 15 holds 15 features for a single timestep, so the sequence length is 1:

# 15 features, 150 neurons
rnn = nn.RNN(15, 150)
# sequence of length 1, batch size 64, 15 features
x = torch.rand(1, 64, 15)
res, _ = rnn(x)
print(res.shape)
# => torch.Size([1, 64, 150])

Also note that you don't need to prespecify the batch size.
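As a minimal sketch of that point (assuming the default sequence-first layout): if you don't pass an initial hidden state, PyTorch defaults it to zeros and infers the batch size from the input tensor, so nothing batch-related needs to be stored on the module.

import torch
import torch.nn as nn

rnn = nn.RNN(15, 150)
x = torch.rand(1, 64, 15)    # (seq_len=1, batch=64, input_size=15)

# With no hidden state passed, h_0 defaults to zeros and the batch
# size is read off the input tensor:
res, hidden = rnn(x)
print(res.shape)      # => torch.Size([1, 64, 150])
print(hidden.shape)   # => torch.Size([1, 64, 150])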

Upvotes: 0

Erik Z

Reputation: 412

You are missing one of the required dimensions for the RNN layer.

Per the documentation, your input size needs to be of shape (sequence length, batch, input size).

So, with the example above, you are missing one of these. Based on your variable names, it appears you are trying to pass 64 examples of 15 inputs each; if that's true, you are missing the sequence length.

With an RNN, the sequence length is the number of times you want the layer to recur. For example, in NLP your sequence length might be equal to the number of words in a sentence, while batch size would be the number of sentences you are passing, and input size would be the vector size of each word.
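To make those shapes concrete, here is a small sketch of that NLP scenario (the sentence length, batch size, and embedding size below are made up for illustration):

import torch
import torch.nn as nn

# Hypothetical shapes: sentences of 5 words, a batch of 3 sentences,
# and a 10-dimensional embedding vector per word.
rnn = nn.RNN(input_size=10, hidden_size=20)
sentences = torch.rand(5, 3, 10)   # (seq_len, batch, input_size)
out, h_n = rnn(sentences)
print(out.shape)   # => torch.Size([5, 3, 20]), one output per timestep
print(h_n.shape)   # => torch.Size([1, 3, 20]), final hidden state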

You might not need an RNN here if you are just trying to use 64 samples of size 15.
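If you do want to keep the RNN, one possible fix (a sketch, not the only option) is to add the missing sequence dimension and match the model's default float32 weights. Note that `forward` would then need to read the batch size from dimension 1, since the input layout becomes (seq_len, batch, input_size):

# Sketch: treat each sample as a sequence of length 1.
# Assumes MyRNN.forward is changed to use self.batch_size = X.size(1),
# since dim 0 is now the sequence length, not the batch.
x = x.float()        # model weights are float32 by default; x was float64
x = x.unsqueeze(0)   # (64, 15) -> (1, 64, 15): (seq_len, batch, input_size)
out = model(x)
print(out.shape)     # => torch.Size([64, 1])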

Upvotes: 3
