Tonz

Reputation: 187

PyTorch CNN: RuntimeError: Given groups=1, weight of size [16, 16, 3], expected input[500, 1, 19357] to have 16 channels, but got 1 channels instead

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvolutionalNetwork(nn.Module):
    def __init__(self, in_features, trial):
        super().__init__()
        self.in_features = in_features
        self.trial = trial
        # this computes num features outputted from the two conv layers
        c1 = int(((self.in_features - 2)) / 64)  # this is to account for the loss due to conversion to int type
        c2 = int((c1 - 2) / 64)
        self.n_conv = int(c2 * 16)
        # self.n_conv = int((( ( (self.in_features - 2)/4 ) - 2 )/4 ) * 16)
        self.conv1 = nn.Conv1d(16, 16, 3, 1)
        self.conv1_bn = nn.BatchNorm1d(16)
        self.conv2 = nn.Conv1d(16, 16, 3, 1)
        self.conv2_bn = nn.BatchNorm1d(16)
        # self.dp = nn.Dropout(trial.suggest_uniform('dropout_rate',0,1.0))
        self.dp = nn.Dropout(0.5)
        self.fc3 = nn.Linear(self.n_conv, 2)

    def forward(self, x):
        # shape x for conv 1d op
        x = x.view(-1, 1, self.in_features)
        x = self.conv1(x)
        x = F.tanh(x)
        x = F.max_pool1d(x, 64, 64)
        x = self.conv2(x)
        x = F.tanh(x)
        x = F.max_pool1d(x, 64, 64)
        x = x.view(-1, self.n_conv)

        x = self.dp(x)
        x = self.fc3(x)
        x = F.log_softmax(x, dim=1)

        return x

I ran the code above and this error popped up:

RuntimeError: Given groups=1, weight of size [16, 16, 3], expected input[500, 1, 19357] to have 16 channels, but got 1 channels instead.

Can anyone advise on this? It says there is a discrepancy in the input, but the code above worked fine earlier; I'm not sure what happened after I rearranged it.

Upvotes: 2

Views: 2022

Answers (1)

Guillem

Reputation: 2647

Well, just after entering the forward method you reshape your input so that it has only a single channel:

x = x.view(-1, 1, self.in_features)

At the same time, in the model constructor you specify that conv1 takes 16 input channels:

self.conv1 = nn.Conv1d(16, 16, 3, 1)

Hence the error: 16 channels were expected but only 1 was received.

There are two things to note here:

  • If you are used to TensorFlow, you may expect channels to be the last dimension, but in PyTorch the channel dimension comes right after the batch dimension: Conv1d expects input of shape (batch, channels, length). Take a look at the Conv1d documentation in torch and keep this in mind when reshaping your data.
  • Conv1d is agnostic to the length of your input (I mention this in case in_features represents the sequence length).
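To illustrate the channels-first convention, here is a small sketch (the shapes mirror the ones in your error message) showing that Conv1d reads dimension 1 as channels, not the last dimension:

```python
import torch
import torch.nn as nn

# PyTorch Conv1d expects input shaped (batch, channels, length) -- channels first.
conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3)

x = torch.randn(500, 1, 19357)  # 500 samples, 1 channel, length 19357
out = conv(x)
print(out.shape)                # torch.Size([500, 16, 19355])
```

With in_channels=1 the layer accepts the single-channel reshape, and the output length shrinks by kernel_size - 1 = 2 since no padding is used.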

I cannot give you a concrete solution since I am not sure what you are trying to do.
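That said, if the intent is to treat each sample as a single-channel sequence of length in_features (which the x.view(-1, 1, self.in_features) reshape suggests), one possible fix is to give conv1 a single input channel. This is only a sketch under that assumption; it drops the unused BatchNorm layers and the trial argument for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvolutionalNetwork(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.in_features = in_features
        # sequence length after each conv (kernel 3, no padding) + max-pool (64, stride 64)
        c1 = (in_features - 2) // 64
        c2 = (c1 - 2) // 64
        self.n_conv = c2 * 16
        # in_channels=1 now matches the single-channel reshape in forward()
        self.conv1 = nn.Conv1d(1, 16, 3, 1)
        self.conv2 = nn.Conv1d(16, 16, 3, 1)
        self.dp = nn.Dropout(0.5)
        self.fc3 = nn.Linear(self.n_conv, 2)

    def forward(self, x):
        x = x.view(-1, 1, self.in_features)           # (batch, 1, length)
        x = F.max_pool1d(torch.tanh(self.conv1(x)), 64, 64)
        x = F.max_pool1d(torch.tanh(self.conv2(x)), 64, 64)
        x = x.view(-1, self.n_conv)                   # flatten for the linear layer
        x = self.dp(x)
        return F.log_softmax(self.fc3(x), dim=1)

model = ConvolutionalNetwork(19357)
out = model(torch.randn(500, 19357))
print(out.shape)  # torch.Size([500, 2])
```

If instead your data really does have 16 channels, the fix would be the opposite: keep conv1 as is and reshape with x.view(-1, 16, length) rather than forcing a single channel.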

Upvotes: 2
