Reputation: 323
I have a model that I am trying to get working. I am working through the errors, but now I think it has come down to the values in my layers. I get this error:
RuntimeError: Given groups=1, weight of size 24 1 3 3, expected input[512, 50, 50, 3] to have 1 channels, but got 50 channels instead
My parameters are:
LR = 5e-2
N_EPOCHS = 30
BATCH_SIZE = 512
DROPOUT = 0.5
My image information is:
depth=24
channels=3
original height = 1600
original width = 1200
resized to 50x50
This is the size of my data:
Train shape (743, 50, 50, 3) (743, 7)
Test shape (186, 50, 50, 3) (186, 7)
Train pixels 0 255 188.12228712427097 61.49539262385051
Test pixels 0 255 189.35559211469533 60.688278787628775
I looked at this article to try to understand what values each layer expects (https://towardsdatascience.com/pytorch-layer-dimensions-what-sizes-should-they-be-and-why-4265a41e01fd), but when I put in what it suggests, I get errors about wrong channels and kernels.
I found torchsummary, hoping it would give me a better understanding of the outputs, but it only raises more questions.
This is my torch_summary code:
from torchvision import models
from torchsummary import summary
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 24, kernel_size=5)   # output (n_examples, 16, 26, 26)
        self.convnorm1 = nn.BatchNorm2d(24)            # channels from prev layer
        self.pool1 = nn.MaxPool2d((2, 2))              # output (n_examples, 16, 13, 13)
        self.conv2 = nn.Conv2d(24, 48, kernel_size=5)  # output (n_examples, 32, 11, 11)
        self.convnorm2 = nn.BatchNorm2d(48)            # 2*channels?
        self.pool2 = nn.AvgPool2d((2, 2))              # output (n_examples, 32, 5, 5)
        self.linear1 = nn.Linear(400, 120)             # input will be flattened to (n_examples, 32 * 5 * 5)
        self.linear1_bn = nn.BatchNorm1d(400)          # features?
        self.drop = nn.Dropout(DROPOUT)
        self.linear2 = nn.Linear(400, 10)
        self.act = torch.relu

    def forward(self, x):
        x = self.pool1(self.convnorm1(self.act(self.conv1(x))))
        x = self.pool2(self.convnorm2(self.act(self.conv2(x))))
        x = self.drop(self.linear1_bn(self.act(self.linear1(x.view(len(x), -1)))))
        return self.linear2(x)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = CNN().to(device)
summary(model, (3, 50, 50))
This is what it gave me:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 24 1 5 5, expected input[2, 3, 50, 50] to have 1 channels, but got 3 channels instead
When I run my whole code and unsqueeze_(0) my data, like so: x_train = torch.from_numpy(x_train).unsqueeze_(0)
I get this error:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 24 1 5 5, but got 5-dimensional input of size [1, 743, 50, 50, 3] instead
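I can reproduce where that extra dimension comes from with a dummy array of my train shape (a minimal check, not my real data):

import numpy as np
import torch

x_train = np.zeros((743, 50, 50, 3), dtype=np.float32)  # dummy stand-in with my train shape
t = torch.from_numpy(x_train)
print(t.shape)               # torch.Size([743, 50, 50, 3])
print(t.unsqueeze(0).shape)  # torch.Size([1, 743, 50, 50, 3]) -- the 5-D input from the error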
I don't know how to figure out the proper values for the layers. Will someone please help me find the correct values and understand how to work them out? I do know that the output of one layer should match the input of the next, but nothing is matching up with what I thought I knew. Thanks in advance!
Upvotes: 2
Views: 1964
Reputation: 7693
It seems the axes of your input tensor x are in the wrong order.
As you can see in the Conv2d docs, the input must be (N, C, H, W), where N is the batch size, C is the number of channels, H is the height of the input planes in pixels, and W is the width in pixels.
So, to make it right, use Tensor.permute to swap the axes in the forward pass.
...
def forward(self, x):
    x = x.permute(0, 3, 1, 2)
    ...
    return self.linear2(x)
...
Example of permute:
>>> t = torch.rand(512, 50, 50, 3)
>>> t.size()
torch.Size([512, 50, 50, 3])
>>> t = t.permute(0, 3, 1, 2)
>>> t.size()
torch.Size([512, 3, 50, 50])
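Note that after the permute, conv1 will still fail, because it was declared with in_channels=1 while the permuted input has 3 channels. Here is a minimal sketch of how the layer sizes could then be traced through the network, assuming the 50x50 RGB input and 7-column labels from your question (the hidden width of 120 is a choice, not the only correct value):

import torch
import torch.nn as nn

DROPOUT = 0.5

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 24, kernel_size=5)   # (N, 3, 50, 50) -> (N, 24, 46, 46)
        self.convnorm1 = nn.BatchNorm2d(24)
        self.pool1 = nn.MaxPool2d((2, 2))              # -> (N, 24, 23, 23)
        self.conv2 = nn.Conv2d(24, 48, kernel_size=5)  # -> (N, 48, 19, 19)
        self.convnorm2 = nn.BatchNorm2d(48)
        self.pool2 = nn.AvgPool2d((2, 2))              # -> (N, 48, 9, 9)
        self.linear1 = nn.Linear(48 * 9 * 9, 120)      # flatten: 48 * 9 * 9 = 3888 features
        self.linear1_bn = nn.BatchNorm1d(120)          # BatchNorm1d takes linear1's output size
        self.drop = nn.Dropout(DROPOUT)
        self.linear2 = nn.Linear(120, 7)               # 7 outputs, matching the (743, 7) labels
        self.act = torch.relu

    def forward(self, x):
        x = x.permute(0, 3, 1, 2)  # NHWC -> NCHW, as above
        x = self.pool1(self.convnorm1(self.act(self.conv1(x))))
        x = self.pool2(self.convnorm2(self.act(self.conv2(x))))
        x = self.drop(self.linear1_bn(self.act(self.linear1(x.view(len(x), -1)))))
        return self.linear2(x)

With the permute inside forward, pass the NHWC shape to torchsummary, i.e. summary(model, (50, 50, 3)).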
Upvotes: 1