Reputation: 43491
I understand Conv1d
strides in one dimension. But my input is of shape [64, 20, 161]
, where 64 is the batches, 20 is the sequence length and 161 is the dimension of my vector.
I'm not sure how to set up my Conv1d
to stride over the vector.
I'm trying:
self.conv1 = torch.nn.Conv1d(batch_size, 20, 161, stride=1)
but getting:
RuntimeError: Given groups=1, weight of size 20 64 161, expected input[64, 20, 161] to have 64 channels, but got 20 channels instead
Upvotes: 1
Views: 2252
Reputation: 3775
According to the documentation:
torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
in_channels is the number of channels in your input. "Channels" is usually a computer-vision term; in your case this number is 20. out_channels is the number of channels in your output, and depends on how much output you want.
For 1D convolution, you can think of the channel counts as the "number of input vectors" and "number of output feature vectors". The size (not number) of the output feature vectors is determined by the other parameters: kernel_size, stride, padding, and dilation.
An example usage:
import torch

t = torch.randn(64, 20, 161)
conv = torch.nn.Conv1d(20, 100, kernel_size=3)  # kernel_size is required
out = conv(t)  # shape: [64, 100, 159]
Note: You never specify the batch size in torch.nn modules; the first dimension is always assumed to be the batch size.
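Side note, based on my reading of the question (this is an assumption about your intent): you describe 20 as the sequence length and 161 as the vector dimension, and Conv1d convolves along the last dimension with the second dimension as channels. If you want to stride along the 20-step sequence with the 161-dim vectors as channels, you can transpose the last two dimensions first. kernel_size=5 and out_channels=64 below are arbitrary illustrative choices:

```python
import torch

# Input from the question: [batch=64, seq_len=20, features=161]
t = torch.randn(64, 20, 161)

# Swap the last two dims so the 161 features act as channels:
# Conv1d then slides its kernel along the 20-step sequence.
t_seq = t.transpose(1, 2)  # -> [64, 161, 20]

conv = torch.nn.Conv1d(in_channels=161, out_channels=64, kernel_size=5)
out = conv(t_seq)
print(out.shape)  # torch.Size([64, 64, 16]); length = 20 - 5 + 1
```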
Upvotes: 2