Reputation: 219
I am really new to PyTorch, and I have been writing convolution code myself.
To apply a convolution to input data, I use conv2d.
In the documentation, the constructor is
torch.nn.Conv2d(in_channels, out_channels, kernel_size, ...)
But where is the filter? To convolve, we need to apply a kernel to the input data, yet the constructor only takes a kernel size, not the elements of the kernel.
For example, with a 5x5 input and a 2x2 kernel whose 4 elements are all 1, I can produce a 4x4 output. So where can I put the elements of the kernel?
Upvotes: 0
Views: 5753
Reputation: 8537
The filter weights can be accessed through the weight parameter of the Conv2d object. For example,
>>> import torch
>>> c = torch.nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3)
>>> c.weight
Parameter containing:
tensor([[[[ 0.2156,  0.0930, -0.2319],
          [ 0.1333, -0.0846,  0.1848],
          [ 0.0765, -0.1799, -0.1273]],

         [[ 0.1173,  0.1650, -0.0876],
          [-0.1353,  0.0616, -0.1136],
          [-0.2326, -0.1509,  0.0651]]],


        [[[-0.2026,  0.2210,  0.0409],
          [-0.0818,  0.0793,  0.1074],
          [-0.1430, -0.0118, -0.2100]],

         [[-0.2025, -0.0508, -0.1731],
          [ 0.0217, -0.1616,  0.0702],
          [ 0.1903, -0.1864,  0.1523]]]], requires_grad=True)
The weights are initialized by default by sampling from a uniform distribution. You can also initialize weights using various weight initialization schemes.
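For instance, a quick sketch of re-initializing the same c layer with the in-place helpers from torch.nn.init (the choice of Xavier initialization here is just an illustration):
>>> torch.nn.init.xavier_uniform_(c.weight)  # re-initialize the filters in place
>>> torch.nn.init.zeros_(c.bias)             # zero out the bias terms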
If you want to change the weights manually, you can modify the weight parameter directly. For example, to set all the weights to 1, use
>>> c.weight.data = torch.ones_like(c.weight)
>>> c.weight
Parameter containing:
tensor([[[[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]],

         [[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]]],


        [[[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]],

         [[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]]]], requires_grad=True)
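As a sketch of an equivalent approach that avoids assigning to .data, the new values can be copied in place under torch.no_grad():
>>> with torch.no_grad():
...     c.weight.copy_(torch.ones_like(c.weight))  # overwrite the filters without tracking gradients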
Note that during training, the convolutional layers are typically part of the computational graph: calling backward() computes the gradients of the loss with respect to their weights, and the weights are then updated when the optimizer's step() is called.
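A minimal sketch of one such training step, with a made-up input x, target y, and loss chosen purely for illustration:
>>> x = torch.randn(1, 2, 5, 5)                    # fake batch: 1 sample, 2 channels, 5x5
>>> y = torch.randn(1, 2, 3, 3)                    # fake target matching the conv output shape
>>> opt = torch.optim.SGD(c.parameters(), lr=0.1)
>>> loss = torch.nn.functional.mse_loss(c(x), y)
>>> loss.backward()                                # computes gradients for c.weight and c.bias
>>> opt.step()                                     # updates the weights using those gradients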
Upvotes: 2
Reputation: 3573
You can use the functional form, torch.nn.functional.conv2d, which takes an explicit tensor of filters (as the weight argument).
The nn.Conv2d layer relies on this same operation, but handles the learning of the filters/weights automatically, which is generally more convenient.
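For example, a sketch of the asker's case with a 5x5 input and a 2x2 kernel of ones (single channel; the input values here are arbitrary):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)  # (batch, channels, H, W)
>>> w = torch.ones(1, 1, 2, 2)                                     # (out_ch, in_ch, kH, kW)
>>> F.conv2d(x, w).shape
torch.Size([1, 1, 4, 4])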
Upvotes: 2