Reputation: 487
Say we have a single-channel image (5x5):
A = [ 1 2 3 4 5
6 7 8 9 2
1 4 5 6 3
4 5 6 7 4
3 4 5 6 2 ]
And a filter K (2x2)
K = [ 1 1
1 1 ]
An example of applying convolution (let us take the first 2x2 from A) would be
1*1 + 2*1 + 6*1 + 7*1 = 16
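For reference, here is a minimal NumPy sketch of this single-channel case (note that CNN "convolution" is really cross-correlation, i.e., the kernel is not flipped):
import numpy as np

A = np.array([[1, 2, 3, 4, 5],
              [6, 7, 8, 9, 2],
              [1, 4, 5, 6, 3],
              [4, 5, 6, 7, 4],
              [3, 4, 5, 6, 2]])
K = np.ones((2, 2))

# Slide the 2x2 kernel over A; "valid" output is (5-2+1) x (5-2+1) = 4x4
out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(A[i:i+2, j:j+2] * K)

print(out[0, 0])  # 16.0, matching the hand calculation above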
This is very straightforward. But let us introduce a depth factor to matrix A, i.e., an RGB image with 3 channels, or even conv layers in a deep network (with depth = 512, maybe). How would the convolution operation be done with the same filter? A similar worked example for the RGB case would be really helpful.
Upvotes: 31
Views: 32445
Reputation: 1255
In a convolutional neural network, the convolution operation is implemented as follows (note: convolution as used in blur/filter operations is a separate notion):
For RGB-like inputs, the filter is actually 2x2x3; each slice of the filter corresponds to one color channel, producing three filter responses. These three add up to one value, followed by the bias and the activation; finally, this gives one pixel in the output map.
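As a rough NumPy sketch of that description (random values stand in for a real RGB image, and the bias and ReLU activation are illustrative assumptions):
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(5, 5, 3)).astype(np.float64)  # H x W x 3
K = np.ones((2, 2, 3))  # a 2x2x3 filter: one 2x2 slice per color channel
bias = 0.0

out = np.zeros((4, 4))  # one single-channel output map per filter
for i in range(4):
    for j in range(4):
        # three per-channel responses...
        responses = [np.sum(img[i:i+2, j:j+2, c] * K[:, :, c]) for c in range(3)]
        # ...summed to one value, followed by bias and activation (ReLU)
        out[i, j] = max(sum(responses) + bias, 0.0)

print(out.shape)  # (4, 4): the depth collapses to 1, regardless of input channels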
Upvotes: 13
Reputation: 468
If you're trying to implement a Conv2d on an RGB image, this PyTorch implementation should help.
Grab an image and make it a NumPy ndarray of uint8 (note that imshow needs uint8 values between 0-255, whereas floats should be between 0-1):
import requests
from io import BytesIO
import numpy as np
from PIL import Image
link = 'https://oldmooresalmanac.com/wp-content/uploads/2017/11/cow-2896329_960_720-Copy-476x459.jpg'
r = requests.get(link, timeout=7)
im = Image.open(BytesIO(r.content))
pic = np.array(im)
You can view it with
import matplotlib.pyplot as plt
f, axarr = plt.subplots()
axarr.imshow(pic)
plt.show()
Create your convolution layer (it is initialized with random weights):
import torch
import torch.nn as nn
conv_layer = nn.Conv2d(in_channels=3, out_channels=3,
                       kernel_size=3, stride=1, bias=False)
Convert the input image to float and add an empty batch dimension, because that is the shape PyTorch expects:
pic_float = np.float32(pic)
pic_float = np.expand_dims(pic_float,axis=0)
Run the image through the convolution layer (permute reorders the dimensions from NHWC to NCHW, the layout PyTorch expects):
out = conv_layer(torch.tensor(pic_float).permute(0,3,1,2))
Remove the extra first dim we added (not needed for visualization), detach from the computation graph, and convert to a NumPy ndarray:
out = out.permute(0,2,3,1).detach().numpy()[0, :, :, :]
Visualise the output (with a cast back to uint8, which is what we started with):
f, axarr = plt.subplots()
axarr.imshow(np.uint8(out))
plt.show()
You can then change the weights of the filters by accessing them. For example:
kernel = torch.Tensor([[[[0.01, 0.02, 0.01],
                         [0.02, 0.04, 0.02],
                         [0.01, 0.02, 0.01]]]])
kernel = kernel.repeat(3, 3, 1, 1)  # tile the 1x1x3x3 kernel to (3, 3, 3, 3) to match conv_layer.weight
conv_layer.weight.data = kernel
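Note that assigning through .weight.data works but bypasses autograd bookkeeping; a sketch of the more conventional way to do the same thing, assuming the conv_layer and kernel from above:
with torch.no_grad():
    conv_layer.weight.copy_(kernel)  # in-place copy; shapes must match: (3, 3, 3, 3)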
Upvotes: 1
Reputation: 2197
Let's say we have a 3-channel (RGB) image given by some matrix A:
A = [[[198 218 227] [196 216 225] [196 214 224] ... ... [185 201 217] [176 192 208] [162 178 194]]
and a blur kernel as
K = [[0.1111, 0.1111, 0.1111], [0.1111, 0.1111, 0.1111], [0.1111, 0.1111, 0.1111]]  # 0.1111 ~= 1/9
The convolution then proceeds channel by channel, as sketched below: each channel is individually convolved with K, and the three responses are combined (summed) to form each output pixel.
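For completeness, a hedged PyTorch sketch of that picture (a random image stands in for A; F.conv2d convolves each channel with its kernel slice and sums the results):
import torch
import torch.nn.functional as F

A = torch.rand(1, 3, 6, 6)               # batch, channels, H, W
K = torch.full((1, 3, 3, 3), 1.0 / 9.0)  # one output map, a 3x3 blur slice per input channel

out = F.conv2d(A, K)  # each channel convolved, then summed: shape (1, 1, 4, 4)
print(out.shape)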
Upvotes: 21
Reputation: 12867
It will be just the same as with a single-channel image, except that you will get three matrices instead of one (one per channel). This is a lecture note about CNN fundamentals, which I think might be helpful for you.
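For illustration, a minimal PyTorch sketch of getting those three per-channel matrices (the groups=3 argument is an assumption here; it convolves each channel independently instead of summing them):
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 5, 5)              # batch, channels, H, W
K = torch.ones(3, 1, 2, 2)                # one 2x2 kernel per channel

per_channel = F.conv2d(img, K, groups=3)  # (1, 3, 4, 4): three output matrices
combined = per_channel.sum(dim=1)         # (1, 4, 4): the usual summed CNN output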
Upvotes: 17