Reputation: 342
I'm using convolution for a neural network. Currently it's implemented like this:
% convolve each colour channel with its kernel slice and sum the results
for f = 1:NumberOfKernels
    tempC = conv2(input(:,:,1), kernels(:,:,1,f), 'same');
    tempM = conv2(input(:,:,2), kernels(:,:,2,f), 'same');
    tempY = conv2(input(:,:,3), kernels(:,:,3,f), 'same');
    preactivation(:,:,f) = tempC + tempM + tempY;
end
Can this be done in a single call, without writing out conv2 for each color channel individually? Could a function from the Image Processing Toolbox speed it up? Take into account that I have no GPU.
Upvotes: 0
Views: 455
Reputation: 2063
You could do the following:
szk = size(kernels);
% zero-pad only the spatial dimensions, so that 'valid' convn reproduces 'same'
temp = zeros(size(input,1)+szk(1)-1, size(input,2)+szk(2)-1, size(input,3));
off = ceil(szk(1:2) / 2);
temp(off(1):off(1)-1+size(input,1), off(2):off(2)-1+size(input,2), :) = input;
for f = 1:NumberOfKernels
    % convn flips the kernel along every dimension, so flip the channel
    % dimension back to match the per-channel conv2 sum
    preactivation(:,:,f) = convn(temp, flip(kernels(:,:,:,f),3), 'valid');
end
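If you want to convince yourself that the padded convn call reproduces the original loop, a self-contained comparison along these lines should agree up to round-off (sizes and data here are made up):

% hypothetical sanity check with made-up sizes; names mirror the snippets above
input   = rand(16, 16, 3);
kernels = rand(5, 5, 3, 4);
NumberOfKernels = size(kernels, 4);

% reference: the original per-channel conv2 loop
ref = zeros(16, 16, NumberOfKernels);
for f = 1:NumberOfKernels
    ref(:,:,f) = conv2(input(:,:,1), kernels(:,:,1,f), 'same') ...
               + conv2(input(:,:,2), kernels(:,:,2,f), 'same') ...
               + conv2(input(:,:,3), kernels(:,:,3,f), 'same');
end

% padded convn version from above
szk = size(kernels);
temp = zeros(size(input,1)+szk(1)-1, size(input,2)+szk(2)-1, size(input,3));
off = ceil(szk(1:2) / 2);
temp(off(1):off(1)-1+size(input,1), off(2):off(2)-1+size(input,2), :) = input;
preactivation = zeros(16, 16, NumberOfKernels);
for f = 1:NumberOfKernels
    preactivation(:,:,f) = convn(temp, flip(kernels(:,:,:,f),3), 'valid');
end

max(abs(ref(:) - preactivation(:)))   % should be on the order of machine precision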
However, I wouldn't expect it to be much faster. What would make things faster is if the kernels were separable.
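For example, if a kernel slice happens to be rank 1, it factors into a column vector times a row vector, and conv2 can apply the two 1-D factors directly. A minimal sketch, assuming a single-channel kernel k and an image channel A (both placeholder names, not from the question):

% hypothetical sketch: apply one separable single-channel kernel k to a channel A
[U, S, V] = svd(k);                    % if rank(k) == 1, then k == U(:,1)*S(1,1)*V(:,1)'
kcol = U(:,1) * S(1,1);                % vertical 1-D factor
krow = V(:,1)';                        % horizontal 1-D factor
out  = conv2(kcol, krow, A, 'same');   % two 1-D passes, roughly (kh+kw) ops per pixel instead of kh*kw

For a 3-channel kernel you would do this per channel and sum, just as in the question's loop.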
Upvotes: 1