I am doing column-wise logical indexing of a matrix in MATLAB. An example would be:
tic
N = 5*10^6;
input = randi(100,N,12);
output = zeros(N,1);
d_sn2 = randi(25,N,12);
d_sd2 = randi(25,N,12);
LL1 = randi(8,N,12);
UL1 = randi([12,20],N,12);
LL2 = randi(8,N,12);
UL2 = randi([12,20],N,12);
for p = 1:N
    temp = zeros(12,12);
    for i = 1:12
        % rows of column i that fall strictly inside both (LL1,UL1) and (LL2,UL2) for this p
        I2 = (d_sn2(:,i)>LL1(p,i) & d_sn2(:,i)<UL1(p,i)) & (d_sd2(:,i)>LL2(p,i) & d_sd2(:,i)<UL2(p,i));
        temp(i,:) = mean(input(I2,:));   % column means of the selected rows of input
    end
    output(p) = max(temp(:));
end
toc
I would like to know if I can vectorize this operation or make it any faster?
The computation of I2 in the inner loop can be easily vectorized. This:
temp = zeros(12,12);
for i = 1:12
    I2 = (d_sn2(:,i)>LL1(p,i) & d_sn2(:,i)<UL1(p,i)) & (d_sd2(:,i)>LL2(p,i) & d_sd2(:,i)<UL2(p,i));
    temp(i,:) = mean(input(I2,:));
end
is the same as this:
I2 = d_sn2>LL1(p,:) & d_sn2<UL1(p,:) & d_sd2>LL2(p,:) & d_sd2<UL2(p,:);
temp = zeros(12,12);
for i = 1:12
    temp(i,:) = mean(input(I2(:,i),:));
end
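Putting it together, a sketch of the full p loop with the vectorized mask would look like this (same variable names as in the question; worth benchmarking against the original):
output = zeros(N,1);
for p = 1:N
    % N-by-12 logical mask for this p, one column per i
    I2 = d_sn2>LL1(p,:) & d_sn2<UL1(p,:) & d_sd2>LL2(p,:) & d_sd2<UL2(p,:);
    temp = zeros(12,12);
    for i = 1:12
        temp(i,:) = mean(input(I2(:,i),:));   % column means over the selected rows
    end
    output(p) = max(temp(:));
end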
These comparisons use implicit singleton expansion. If you have a version of MATLAB prior to R2016b, you will need to write each > (gt) and < (lt) comparison using bsxfun: bsxfun(@gt,d_sn2,LL1(p,:)), etc.
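For reference, a sketch of the same I2 line written with bsxfun, equivalent to the implicit-expansion version above:
I2 = bsxfun(@gt,d_sn2,LL1(p,:)) & bsxfun(@lt,d_sn2,UL1(p,:)) & ...
     bsxfun(@gt,d_sd2,LL2(p,:)) & bsxfun(@lt,d_sd2,UL2(p,:));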
Unfortunately, the indexing into input is a lot harder to vectorize. Because a different number of elements of input is accessed in every iteration of i, there is no simple way of creating the matrix temp without loops. The few approaches I tried are all much slower than the loop code.
If you are on a fairly recent version of MATLAB, your code will be quite efficient. MATLAB's interpreter uses a JIT (just-in-time compiler), so loops are not nearly as slow as they used to be. For example, a trivial loop that adds all the elements of a matrix is only 2-3 times slower than calling the function sum; back in the old days it could be maybe 100 times slower (you can check this on your own machine with the timing sketch below). So the benefits of vectorization are not what they used to be. Coupled with the very large arrays you are using, this means that vectorization is going to be a pessimization, since vectorization often implies creating even larger intermediate matrices.
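A minimal timing sketch for the loop-versus-sum comparison mentioned above (the ratio will vary with your MATLAB release and machine):
A = rand(1e7,1);
tic; s = 0; for k = 1:numel(A), s = s + A(k); end; toc   % plain loop
tic; s2 = sum(A); toc                                     % built-in sum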