fabian789

Reputation: 8412

Given a List of Matrices, Sum Them but Ignore Some Matrices' Rows

As an example, let

A = np.array([[a1, a2],
              [a3, a4]])
B = np.array([[b1, b2],
              [b3, b4]])
C = np.array([[c1, c2],
              [c3, c4]]),

let l = [A, B, C], and let

I = np.array([[1, 0, 1],
              [0, 1, 1]]).

I want to calculate the Matrix R

R = np.array([[a1 + c1, a2 + c2],
              [b3 + c3, b4 + c4]])

I.e., in general, I have a list l of k n×m matrices and an index matrix I of shape n×k that specifies, for each row of the result R, which of the k matrices in l contribute to that row. Above, the first row of I is [1, 0, 1], so A and C are summed to form the first row of R.

Which numpy function can I use for this?
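For reference, the operation can be pinned down with a plain loop (a sketch to make the spec concrete; the helper name is mine, not NumPy's):

```python
import numpy as np

def rowwise_masked_sum(l, I):
    """R[i] = sum of M[i] over all matrices M = l[j] with I[i, j] == 1."""
    n, m = l[0].shape
    R = np.zeros((n, m), dtype=l[0].dtype)
    for i in range(n):              # each row of the result
        for j, M in enumerate(l):   # each candidate matrix
            if I[i, j]:
                R[i] += M[i]
    return R
```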

Upvotes: 2

Views: 202

Answers (3)

Divakar

Reputation: 221594

Assuming l is the input list of (n,m) shaped arrays, you can use the very efficient np.einsum, like so -

np.einsum('ij,jik->ik',I,l)

Sample run -

In [75]: # Inputs
    ...: a1,a2,a3,a4 = 4,6,8,2
    ...: b1,b2,b3,b4 = 5,11,4,3
    ...: c1,c2,c3,c4 = 6,99,2,5
    ...: 
    ...: A = np.array([[a1, a2],
    ...:               [a3, a4]])
    ...: B = np.array([[b1, b2],
    ...:               [b3, b4]])
    ...: C = np.array([[c1, c2],
    ...:               [c3, c4]])
    ...: l = [A,B,C]
    ...: 
    ...: I = np.array([[1, 0, 1],
    ...:               [0, 1, 1]])
    ...: # Expected output
    ...: R = np.array([[a1 + c1, a2 + c2],
    ...:               [b3 + c3, b4 + c4]])
    ...: 

In [76]: R
Out[76]: 
array([[ 10, 105],
       [  6,   8]])

In [77]: np.einsum('ij,jik->ik',I,l)
Out[77]: 
array([[ 10, 105],
       [  6,   8]])
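For readers less familiar with einsum notation: 'ij,jik->ik' computes R[i,k] = sum over j of I[i,j] * l[j][i,k]. The same result can be written with plain broadcasting (an equivalent sketch, not part of the original answer):

```python
import numpy as np

A = np.array([[4, 6], [8, 2]])
B = np.array([[5, 11], [4, 3]])
C = np.array([[6, 99], [2, 5]])
l = [A, B, C]
I = np.array([[1, 0, 1], [0, 1, 1]])

arr = np.asarray(l)                      # shape (k, n, m)
R = (I.T[:, :, None] * arr).sum(axis=0)  # weight each matrix row, sum over k
```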

Upvotes: 1

Avish

Reputation: 4626

Assuming:

terms = np.array([A, B, C])  # A,B,C are 2x2 matrices
coeffs = np.array([[1,0,1], [0,1,1]])

You can do:

# transpose terms to a 2x2 grid of [A,B,C] element vectors,
# with shape (2,2,3):
transposed_terms = np.transpose(terms, (1,2,0))
# tile coefficients to align:
tiled_coeffs = np.tile(coeffs, 2).reshape(2,2,3)
# element-wise multiplication and sum over the last dimension
return np.sum(transposed_terms * tiled_coeffs, 2)

There's probably a simpler way to do this using something like numpy.tensordot, but my numpy-fu is not strong enough for this.
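Putting the steps above together with the example values from the question (a runnable sketch of this approach):

```python
import numpy as np

A = np.array([[4, 6], [8, 2]])
B = np.array([[5, 11], [4, 3]])
C = np.array([[6, 99], [2, 5]])
terms = np.array([A, B, C])                # shape (3, 2, 2)
coeffs = np.array([[1, 0, 1], [0, 1, 1]])  # shape (2, 3)

# (2, 2, 3): entry [i, j] holds [A[i,j], B[i,j], C[i,j]]
transposed_terms = np.transpose(terms, (1, 2, 0))
# (2, 2, 3): entry [i, j] holds coeffs[i], repeated for every column j
tiled_coeffs = np.tile(coeffs, 2).reshape(2, 2, 3)
# weight and sum over the last axis
R = np.sum(transposed_terms * tiled_coeffs, 2)
```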

Upvotes: 0

user2285236

Reputation:

I agree with @Avish that this is very much like a matrix multiplication, but in the end you just want the diagonal entries (treating l as an array, l = np.array([A, B, C])):

np.diagonal(l.T.dot(I.T).T).T

How it works:

l.T will give [[[a1, b1, c1], [a3, b3, c3]], [[a2, b2, c2], [a4, b4, c4]]]

l.T.dot(I.T) will give [[[a1+c1, b1+c1], [a3+c3, b3+c3]], [[a2+c2, b2+c2], [a4+c4, b4+c4]]]

l.T.dot(I.T).T is [[[a1+c1, a2+c2], [a3+c3, a4+c4]], [[b1+c1, b2+c2], [b3+c3, b4+c4]]]

And the diagonal entries (np.diagonal(l.T.dot(I.T).T)) are [[a1+c1, b3+c3], [a2+c2, b4+c4]]

Finally, its transpose is: [[a1+c1, a2+c2], [b3+c3, b4+c4]]
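End to end, with the example values from the question (a runnable sketch; note that l must be stacked into an ndarray, since a plain Python list has no .T):

```python
import numpy as np

A = np.array([[4, 6], [8, 2]])
B = np.array([[5, 11], [4, 3]])
C = np.array([[6, 99], [2, 5]])
l = np.array([A, B, C])            # shape (3, 2, 2)
I = np.array([[1, 0, 1], [0, 1, 1]])

R = np.diagonal(l.T.dot(I.T).T).T  # shape (2, 2)
```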

Upvotes: 1
