Branden Keck

Reputation: 572

How would you vectorize a fraction of sums of matrices (Expectation Maximization) in numpy?

I am trying to vectorize the following Expectation-Maximization / clustering equation for a 2-dimensional Gaussian distribution using numpy. I have a naive approach that I will include at the end of my question:

$$\Sigma_k = \frac{\sum_{i=1}^{n} z_{ik}\,(x_i - \mu_k)(x_i - \mu_k)^T}{\sum_{i=1}^{n} z_{ik}}$$

For context, the variables and dimensions are defined as follows (the numpy arrays in the code below use the same names):

- X, shape (n, 2): the data points x_i (n samples, 2 features)
- mu, shape (k, 2): the cluster means mu_k
- Z, shape (n, k): the responsibilities z_ik of each point for each cluster
- sigma, shape (k, 2, 2): the covariance estimates, one (2, 2) matrix per cluster

The end product is a numerator that is a sum of (2, 2) matrices and a denominator that is a scalar, so the final value is a (2, 2) covariance matrix estimate. This must also be done for each value of k (here k = 1, 2, 3).

Using these arrays, I've already achieved a vectorized approach for the other parameter updates.

My naive code is as follows:

for kk in range(k):                                           # one covariance per cluster
    numsum = 0
    for ii in range(X.shape[0]):                              # loop over all data points
        diff = (X[ii, :]-mu[kk, :]).reshape(-1, 1)            # (2, 1) column vector
        numsum = numsum + Z[ii, kk]*np.matmul(diff, diff.T)   # responsibility-weighted outer product
    sigma[kk] = numsum / np.sum(Z[:, kk])                     # normalize by total responsibility

Long story long - is there any better way to do this?

Upvotes: 0

Views: 89

Answers (2)

Onyambu

Reputation: 79338

You can use np.einsum:

d = X - mu[:,None]  # shape (k, n, 2): d[i, j] = X[j] - mu[i]
np.einsum('ijk,ijm,ji->imk', d, d, Z/Z.sum(0, keepdims=True))  # shape (k, 2, 2)
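To unpack the subscripts: with d of shape (k, n, 2) and Z of shape (n, k), 'ijk,ijm,ji->imk' sums over the sample axis j, building the responsibility-weighted outer products per cluster; the output order 'imk' vs. 'ikm' doesn't matter because each outer product is symmetric. A minimal check against the loop from the question, using made-up test data (the sizes and seed below are only for illustration, not from the original answer):

import numpy as np

n, k = 1000, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, 2))    # data points
mu = rng.normal(size=(k, 2))   # cluster means
Z = rng.uniform(size=(n, k))   # responsibilities

# einsum version from above: weighted outer products, summed over samples
d = X - mu[:, None]                                                   # (k, n, 2)
sigma_einsum = np.einsum('ijk,ijm,ji->imk', d, d, Z / Z.sum(0, keepdims=True))

# loop version from the question, for comparison
sigma = np.zeros((k, 2, 2))
for kk in range(k):
    numsum = 0
    for ii in range(n):
        diff = (X[ii, :] - mu[kk, :]).reshape(-1, 1)
        numsum = numsum + Z[ii, kk] * np.matmul(diff, diff.T)
    sigma[kk] = numsum / np.sum(Z[:, kk])

assert np.allclose(sigma, sigma_einsum)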

Upvotes: 4

simon

Reputation: 5496

The following should work:

diff = X[np.newaxis, :, :] - mu[:, np.newaxis, :]  # kxnx2
numsum = np.matmul(Z.T[:, np.newaxis, :] * diff.transpose(0, 2, 1), diff)  # kx2x2
sigma_proposed = numsum / Z.sum(axis=0)[:, np.newaxis, np.newaxis]  # kx2x2

Putting it all together, I checked it against the code from the question:

import numpy as np

n, k = 1000, 3

# Create some data
rand = np.random.default_rng(seed=0xC0FFEE)  # For reproducibility
Z = rand.uniform(size=(n, k))
X = rand.normal(size=(n, 2))
mu = rand.normal(size=(k, 2))
sigma = np.zeros((k, 2, 2))

# Code from question
for kk in range(k):
    numsum = 0
    for ii in range(X.shape[0]):
        diff = (X[ii, :]-mu[kk, :]).reshape(-1, 1)
        numsum = numsum + Z[ii, kk]*np.matmul(diff, diff.T)
    sigma[kk] = numsum / np.sum(Z[:, kk])
    
# Proposed
diff = X[np.newaxis, :, :] - mu[:, np.newaxis, :]  # kxnx2
numsum = np.matmul(Z.T[:, np.newaxis, :] * diff.transpose(0, 2, 1), diff)  # kx2x2
sigma_proposed = numsum / Z.sum(axis=0)[:, np.newaxis, np.newaxis]  # kx2x2

assert np.allclose(sigma, sigma_proposed)
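Not part of the original check, but if runtime is the concern, a rough timing comparison can be appended to the same script (reusing the arrays defined above; the iteration count and any absolute numbers are only illustrative):

import timeit

def loop_version():
    # Double loop from the question
    out = np.zeros((k, 2, 2))
    for kk in range(k):
        numsum = 0
        for ii in range(X.shape[0]):
            diff = (X[ii, :] - mu[kk, :]).reshape(-1, 1)
            numsum = numsum + Z[ii, kk] * np.matmul(diff, diff.T)
        out[kk] = numsum / np.sum(Z[:, kk])
    return out

def vectorized_version():
    # Broadcasting + batched matmul version proposed above
    diff = X[np.newaxis, :, :] - mu[:, np.newaxis, :]
    numsum = np.matmul(Z.T[:, np.newaxis, :] * diff.transpose(0, 2, 1), diff)
    return numsum / Z.sum(axis=0)[:, np.newaxis, np.newaxis]

print("loop:      ", timeit.timeit(loop_version, number=20))
print("vectorized:", timeit.timeit(vectorized_version, number=20))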

Upvotes: 2
