tokatokeari

Reputation: 11

maximum variance unfolding

I was wondering if somebody could share an intuitive explanation of how the maximum variance unfolding algorithm works, and of the difference between it and maximum variance correction. I'm still trying to wrap my head around manifold learning and semidefinite programming in general, and would appreciate any help in tying this all together.

Thank you!

Upvotes: 0

Views: 983

Answers (1)

Wasi Ahmad

Reputation: 37711

Maximum variance unfolding (MVU) can be viewed as a non-linear generalization of principal component analysis (PCA). So, MVU is an approach for non-linear dimensionality reduction. The goal of maximum variance unfolding is to learn faithful low-dimensional representations of high-dimensional data.

PCA works poorly when the most important modes of variability are non-linear, and MVU tries to address this. The algorithm for maximum variance unfolding is based on a simple intuition. Imagine the inputs are connected to their k nearest neighbors by rigid rods. (The value of k is the algorithm's one free parameter.) The algorithm attempts to pull the inputs apart, maximizing the sum total of their pairwise distances without breaking (or stretching) the rigid rods that connect nearest neighbors. The outputs are obtained from the final state of this transformation.
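To make the "pull apart without stretching the rods" picture concrete: for centered outputs, maximizing the sum of pairwise distances is equivalent to maximizing the trace of the Gram matrix K of the outputs, which gives the semidefinite program at the heart of MVU (this is the standard Weinberger–Saul formulation; the answer below just summarizes the steps):

```latex
\begin{aligned}
\max_{K} \quad & \operatorname{tr}(K) \\
\text{s.t.} \quad & K \succeq 0, \\
& \textstyle\sum_{ij} K_{ij} = 0, \\
& K_{ii} - 2K_{ij} + K_{jj} = \lVert x_i - x_j \rVert^2
  \quad \text{for all neighbor pairs } (i, j).
\end{aligned}
```

The last constraint encodes the rigid rods (neighbor distances are preserved exactly), the centering constraint pins down translation, and positive semidefiniteness guarantees K is a valid Gram matrix of some embedding.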

The algorithm can be summarized as below:

(1) Form a graph that connects each point to its k neighbors.
(2) Add additional edges by connecting points that are common neighbors of another point in the data set.
(3) Compute the Gram matrix (centered on the origin) that corresponds to the maximum data variance and also preserves the distances between all connected points.
(4) Find the lower dimensional embedding using kernel PCA.
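The steps above can be sketched in a few lines of NumPy. One caveat: step 3 is the hard part and normally requires a semidefinite-programming solver (e.g. CVXPY or SDPT3 — an assumption, the algorithm does not prescribe a solver). In this toy sketch the data is already flat (a line in 3-D), so the centered Gram matrix of the inputs satisfies the MVU constraints and stands in for the SDP solution:

```python
import numpy as np

# Toy data: a noiseless 1-D line embedded in 3-D.
t = np.linspace(0.0, 1.0, 20)
X = np.column_stack([t, 2 * t, 3 * t])        # n x d inputs
n = len(X)
k = 3                                          # the one free parameter

# Step 1: connect each point to its k nearest neighbors.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
neighbors = np.argsort(D, axis=1)[:, 1:k + 1]  # skip self at index 0

# Step 3 (stand-in for the SDP): because this toy data is already flat,
# the centered Gram matrix of the inputs is itself feasible and optimal.
# On curved data, K would come from solving the semidefinite program.
Xc = X - X.mean(axis=0)
K = Xc @ Xc.T

# Step 4: kernel-PCA-style embedding from the top eigenvectors of K,
# scaled by the square roots of the corresponding eigenvalues.
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
Y = eigvecs[:, :1] * np.sqrt(np.maximum(eigvals[:1], 0.0))  # 1-D output

# The embedding preserves distances between connected neighbors
# (the "rigid rods" are neither broken nor stretched).
for i in range(n):
    for j in neighbors[i]:
        assert np.isclose(abs(Y[i, 0] - Y[j, 0]), D[i, j])
```

The final loop checks the defining property of MVU: every neighbor distance in the low-dimensional output Y matches the corresponding distance in the high-dimensional input X.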

I would encourage you to go through this document.

Upvotes: 1
