Reputation: 443
PCA is a dimensionality-reduction algorithm that reduces the number of dimensions of our data. The part I haven't understood is that PCA outputs eigenvectors ordered by decreasing explained variance, such as PC1, PC2, PC3, and so on, and these become the new axes for our data.
How do we apply these new axes when predicting on the test set data?
We achieved dimensionality reduction from n down to some n-k.
Upvotes: 1
Views: 5561
Reputation: 173
The idea of PCA is to reduce the dimensions to a subspace spanned by the n-k eigenvectors with the largest eigenvalues, so the data mapped into your new subspace retains as much of the variance as possible.
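On the test-set part of the question: you learn the mean and the eigenvectors from the training data only, then apply that same projection to the test data, so both sets live in the same reduced space. A minimal sketch with NumPy (toy data and k = 2 are my own assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy training and test data: 3 original features (n = 3)
X_train = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.1])
X_test = rng.normal(size=(10, 3))

# 1. learn the new axes from the TRAINING data only
mean = X_train.mean(axis=0)
cov = np.cov(X_train - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # re-sort by decreasing variance
W = eigvecs[:, order[:2]]                # keep the top k = 2 axes (PC1, PC2)

# 2. project BOTH sets with the same mean and the same axes
Z_train = (X_train - mean) @ W
Z_test = (X_test - mean) @ W             # test set mapped onto the same subspace

print(Z_train.shape, Z_test.shape)       # (100, 2) (10, 2)
```

You would then train your classifier on `Z_train` and predict on `Z_test`; the crucial point is that the test set is never used to fit the axes.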
Furthermore, PCA reduces your dimensionality without needing the class labels of your training data, meaning it is unsupervised.
Another option, if you do know the classes of your training data, is to use LDA, which looks for the projection that maximizes the between-class variation relative to the within-class variation.
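To make the contrast concrete, here is a small sketch of Fisher's LDA direction for two classes, again in NumPy (the toy class means are my own assumptions): it uses the labels, which PCA never sees.

```python
import numpy as np

rng = np.random.default_rng(1)
# two labelled classes in 2-D (supervised, unlike PCA)
X0 = rng.normal(loc=[0.0, 0.0], size=(50, 2))
X1 = rng.normal(loc=[3.0, 3.0], size=(50, 2))

# within-class scatter matrix S_w
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S_w = (np.cov(X0, rowvar=False) * (len(X0) - 1)
       + np.cov(X1, rowvar=False) * (len(X1) - 1))

# Fisher's direction w ~ S_w^{-1} (m1 - m0): maximizes between-class
# separation relative to within-class variation
w = np.linalg.solve(S_w, m1 - m0)
w /= np.linalg.norm(w)

# projecting onto w separates the two classes along a single axis
z0, z1 = X0 @ w, X1 @ w
print(z0.mean() < z1.mean())  # True: class means are well separated on this axis
```

Note that LDA can give at most (number of classes - 1) discriminant axes, whereas PCA gives up to n components regardless of labels.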
Upvotes: 1