Chris Parry

Reputation: 3057

PCA memory error in Sklearn: Alternative Dim Reduction?

I am trying to reduce the dimensionality of a very large matrix using PCA in scikit-learn, but it produces a memory error (the RAM required exceeds 128 GB). I have already set copy=False and I am using the less computationally expensive randomized PCA.

Is there a workaround? If not, what other dimensionality reduction techniques could I use that require less memory? Thank you.


Update: the matrix I am trying to run PCA on is a set of feature vectors, produced by passing a set of training images through a pretrained CNN. The matrix is [300000, 51200]. I have tried between 100 and 500 PCA components.

I want to reduce its dimensionality so I can use these features to train an ML algorithm, such as XGBoost. Thank you.
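For reference, this is roughly the shape of the call that fails. The small random placeholder array is only illustrative so the snippet runs; the real input is the [300000, 51200] CNN feature matrix:

import numpy as np
from sklearn.decomposition import PCA

# Small placeholder; the real matrix is [300000, 51200] (roughly 60 GB as float32).
features = np.random.rand(3000, 512).astype(np.float32)

# Randomized PCA with copy=False, as described above.
pca = PCA(n_components=250, copy=False, svd_solver='randomized')
reduced = pca.fit_transform(features)  # on the real matrix this raises a MemoryError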

Upvotes: 15

Views: 10612

Answers (3)

mon

Reputation: 22368

import numpy as np
from sklearn.datasets import fetch_openml

mnist = fetch_openml('mnist_784', version=1)
mnist.target = mnist.target.astype(np.uint8)

# Split data into training and test sets
X, y = mnist["data"], mnist["target"]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
del mnist

# Use IncrementalPCA to avoid "MemoryError: Unable to allocate array with shape"
from sklearn.decomposition import IncrementalPCA
m, n = X_train.shape
n_batches = 100
n_components = 154

ipca = IncrementalPCA(
    copy=False,
    n_components=n_components,
    batch_size=(m // n_batches)  # each batch must contain at least n_components samples
)
X_train_reduced_ipca = ipca.fit_transform(X_train)

Upvotes: 0

Vivek Puurkayastha

Reputation: 536

You could use IncrementalPCA, available in scikit-learn: from sklearn.decomposition import IncrementalPCA. The rest of the interface is the same as PCA. You need to pass an extra argument, batch_size, which must be greater than or equal to n_components.
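A minimal sketch of how that could look for the matrix in the question, fitting and transforming batch by batch. The placeholder array and the batch/component sizes are assumptions; in practice the features would ideally be a np.memmap so only one batch sits in RAM at a time:

import numpy as np
from sklearn.decomposition import IncrementalPCA

# Placeholder for the real (300000, 51200) feature matrix.
train_features = np.random.rand(5000, 512).astype(np.float32)

n_components = 100   # the question tried 100-500 components
batch_size = 500     # must be >= n_components

# Fit incrementally, one batch at a time.
ipca = IncrementalPCA(n_components=n_components)
for start in range(0, train_features.shape[0], batch_size):
    ipca.partial_fit(train_features[start:start + batch_size])

# Transform in batches as well, so a full centred copy is never materialised.
train_reduced = np.vstack([
    ipca.transform(train_features[start:start + batch_size])
    for start in range(0, train_features.shape[0], batch_size)
])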

However, if you need a non-linear variant such as KernelPCA, there does not seem to be support for anything similar: KernelPCA's memory requirement absolutely explodes. See the Wikipedia article on Nonlinear Dimensionality Reduction.

Upvotes: 6

Chris Parry

Reputation: 3057

In the end, I used TruncatedSVD instead of PCA, which is capable of handling large matrices without memory issues:

from sklearn import decomposition

n_comp = 250
svd = decomposition.TruncatedSVD(n_components=n_comp, algorithm='arpack')
svd.fit(train_features)
print(svd.explained_variance_ratio_.sum())  # total variance retained by the components

train_features = svd.transform(train_features)
test_features = svd.transform(test_features)

Upvotes: 10
