Deepankar Dey

Reputation: 137

SVC classifier taking too much time for training

I am using an SVC classifier with a linear kernel to train my model. Training data: 42,000 records.

    from sklearn.svm import SVC

    model = SVC(probability=True)
    model.fit(self.features_train, self.labels_train)
    y_pred = model.predict(self.features_test)
    train_accuracy = model.score(self.features_train, self.labels_train)
    test_accuracy = model.score(self.features_test, self.labels_test)

It takes more than 2 hours to train my model. Am I doing something wrong? Also, what can be done to improve the training time?

Thanks in advance

Upvotes: 9

Views: 27091

Answers (4)

Nikolay Petrov

Reputation: 88

You can try an accelerated implementation of the algorithms, such as scikit-learn-intelex: https://github.com/intel/scikit-learn-intelex

For SVM in particular, you should definitely be able to get higher compute efficiency.

First, install the package:

    pip install scikit-learn-intelex

Then add the following at the top of your Python script:

    from sklearnex import patch_sklearn
    patch_sklearn()

Note that: "You have to import scikit-learn after these lines. Otherwise, the patching will not affect the original scikit-learn estimators." (from docs)
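
For example, a minimal sketch of that ordering (assuming the same SVC setup as in the question; features_train and labels_train are placeholders for your own data):

    from sklearnex import patch_sklearn
    patch_sklearn()  # patch first ...

    from sklearn.svm import SVC  # ... then import the estimator, so the accelerated version is used

    model = SVC(probability=True)
    model.fit(features_train, labels_train)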

Upvotes: 1

shejomamu

Reputation: 141

I had the same issue, but scaling the data solved the problem:

    # Feature Scaling
    from sklearn.preprocessing import StandardScaler
    sc = StandardScaler()
    X_train = sc.fit_transform(X_train)
    X_test = sc.transform(X_test)
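
If it helps, the scaler and classifier can also be bundled into a single Pipeline, so the scaler is only ever fit on the training data (a sketch, assuming the same SVC setup as in the question; X_train, y_train, X_test, y_test are your own data):

    # Sketch: the Pipeline fits StandardScaler on X_train only and reuses it inside score()
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    model = make_pipeline(StandardScaler(), SVC(probability=True))
    model.fit(X_train, y_train)
    test_accuracy = model.score(X_test, y_test)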

Upvotes: 2

Sripathi

Reputation: 59

Try using the following code. I had a similar issue with a similar amount of training data. I changed it to the following and the response was much faster:

    model = SVC(gamma='auto')

Upvotes: 0

rvf

Reputation: 1449

There are several possibilities to speed up your SVM training. Let n be the number of records, and d the embedding dimensionality. I assume you use scikit-learn.

  • Reducing training set size. Quoting the docs:

    The fit time complexity is more than quadratic with the number of samples which makes it hard to scale to dataset with more than a couple of 10000 samples.

    O(n^2) complexity will most likely dominate other factors. Sampling fewer records for training will thus have the largest impact on time. Besides random sampling, you could also try instance selection methods. For example, principal sample analysis has been proposed recently. A random-sampling example is included in the sketch after this list.

  • Reducing dimensionality. As others have hinted at in their comments, embedding dimension also impacts runtime. Computing inner products for the linear kernel is in O(d). Dimensionality reduction can, therefore, also reduce runtime. In another question, latent semantic indexing was suggested specifically for TF-IDF representations.

  • Parameters. Use SVC(probability=False) unless you need the probabilities, because they "will slow down that method." (from the docs).
  • Implementation. To the best of my knowledge, scikit-learn just wraps around LIBSVM and LIBLINEAR. I am speculating here, but you may be able to speed this up by using efficient BLAS libraries, such as in Intel's MKL.
  • Different classifier. You may try sklearn.svm.LinearSVC, which is...

    [s]imilar to SVC with parameter kernel='linear', but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.

    Moreover, a scikit-learn dev suggested the kernel_approximation module in a similar question; both LinearSVC and a Nystroem kernel approximation appear in the sketch after this list.
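
For concreteness, here is a minimal sketch (not from the original answer) combining random subsampling, LinearSVC, and a Nystroem kernel approximation. X_train, y_train, X_test, y_test stand in for your own NumPy arrays, and max_samples and n_components are illustrative values rather than tuned settings:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.kernel_approximation import Nystroem
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # 1) Reduce training set size by random subsampling
    rng = np.random.RandomState(0)
    max_samples = 10000
    idx = rng.choice(len(X_train), size=min(max_samples, len(X_train)), replace=False)
    X_sub, y_sub = X_train[idx], y_train[idx]

    # 2) LinearSVC (liblinear) scales better than SVC(kernel='linear') (libsvm)
    linear_model = make_pipeline(StandardScaler(), LinearSVC())
    linear_model.fit(X_sub, y_sub)

    # 3) Approximate an RBF kernel with Nystroem features, then fit a linear model
    approx_model = make_pipeline(
        StandardScaler(),
        Nystroem(kernel="rbf", n_components=300),
        LinearSVC(),
    )
    approx_model.fit(X_sub, y_sub)
    print(linear_model.score(X_test, y_test), approx_model.score(X_test, y_test))

The Nystroem step keeps some of the RBF kernel's flexibility while the final model stays linear, so training should scale much better with the number of samples than an exact kernel SVM.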

Upvotes: 15
