Reputation: 1751
I am trying to optimize an SVR model and I am running into overfitting. To work around it, I tried capping the number of iterations instead of letting the solver run until convergence.
To compare the two models I need the number of iterations each one used. How can I find out how many iterations are needed for convergence when the limit is left open (max_iter=-1)?
This is my code:
from sklearn.svm import SVR

model_1 = SVR(kernel='rbf', C=316, epsilon=0, gamma=0.003162, max_iter=2500)
model_1.fit(tr_sets[:, :2], tr_sets[:, 2])
print(model_1.score(tr_sets[:, :2], tr_sets[:, 2]))

model_2 = SVR(kernel='rbf', C=316, epsilon=0, gamma=0.003162, max_iter=-1)
model_2.fit(tr_sets[:, :2], tr_sets[:, 2])
print(model_2.score(tr_sets[:, :2], tr_sets[:, 2]))
Edit: the problem is now solved for the IPython console by setting verbose=2,
but I still need a way to view the output in a Jupyter notebook or Spyder, or to write it to an external file, since the verbose option only seems to work in the IPython console.
Upvotes: 6
Views: 9817
Reputation: 3453
In modern versions of scikit-learn the fitted model exposes an n_iter_
attribute (docs) with the number of iterations the solver ran. For SVR it is a single integer; for SVC it is an ndarray of shape (n_classes * (n_classes - 1) // 2,),
one entry for each pairwise sub-model that had to be fitted:
import numpy as np
from sklearn.svm import SVR

model = SVR()  # your kernel, C, epsilon, gamma, ... go here
model.fit(X, y)
# np.sum covers both the single SVR count and SVC's per-class array
print('Iterations needed:', np.sum(model.n_iter_))
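A quick comparison of the question's capped and unrestricted fits with this attribute - a minimal sketch, assuming a scikit-learn version that exposes n_iter_ and using stand-in random data in place of the question's tr_sets:

import numpy as np
from sklearn.svm import SVR

# stand-in data; swap in tr_sets[:, :2] and tr_sets[:, 2] from the question
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = rng.normal(size=200)

capped = SVR(kernel='rbf', C=316, epsilon=0, gamma=0.003162, max_iter=2500)
free = SVR(kernel='rbf', C=316, epsilon=0, gamma=0.003162, max_iter=-1)

capped.fit(X, y)
free.fit(X, y)

print('capped fit:       ', np.sum(capped.n_iter_), 'iterations')
print('until convergence:', np.sum(free.n_iter_), 'iterations')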
Upvotes: 0
Reputation: 12647
If you want to see the progress of your SVR, pass verbose=2
to the SVR constructor - note that this can slow the fit down by an order of magnitude.
from sklearn.svm import SVR
import numpy as np

# small random regression problem
n_samples, n_features = 10, 5
np.random.seed(0)
y = np.random.randn(n_samples)
X = np.random.randn(n_samples, n_features)

# verbose=2 makes libsvm print its optimization summary, including #iter
clf = SVR(C=1.0, epsilon=0.2, verbose=2)
clf.fit(X, y)
Output will be
optimization finished, #iter = 4
obj = -4.366801, rho = -0.910470
nSV = 7, nBSV = 5
Where #iter
is what you are looking for
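To get that log somewhere other than the IPython console (the Jupyter/Spyder/external-file problem from the question's edit), one option is to redirect the OS-level stdout around the fit, since the libsvm messages appear to be printed from C code and are not caught by a plain sys.stdout swap. A minimal sketch - the helper name, log path, and regex are my own choices, not part of scikit-learn:

import os
import re
import sys

def fit_with_libsvm_log(model, X, y, log_path='svr_fit.log'):
    # route the OS-level stdout (file descriptor 1) into a file while fitting,
    # so libsvm's verbose output lands in log_path instead of the console
    sys.stdout.flush()
    saved_fd = os.dup(1)
    log_fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.dup2(log_fd, 1)
    try:
        model.fit(X, y)
    finally:
        os.dup2(saved_fd, 1)  # restore the original stdout
        os.close(saved_fd)
        os.close(log_fd)
    # pull every "#iter = N" value out of the captured log
    with open(log_path) as f:
        return [int(n) for n in re.findall(r'#iter = (\d+)', f.read())]

Calling fit_with_libsvm_log(SVR(C=1.0, epsilon=0.2, verbose=2), X, y) should return the iteration counts and leave the full libsvm log in svr_fit.log.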
Upvotes: 6