Reputation: 15
I am using sklearn.neural_network's MLPClassifier and I want to plot accuracy against the learning rate (from 0.0001 to 10). I have loaded the data and my code is as follows.
from sklearn.neural_network import MLPClassifier
import numpy as np
X_train = Input_Data['Train_Input']
X_test = Input_Data['Test_Input']
Y_train = Input_Data['Train_Target']
Y_test = Input_Data['Test_Target']
Y_train = Y_train.astype('int')
Y_test = Y_test.astype('int')
classifier = svm.SVC(kernel='linear', C=0.01)
Y_pred = classifier.fit(X_train, Y_train).predict(X_test)
for lr in np.r_[0.001:10:0.002]:
    mlp = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=10, alpha=1e-4,
                        solver='sgd', verbose=10, tol=1e-4, random_state=1,
                        learning_rate=lr)
    print(mlp.fit(X_train, Y_train))
    Acc = accuracy_score(Y_test, Y_pred)
    plt(learning_rate, Acc)
which fails with the following error:
ValueError: learning rate 0.001 is not supported.
Upvotes: 0
Views: 2614
Reputation: 1250
Do you need to run the MLPClassifier with every learning rate from 0.0001 to 10? If so, you'd have to run the classifier in a loop, changing the learning rate each time. You'd also have to decide on a step size between 0.0001 and 10 if you need the learning rate at certain intervals - say 0.0001, 0.0005, 0.0010, ....10. Also note that MLPClassifier's learning_rate parameter only accepts a schedule name ('constant', 'invscaling' or 'adaptive'), which is why you get that ValueError; the numeric value belongs in learning_rate_init.
Say you have a list of learning rates at these intervals,
learning_rates = [0.001, 0.005, ..., 10]
for lr in learning_rates:
    mlp = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=10, alpha=1e-4,
                        solver='sgd', verbose=10, tol=1e-4, random_state=1,
                        learning_rate_init=lr)  # the float goes in learning_rate_init, not learning_rate
    mlp.fit(X_train, Y_train)
    print("Training set score: %f" % mlp.score(X_train, Y_train))
    print("Test set score: %f" % mlp.score(X_test, Y_test))
You can now collect the mlp.score results for the train and test sets into separate lists and plot them against the learning rate with matplotlib.
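As a minimal sketch of that step, assuming the X_train / Y_train / X_test / Y_test arrays from your question and an arbitrary step size:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier

learning_rates = np.arange(0.001, 10, 0.5)  # pick whatever step size you actually need
train_scores, test_scores = [], []

for lr in learning_rates:
    mlp = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=10, alpha=1e-4,
                        solver='sgd', tol=1e-4, random_state=1,
                        learning_rate_init=lr)  # numeric rate goes here
    mlp.fit(X_train, Y_train)
    train_scores.append(mlp.score(X_train, Y_train))  # accuracy on the training set
    test_scores.append(mlp.score(X_test, Y_test))     # accuracy on the test set

plt.plot(learning_rates, train_scores, label='train accuracy')
plt.plot(learning_rates, test_scores, label='test accuracy')
plt.xlabel('learning_rate_init')
plt.ylabel('accuracy')
plt.legend()
plt.show()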
Hope this helps!
I think your confusion is about the max_iter parameter, which controls how many training iterations the algorithm runs, not an upper bound for the learning rate.
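For illustration, a hedged example of how these parameters relate (the values here are arbitrary):

from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(hidden_layer_sizes=(8, 8),
                    max_iter=200,              # number of training iterations, unrelated to the learning rate
                    learning_rate='constant',  # schedule name: 'constant', 'invscaling' or 'adaptive'
                    learning_rate_init=0.001)  # the numeric learning rate itself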
Upvotes: 1