karennml

Reputation: 11

MLPClassifier with warm_start=True converges after one iteration

I am using scikit-learn's MLPClassifier with the following parameters:

from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(hidden_layer_sizes=(3, 2), solver='sgd', verbose=True,
                    learning_rate='constant', learning_rate_init=0.001,
                    random_state=rr, warm_start=True, max_iter=400,
                    n_iter_no_change=20)

I want to fit my classifier on different but very similar data and see how long the NN takes to converge.

I've generated a very simple dataset of 50,000 (x, y) points; in my plots, the colours denote how I have classified the points.
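Roughly, the data looks like this (a minimal sketch; the labelling rule below is just a stand-in for illustration, not my actual rule):

import numpy as np

# Sketch of a comparable toy dataset: 50,000 random (x, y) points.
# The labelling rule below is a stand-in for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50_000, 2))
y = (X[:, 1] > X[:, 0]).astype(int)  # stand-in decision boundary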

My classifier was initially trained on the data from the first plot; then, for each subsequent plot, I ran

mlp.fit(new_data, new_data_labels)

where new_data is my old data plus the new data set.
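In code, the loop looks roughly like this (a sketch; `datasets` here is an assumed list of (X_i, y_i) arrays, one per plot):

import numpy as np

# Sketch of the incremental-fit loop described above.
# `datasets` is an assumed list of (X_i, y_i) arrays, one per plot.
X_all, y_all = datasets[0]
mlp.fit(X_all, y_all)  # initial fit on the first data set

for X_new, y_new in datasets[1:]:
    X_all = np.vstack([X_all, X_new])       # new_data = old data + new data set
    y_all = np.concatenate([y_all, y_new])
    mlp.fit(X_all, y_all)  # warm_start=True continues from the previous weights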

This runs fine; however, when I fit my classifier to my new, bigger data set, it converges after one iteration. It seems that no matter how I vary my data, my classifier converges straight away, but my loss graph looks terrible. I'm not sure where I'm going wrong.

My output looks like this:

Iteration 134, loss = 0.55557070
Iteration 135, loss = 0.55550839
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.
Training set score: 0.663680
Training set loss: 0.555508
Iteration 136, loss = 0.56689723
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.
Training set score: 0.643810
Training set loss: 0.566897
Iteration 137, loss = 0.57723775
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.
Training set score: 0.624447
Training set loss: 0.577238
Iteration 138, loss = 0.58684895
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.

Upvotes: 1

Views: 1032

Answers (1)

frank

Reputation: 11

You can use mlp.loss_curve_ to get the loss curve of the model.
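For example, a minimal sketch of plotting it with matplotlib, assuming mlp has already been fitted:

import matplotlib.pyplot as plt

# loss_curve_ holds the training loss recorded at each iteration; since
# warm_start=True, the curve keeps growing across successive fit() calls
# (your iteration counter continuing from 135 to 136 reflects this)
plt.plot(mlp.loss_curve_)
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()

Note also that with warm_start=True the internal best-loss and no-improvement bookkeeping appears to carry over between fit() calls, which is likely why each subsequent fit trips the n_iter_no_change stopping rule after a single iteration.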

Upvotes: 1
