Reputation: 1
I want to implement L1 regularization in sklearn's MLPClassifier. Here is my code, where alpha=0.0001
is the default value and controls L2 regularization. I want to use L1 regularization instead of L2.
# evaluate a neural network with ReLU and L1 norm regularization
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
# X, y are the feature matrix and labels, defined elsewhere
# prepare the cross-validation procedure (10-fold)
cv = KFold(n_splits=10, random_state=1, shuffle=True)
# create model ["alpha" here is the hyperparameter for L2 regularization]
model = MLPClassifier(alpha=0.0001, hidden_layer_sizes=(100,), activation='relu',
                      solver='adam')
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
Upvotes: 0
Views: 664
Reputation: 66835
It is not possible. The scikit-learn developers have had many long discussions about extending neural-network support and decided against it. They provide an intentionally basic, rigid implementation and nothing more; in MLPClassifier, alpha controls only the L2 penalty. For customisation you need to look at Keras, TensorFlow, PyTorch, JAX, etc.
Even scikit-learn itself recommends other libraries for this: https://scikit-learn.org/stable/related_projects.html#related-projects
Deep neural networks etc.
nolearn: A number of wrappers and abstractions around existing neural network libraries.
Keras: High-level API for TensorFlow with a scikit-learn inspired API.
lasagne: A lightweight library to build and train neural networks in Theano.
skorch: A scikit-learn compatible neural network library that wraps PyTorch.
scikeras: Provides a wrapper around Keras to interface it with scikit-learn. SciKeras is the successor of tf.keras.wrappers.scikit_learn.
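As a rough illustration of that route, here is a minimal sketch using Keras with the scikeras wrapper, so the model still plugs into cross_val_score. It assumes a binary classification problem and that X and y are already defined; the hidden-layer size (100) and L1 strength (0.0001) mirror the original MLPClassifier call, and the epochs/batch_size values are placeholder assumptions, not tuned settings.
# minimal sketch: L1 weight penalty in Keras, wrapped for scikit-learn cross-validation
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from scikeras.wrappers import KerasClassifier
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_model(meta):
    # scikeras passes dataset metadata (e.g. number of input features) via `meta`
    n_features = meta["n_features_in_"]
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        # L1 penalty on the hidden-layer weights (the part MLPClassifier cannot do)
        layers.Dense(100, activation='relu',
                     kernel_regularizer=regularizers.l1(0.0001)),
        layers.Dense(1, activation='sigmoid'),  # assumes binary classification
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# epochs/batch_size below are placeholder assumptions
model = KerasClassifier(model=build_model, epochs=50, batch_size=32, verbose=0)
cv = KFold(n_splits=10, random_state=1, shuffle=True)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
The same idea works with skorch: define the network in PyTorch and add the L1 term to the loss yourself, since PyTorch optimizers only offer an L2-style weight_decay out of the box.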
Upvotes: 0