Reputation: 608
I am running the code below to make all the coefficients positive:
from sklearn.linear_model import Lasso
pos = Lasso(positive=True)
pos.fit(X,y)
list(pos.coef_)
The above code gives me coefficients that are positive or 0, but I need all of them to be strictly positive, i.e. each with some positive impact.
Requirement: all coefficients positive (no coefficient should be zero).
How can I perform this task?
Upvotes: 3
Views: 2567
Reputation: 1688
Lasso does not solve the l0-penalized least squares problem, but the l1-penalized one. The solution you get for alpha=0.01 is the Lasso solution (with a single non-zero coef of ~0.245 for feature #10).
Even if your solution has a squared reconstruction error of 0.0, it still has an l1 penalty of 1.0 (multiplied by alpha).
The solution for lasso with alpha=1.0 has a small squared reconstruction error of 0.04387 (divided by 2 * n_samples == 6) and a smaller l1 penalty of 0.245 (multiplied by alpha).
The objective function minimized by lasso is given in the docstring:
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
To summarize the different priors (or penalties) commonly used to regularize least squares regression:
- the l2 penalty favors any number of non-zero coefficients, but with very small absolute values (close to zero);
- the l1 penalty favors a small number of non-zero coefficients with small absolute values;
- the l0 penalty favors a small number of non-zero coefficients of any absolute value.
l0 being non-convex, it is often not as easy to optimize as l1 and l2. This is why people use l1 (lasso) or l1 + l2 (elastic net) in practice to find sparse solutions, even if they are not as clean as l0 solutions.
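The l1-vs-l2 contrast above can be illustrated with a small sketch (synthetic data and alpha values assumed here, not from the original question): the l1 penalty (Lasso) drives irrelevant coefficients to exactly zero, while the l2 penalty (Ridge) only shrinks them toward zero.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
# Only the first three features actually matter.
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2] + 0.01 * rng.randn(50)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

# Lasso zeroes out (most of) the irrelevant coefficients;
# Ridge keeps them small but non-zero.
print("lasso zero coefs:", int(np.sum(lasso.coef_ == 0)))
print("ridge zero coefs:", int(np.sum(ridge.coef_ == 0)))
```

This is why `positive=True` alone cannot guarantee strictly positive coefficients with Lasso: the l1 penalty itself is what produces the exact zeros.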
Upvotes: 0
Reputation: 11
import pandas as pd
from sklearn.linear_model import Lasso

lasso = Lasso(alpha=1, positive=True)
lasso.fit(X, y)
lasso_coeff = pd.DataFrame({'Coefficient Estimates': pd.Series(lasso.coef_)})
print(lasso_coeff)
# positive=True constrains the coefficients to be non-negative; they can still be exactly zero.
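Note that `positive=True` only forbids negative coefficients; the l1 penalty can still set some of them to exactly 0. A hedged sketch of two workarounds (the data, alpha values, and approach here are assumptions, not part of the original answer): shrink alpha so fewer coefficients are zeroed, or drop the penalty entirely with non-negative least squares via `LinearRegression(positive=True)` (available in scikit-learn >= 0.24). Neither guarantees strictly positive coefficients for arbitrary data.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(40, 4)
# Synthetic target whose true coefficients are all strictly positive.
y = X @ np.array([1.5, 0.8, 0.3, 2.0]) + 0.01 * rng.randn(40)

strong = Lasso(alpha=1.0, positive=True).fit(X, y)   # heavy penalty: zeros likely
weak = Lasso(alpha=1e-4, positive=True).fit(X, y)    # light penalty: zeros unlikely
nnls = LinearRegression(positive=True).fit(X, y)     # non-negative least squares, no l1 penalty

print("alpha=1.0 zero coefs:", int(np.sum(strong.coef_ == 0)))
print("alpha=1e-4 zero coefs:", int(np.sum(weak.coef_ == 0)))
print("NNLS coefs:", nnls.coef_)
```

After fitting, it is worth checking `np.all(lasso.coef_ > 0)` explicitly, since the constraint only enforces non-negativity.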
Upvotes: 1