
Linear Regression: Lasso

Author: 麒麟楚庄王 | Published 2018-11-27 16:55

    https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/

    LASSO (Least Absolute Shrinkage and Selection Operator) is quite similar to ridge, but let's understand the difference by implementing it on our Big Mart problem.

    from sklearn.linear_model import Lasso
    import numpy as np

    # note: `normalize=True` was removed in scikit-learn 1.2; with newer
    # versions, standardize the features with StandardScaler instead
    lassoReg = Lasso(alpha=0.3, normalize=True)

    lassoReg.fit(x_train, y_train)

    pred_cv = lassoReg.predict(x_cv)

    # calculating mse
    mse = np.mean((pred_cv - y_cv)**2)

    mse
    1346205.82

    lassoReg.score(x_cv, y_cv)
    0.5720
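    The Big Mart splits (x_train, x_cv, etc.) are not defined in this excerpt, so here is a minimal, self-contained sketch of the same workflow on synthetic data. The dataset and alpha value are stand-ins, not the article's actual data, so the printed mse and R-square will differ from the numbers above.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Big Mart data
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
x_train, x_cv, y_train, y_cv = train_test_split(X, y, test_size=0.3, random_state=0)

# Recent scikit-learn has no `normalize` argument, so it is omitted here
lassoReg = Lasso(alpha=0.3)
lassoReg.fit(x_train, y_train)

pred_cv = lassoReg.predict(x_cv)
mse = np.mean((pred_cv - y_cv) ** 2)  # mean squared error on the validation split
r2 = lassoReg.score(x_cv, y_cv)       # R-squared on the validation split
print(mse, r2)
```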

    As we can see, the mse has decreased and the R-square has increased compared to the earlier models.

    Therefore, the lasso model is predicting better than both the linear and ridge models.

    Again, let's change the value of alpha and see how it affects the coefficients.

    [Figure: lasso coefficients at alpha = 0.05 and at alpha = 0.5]

    So we can see that even at small values of alpha, the magnitudes of the coefficients have shrunk a lot.

    As we increase the value of alpha, the coefficients approach zero; but in the case of lasso, even at smaller alphas some coefficients are reduced to exactly zero. Therefore, lasso selects only some of the features and reduces the coefficients of the others to zero. This property is known as feature selection, and it is absent in ridge.
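    This contrast can be seen directly by comparing coefficient vectors. The sketch below uses synthetic data (not the Big Mart set) with only 3 of 10 features carrying signal; the alpha values are illustrative. Lasso drives some coefficients to exactly zero, while ridge only shrinks them toward zero.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 10 features, only 3 of which actually carry signal
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

for alpha in (0.05, 0.5, 10.0):
    lasso = Lasso(alpha=alpha).fit(X, y)
    ridge = Ridge(alpha=alpha).fit(X, y)
    # Count coefficients that are exactly zero under each penalty
    print(f"alpha={alpha}: lasso zeros={np.sum(lasso.coef_ == 0)}, "
          f"ridge zeros={np.sum(ridge.coef_ == 0)}")
```

    At the larger alpha, lasso zeroes out the uninformative features entirely, whereas ridge keeps every coefficient nonzero.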

    Here too, λ is the hyperparameter, whose value is set by the alpha argument of the Lasso function.

    Key points:

    1. It uses the L1 regularization technique.

    2. It is generally used when we have a large number of features, because it automatically performs feature selection. (The L1 penalty has built-in feature selection.)
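    The automatic feature selection in point 2 can be sketched as follows. This is an illustrative example on synthetic data: 50 candidate features of which only 5 are informative, with an arbitrary alpha of 1.0. Lasso keeps a small subset and zeroes out the rest.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# 50 candidate features, only 5 informative; coef=True returns the true coefficients
X, y, true_coef = make_regression(n_samples=300, n_features=50, n_informative=5,
                                  noise=1.0, coef=True, random_state=42)

model = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(model.coef_)  # indices of features that survived the L1 penalty
print(f"{len(selected)} of {X.shape[1]} features kept:", selected)
```

    The surviving indices largely coincide with the truly informative features, which is why lasso is a convenient default when the feature count is large.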


    Link: https://www.haomeiwen.com/subject/nwtnqqtx.html