https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/
“Everything should be made as simple as possible, but not simpler.” – Albert Einstein
import numpy as np
from sklearn.linear_model import Ridge

## training the model
## (note: the normalize argument was removed in scikit-learn 1.2;
## on newer versions, scale the features beforehand instead)
ridgeReg = Ridge(alpha=0.05, normalize=True)
ridgeReg.fit(x_train, y_train)
pred_cv = ridgeReg.predict(x_cv)

## calculating mse
mse = np.mean((pred_cv - y_cv)**2)
mse  ## 1348171.96

## calculating score
ridgeReg.score(x_cv, y_cv)  ## 0.5691

We can see that:
As alpha increases, the coefficients shrink in magnitude, moving toward zero.
Computing the R² values shows the largest R² at alpha = 0.05; we should pick the alpha that minimizes the sum of squared errors.
Changing alpha changes the penalty term: the larger alpha is, the larger the penalty, and the smaller the coefficients become.
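The shrinkage effect described above can be checked numerically. This is a minimal sketch on synthetic data (the article's x_train/y_train are not available here, so `make_regression` stands in for them):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

## synthetic stand-in for the article's training data
X, y = make_regression(n_samples=100, n_features=10, noise=10.0, random_state=0)

## as alpha grows, the penalty grows and the coefficients shrink toward zero
for alpha in [0.01, 0.05, 0.5, 5, 50]:
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, np.linalg.norm(model.coef_))
```

The printed L2 norm of the coefficient vector decreases as alpha increases, matching the observation above.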
Key points:
1. Ridge regression shrinks the parameters, so it is mostly used to prevent multicollinearity.
2. It reduces model complexity through coefficient shrinkage.
3. It uses the L2 regularization technique (which I will discuss later in this article).
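The L2 penalty in point 3 adds alpha·‖w‖² to the least-squares loss, which gives ridge regression a closed-form solution. A small sketch on synthetic data (with fit_intercept=False so the bare formula applies) checking that formula against scikit-learn's Ridge:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

alpha = 1.0
## closed-form ridge solution: w = (X^T X + alpha * I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

## sklearn minimizes ||y - Xw||^2 + alpha * ||w||^2, so it should agree
model = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
print(np.allclose(w, model.coef_))
```

Solving the penalized normal equations directly makes clear why ridge handles multicollinearity: adding alpha·I keeps X^T X + alpha·I invertible even when X^T X is singular.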