
【ML】Normalization & Regularization

Author: 盐果儿 | Published 2023-06-10 16:26

Normalization: Rescaling input features to have a consistent scale or range. The goal is to bring all features to a similar magnitude to prevent certain features from dominating others during the learning process. Normalization helps algorithms converge faster and avoids numerical instability. It is typically applied to input features before training a model.

Common normalization techniques (a minimal sketch of both follows the list):

1. Min-max scaling: rescales features to a specific range, often [0, 1].

2. Z-score normalization (standardization): rescales features to zero mean and unit variance.
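
The sketch below, assuming NumPy and a small hypothetical feature matrix X (both illustrative, not from the original post), shows how each technique rescales the data:

import numpy as np

# Hypothetical feature matrix: rows are samples, columns are features
# on very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Min-max scaling: rescale each feature (column) to [0, 1].
x_min, x_max = X.min(axis=0), X.max(axis=0)
X_minmax = (X - x_min) / (x_max - x_min)

# Z-score normalization: zero mean and unit variance per feature.
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_zscore = (X - mu) / sigma

print(X_minmax)  # every column now lies in [0, 1]
print(X_zscore)  # every column now has mean 0, std 1

In practice the scaling statistics (min/max or mean/std) are computed on the training set only and then reused on validation and test data, so information from unseen data never leaks into training.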

Regularization: A technique used to prevent overfitting, which occurs when a model becomes too complex and fits the training data too closely, resulting in poor generalization to unseen data. Regularization introduces a penalty term into the loss function, encouraging the model to learn simpler or smoother representations. The penalty can take different forms: L1 regularization (Lasso) adds the absolute values of the model's coefficients to the loss function, while L2 regularization (Ridge) adds their squared values. The regularization strength controls the trade-off between fitting the training data and keeping the model parameters small. Regularization is commonly used in linear regression, logistic regression, and neural networks, among other algorithms.
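
A minimal sketch of how the penalty term enters the loss, again using NumPy; the function name regularized_mse, the strength lam, and the synthetic data are hypothetical, chosen only to illustrate the L1/L2 forms described above:

import numpy as np

def regularized_mse(w, X, y, lam, penalty="l2"):
    # Mean-squared error on the training data...
    residual = X @ w - y
    loss = np.mean(residual ** 2)
    # ...plus a penalty that discourages large coefficients.
    if penalty == "l1":
        loss += lam * np.sum(np.abs(w))  # Lasso: absolute values of coefficients
    else:
        loss += lam * np.sum(w ** 2)     # Ridge: squared values of coefficients
    return loss

# Hypothetical synthetic data; lam sets the fit-vs-simplicity trade-off.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, 0.0, -2.0]) + rng.normal(scale=0.1, size=100)
w = np.ones(3)
print(regularized_mse(w, X, y, lam=0.1, penalty="l1"))
print(regularized_mse(w, X, y, lam=0.1, penalty="l2"))

Larger lam values push the coefficients toward zero (L1 can drive some exactly to zero, which is why Lasso is also used for feature selection), while lam = 0 recovers the unregularized loss.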

