Overfitting and Regularization
- What should we do if our model is too complicated?
- Fundamental causes of overfitting:
an overly complicated model (usually the variance is too large); limited training data/labels
- increase the training data size
- avoid over-training on your dataset
- filter out features:
feature reduction
principal component analysis (PCA)
- regularization (see the sketch below):
ridge regression
least absolute shrinkage and selection operator (LASSO)
logistic regression with an L2 or L1 penalty
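A minimal sketch of the two remedy families listed above, assuming scikit-learn and synthetic data (all names, sizes, and alpha values are illustrative, not from the notes):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))            # 100 samples, 20 partly redundant features
y = 3.0 * X[:, 0] + rng.normal(size=100)  # only the first feature really matters

# Feature reduction: keep only the directions that explain the most variance.
X_reduced = PCA(n_components=5).fit_transform(X)

# Regularization: keep all features but penalize large coefficients.
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty shrinks coefficients
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty drives many coefficients to exactly 0
print(ridge.coef_[:3], lasso.coef_[:3])
```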
Model Error
- Error of regression models
- Bias measures how far off in general the model's predictions are from the correct value.
- Variance measures how much the predictions for a given point vary between different realizations of the model.
(figure: bias-variance tradeoff)
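A small simulation sketch of the bias/variance definitions above (the noisy sine setup, sample sizes, and polynomial degrees are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(2 * np.pi * x)   # the function we are trying to learn
x0 = 0.3                                   # estimate bias/variance at this point

def bias2_and_variance(degree, n_trials=200):
    preds = []
    for _ in range(n_trials):               # many independent training sets
        x = rng.uniform(0.0, 1.0, 20)
        y = true_f(x) + rng.normal(0.0, 0.3, 20)
        coefs = np.polyfit(x, y, degree)     # fit a polynomial of the given degree
        preds.append(np.polyval(coefs, x0))
    preds = np.array(preds)
    bias2 = (preds.mean() - true_f(x0)) ** 2  # how far off the predictions are on average
    variance = preds.var()                    # how much they fluctuate between realizations
    return bias2, variance

print("degree 1:", bias2_and_variance(1))   # simple model: high bias, low variance
print("degree 9:", bias2_and_variance(9))   # complex model: low bias, high variance
```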
Ridge Regression (L2 penalty); LASSO (L1 penalty)
- choose model -> calculate loss function -> numerical optimization
- ridge regression modifies the loss function of linear regression by adding a penalty on the size of the coefficients, which lowers the model's variance
- L2 penalty
this is equivalent to a constraint-conditioned optimization problem (solved with a Lagrange multiplier)
- Hyperparameter Optimization
the value of the penalty weight lambda is usually obtained empirically, e.g. by cross validation (see the sketch below)
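A sketch of choosing lambda by cross validation, assuming scikit-learn (which calls the penalty weight alpha); the candidate values and the data are made up:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Ridge loss: sum_i (y_i - x_i . w)^2 + lambda * ||w||_2^2
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=200)

# Try several candidate lambdas; RidgeCV picks the one with the lowest CV error.
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0], cv=5).fit(X, y)
print("selected lambda:", model.alpha_)
```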
L1 vs. L2 Regularization (ℓ1 and ℓ2 norms)
- L1 penalty in loss function
- Norm1 = |x1| + |x2|
Norm2 = sqrt(x1^2 + x2^2)
Norm3 = (|x1|^3 + |x2|^3)^(1/3)
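The same three norms computed with numpy, as a quick check (the vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0])
norm1 = np.abs(x).sum()                    # |x1| + |x2|             -> 7.0
norm2 = np.sqrt((x ** 2).sum())            # sqrt(x1^2 + x2^2)       -> 5.0
norm3 = (np.abs(x) ** 3).sum() ** (1 / 3)  # (|x1|^3 + |x2|^3)^(1/3)
# np.linalg.norm gives the same values:
print(norm1, np.linalg.norm(x, 1))
print(norm2, np.linalg.norm(x, 2))
print(norm3, np.linalg.norm(x, 3))
```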
Cross Validation
- what is it:
assess how your model's results will generalize to another, independent dataset
- K-fold Cross Validation
- split the dataset into three parts: a training set, a validation set, and a testing set. Train a model on the training set, try (say) three different lambdas to get three candidate models, compute the three errors on the validation set, and pick the lambda with the smallest error. Then use that lambda and the combined data (training + validation) to fit the final model. This can be biased because the training set and validation set may have different distributions, so instead we split the data into k folds, run the validation for each choice of held-out fold, and average the errors.
- model selection with cross validation:
use cross validation to do hyperparameter tuning (sketched below)
cross validation can only validate your model selection
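A sketch of the K-fold procedure above used for hyperparameter tuning, assuming scikit-learn; the candidate lambdas, the model (LASSO), and the synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 8))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.3, size=150)

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
best_lam, best_err = None, np.inf
for lam in [0.01, 0.1, 1.0]:                      # candidate lambdas
    fold_errors = []
    for train_idx, val_idx in kfold.split(X):     # train on k-1 folds, validate on the rest
        model = Lasso(alpha=lam).fit(X[train_idx], y[train_idx])
        fold_errors.append(np.mean((model.predict(X[val_idx]) - y[val_idx]) ** 2))
    avg_err = np.mean(fold_errors)                # average validation error over the folds
    if avg_err < best_err:
        best_lam, best_err = lam, avg_err

final_model = Lasso(alpha=best_lam).fit(X, y)     # refit on all the data with the chosen lambda
print("selected lambda:", best_lam)
```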
Confusion Matrix
| prediction \ actual | p (actual positive) | n (actual negative) |
|---|---|---|
| y (predicted positive) | true positive | false positive |
| n (predicted negative) | false negative | true negative |
- in "true positive":
"true" means: you made a correct prediction
"positive" means: what your prediction is - Different metrics(量度) for model evaluation:
precision = tp / (tp + fp) (e.g. what matters for spam filtering)
recall = tp / (tp + fn)
accuracy = (tp + tn) / all
cyber security: high recall is necessary; improve precision as much as possible
if the data is really imbalanced (a huge difference between the numbers of negatives and positives), look at precision and recall rather than accuracy, because the negatives can be so numerous that accuracy is high by nature
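A sketch of these metrics on a tiny made-up label vector, assuming scikit-learn for the confusion matrix (its rows are actual classes, its columns predicted classes):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # ground truth
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])   # model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(precision, recall, accuracy)            # 0.75 0.75 0.75
```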
Result Evaluation Metric - ROC curve
- receiver operating characteristic curve
- false positive rate = number of false positives / number of real negatives
true positive rate = number of true positives / number of real positives
- the curve is traced out by taking the same model on the same data and computing these rates at different thresholds
- the more the classifier's curve bows toward the top-left, the better (the larger the gap between its area and the area under the random-chance diagonal, the better); the area lies in [0.5, 1], and a larger area means better classification
- special points in ROC space
best case (0, 1)
worst case: (1, 0)
- Area under the ROC Curve (AUC)
AUC value: [0, 1]
The larger the value, the better the classification performance of your classifier.
ROC AUC is the probability that a randomly chosen positive example is ranked more highly than a randomly chosen negative example (compared to the ground truth).
- example: ground truth labels for examples a b c d e are 0 0 1 1 0
a ranking of c a b d e (highest score first) gives the label sequence 1 0 0 1 0; 4 of the 6 positive-negative pairs are ordered correctly, so AUC = 4/6 ≈ 0.67
a ranking of d c a b e gives 1 1 0 0 0; both positives sit above all negatives, so AUC = 1 (the counting is sketched below)
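A sketch of that pairwise counting on the example above (the dictionary and function names are mine, not from the notes):

```python
def rank_auc(ranking, labels):
    """AUC as the fraction of (positive, negative) pairs where the positive is ranked higher."""
    pos = [ranking.index(k) for k, v in labels.items() if v == 1]
    neg = [ranking.index(k) for k, v in labels.items() if v == 0]
    correct = sum(p < n for p in pos for n in neg)  # smaller index = ranked higher
    return correct / (len(pos) * len(neg))

labels = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 0}   # ground truth 0 0 1 1 0
print(rank_auc(list("dcabe"), labels))  # both positives on top -> 1.0
print(rank_auc(list("cabde"), labels))  # 4 of 6 pairs correct  -> 0.666...
```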