Evaluating a hypothesis
The data set is separated into 3 parts:
1. Training set (60%)
2. Cross-validation set (20%)
3. Test set (20%)
So there are 3 kinds of errors:
1. Training error -> used to train the model
2. Cross-validation error -> used to select the model
3. Test error -> used to estimate the generalization error
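A minimal sketch of this split and the three errors, assuming scikit-learn; the synthetic data, the LinearRegression model, and mean squared error as the cost are illustrative assumptions, not part of the notes:

```python
# Minimal sketch: 60/20/20 split and the three errors (scikit-learn).
# The synthetic data below is only for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(0, 1, size=200)   # noisy quadratic target

# 60% training, then split the remaining 40% evenly into CV and test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LinearRegression().fit(X_train, y_train)                 # 1. train model
train_error = mean_squared_error(y_train, model.predict(X_train))
cv_error = mean_squared_error(y_cv, model.predict(X_cv))         # 2. select model
test_error = mean_squared_error(y_test, model.predict(X_test))   # 3. estimate generalization error
print(f"train={train_error:.3f}  cv={cv_error:.3f}  test={test_error:.3f}")
```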
Diagnosing bias vs. variance
- By the relationship between the degree of the polynomial and the error
- By the relationship between the regularization parameter lambda and the error
- By learning curves
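A sketch of the first diagnostic, again assuming scikit-learn: sweep the polynomial degree and compare training error with cross-validation error. Both errors high suggests high bias; a low training error with a much higher cross-validation error suggests high variance. The data and the degree range are illustrative assumptions.

```python
# Sketch: diagnose bias vs. variance by sweeping the polynomial degree
# and comparing training error with cross-validation error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=200)   # illustrative data

X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.4, random_state=0)

for degree in range(1, 11):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    cv_err = mean_squared_error(y_cv, model.predict(X_cv))
    # High bias: both errors high.  High variance: train_err low, cv_err much higher.
    print(f"degree={degree:2d}  train={train_err:.3f}  cv={cv_err:.3f}")
```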
What to try next?
1. Get more training examples -> fixes high variance (overfit)
2. Try a smaller set of features -> fixes high variance (overfit)
3. Try getting additional features -> fixes high bias (underfit)
4. Try adding polynomial features -> fixes high bias (underfit)
5. Try decreasing lambda -> fixes high bias (underfit) (see the lambda sweep sketch after this list)
6. Try increasing lambda -> fixes high variance (overfit)
7. Try a larger neural network -> fixes high bias (underfit)
8. Try a smaller neural network -> fixes high variance (overfit)
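Items 5 and 6 can be checked the same way: a sketch that sweeps the regularization strength on a flexible polynomial model (Ridge's alpha plays the role of lambda here). The data and the alpha grid are illustrative assumptions, not the course's implementation.

```python
# Sketch for items 5-6: sweep the regularization strength (Ridge alpha ~ lambda).
# Very small lambda on a flexible model tends toward high variance;
# very large lambda tends toward high bias.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=60)   # illustrative data

X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.4, random_state=1)

for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    model = make_pipeline(PolynomialFeatures(degree=8), Ridge(alpha=alpha))
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    cv_err = mean_squared_error(y_cv, model.predict(X_cv))
    print(f"lambda={alpha:7.2f}  train={train_err:.3f}  cv={cv_err:.3f}")
```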