Preface
The one sentence worth remembering: Statistical learning refers to a set of tools for modeling and understanding complex datasets.
Introduction
A brief introduction to the three data sets used throughout the book: wages, stock-market movements, and gene expression. It then sketches the history of statistical learning (Gauss published on linear regression very early, and the field has developed rapidly over the past few decades), contrasts this book with ESL (shallower in theory, more applied in orientation), and closes with an outline of the book's structure.
What Is Statistical Learning
More generally, suppose that we observe a quantitative response Y and p different predictors, X1, X2, ..., Xp. We assume that there is some relationship between Y and X = (X1, X2, ..., Xp), which can be written in the very general form Y = f(X) + ε.
In essence, statistical learning refers to a set of approaches for estimating f. In this chapter we outline some of the key theoretical concepts that arise in estimating f, as well as tools for evaluating the estimates obtained.
Why Estimate f?
There are two main reasons that we may wish to estimate f: prediction and inference.
Prediction
In many situations, a set of inputs X are readily available, but the output Y cannot be easily obtained. In this setting, since the error term averages to zero, we can predict Y using Ŷ = f̂(X), where f̂ represents our estimate for f, and Ŷ represents the resulting prediction for Y. In this setting, f̂ is often treated as a black box, in the sense that one is not typically concerned with the exact form of f̂, provided that it yields accurate predictions for Y.
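As a concrete illustration (my own sketch, not from the book, which works in R): here f̂ is a fitted scikit-learn model treated purely as a black box for prediction, on simulated data with an assumed true f(X) = 3X + 2.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulate data from Y = f(X) + eps with an assumed f(X) = 3*X + 2.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
eps = rng.normal(0, 1, size=100)        # irreducible error term
y = 3 * X.ravel() + 2 + eps

# Estimate f by least squares; f_hat is our black box.
f_hat = LinearRegression().fit(X, y)

# Predict Y_hat = f_hat(X) for new inputs.
X_new = np.array([[4.0], [7.5]])
print(f_hat.predict(X_new))  # predictions for two new observations
```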
Reducible error and irreducible error: we can potentially improve the accuracy of f̂ by using the most appropriate statistical learning technique to estimate f; that part of the error is reducible. But no matter how well we estimate f, we cannot reduce the error introduced by ε, a random error term that is independent of X and has mean zero; that part is irreducible.
It is important to keep in mind that the irreducible error will always provide an upper bound on the accuracy of our prediction for Y. This bound is almost always unknown in practice.
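The book makes this split precise (equation 2.3): for fixed f̂ and X, the expected squared prediction error decomposes into a reducible and an irreducible part,

$$
E(Y - \hat{Y})^2 = E\big[f(X) + \varepsilon - \hat{f}(X)\big]^2 = \underbrace{\big[f(X) - \hat{f}(X)\big]^2}_{\text{reducible}} + \underbrace{\mathrm{Var}(\varepsilon)}_{\text{irreducible}}
$$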
Inference: We are often interested in understanding the way that Y is affected as X1, ..., Xp change. In this situation we wish to estimate f, but our goal is not necessarily to make predictions for Y. We instead want to understand the relationship between X and Y, or more specifically, to understand how Y changes as a function of X1, ..., Xp. Now f̂ cannot be treated as a black box, because we need to know its exact form. Historically, most methods for estimating f have taken a linear form.
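For inference, one inspects the fitted model itself rather than only its predictions. A minimal sketch (my own illustration, on hypothetical simulated data) using statsmodels, whose OLS output is built for exactly these questions:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: Y depends linearly on two predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, size=200)

# Fit ordinary least squares; the explicit linear form of f_hat is
# what makes the X-Y relationship interpretable.
model = sm.OLS(y, sm.add_constant(X)).fit()

# The fitted model answers inferential questions directly:
print(model.params)   # intercept and slope estimates
print(model.pvalues)  # evidence that each predictor is associated with Y
```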
How Do We Estimate f?
Parametric Methods
Non-parametric Methods
The Trade-Off Between Prediction Accuracy and Model Interpretability
Supervised Versus Unsupervised Learning
Many problems fall naturally into the supervised or unsupervised learning paradigms. However, sometimes the question of whether an analysis should be considered supervised or unsupervised is less clear-cut; when the response is observed for only a subset of the observations, we have a semi-supervised learning problem.
Regression Versus Classification Problems
Least squares linear regression (Chapter 3) is used with a quantitative response, whereas logistic regression (Chapter 4) is typically used with a qualitative (two-class, or binary) response. As such, it is often used as a classification method.
Some statistical methods, such as K-nearest neighbors (Chapters 2 and 4) and boosting (Chapter 8), can be used in the case of either quantitative or qualitative responses.
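As a quick illustration of one such method (my own sketch, not from the book): scikit-learn exposes K-nearest neighbors in both a regression and a classification flavor.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))

# Quantitative response -> KNN regression.
y_quant = X[:, 0] ** 2 + rng.normal(0, 0.1, size=100)
knn_reg = KNeighborsRegressor(n_neighbors=5).fit(X, y_quant)

# Qualitative (binary) response -> KNN classification, same idea.
y_qual = (X[:, 0] + X[:, 1] > 0).astype(int)
knn_clf = KNeighborsClassifier(n_neighbors=5).fit(X, y_qual)

x0 = np.array([[0.5, -0.2]])
print(knn_reg.predict(x0))  # predicted numeric value
print(knn_clf.predict(x0))  # predicted class label
```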
Assessing Model Accuracy
There is no free lunch in statistics: no one method dominates all others over all possible data sets.
Measuring the Quality of Fit
In the regression setting, the most commonly used measure is the mean squared error (MSE), given by MSE = (1/n) Σᵢ (yᵢ − f̂(xᵢ))², where f̂(xᵢ) is the prediction that f̂ gives for the ith observation (equation 2.5 in the book).
A well-fit method makes the training MSE given by (2.5) small. However, we are really not interested in whether f̂(xᵢ) ≈ yᵢ; instead, we want to know whether f̂(x₀) is approximately equal to y₀, where (x₀, y₀) is a previously unseen test observation not used to train the statistical learning method. We want to choose the method that gives the lowest test MSE, as opposed to the lowest training MSE. In other words, if we had a large number of test observations, we could compute the average squared prediction error over those test observations, Ave(y₀ − f̂(x₀))², and select the method for which this quantity is smallest.
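A small simulation makes the distinction concrete (my own sketch; polynomial degree stands in for model flexibility): training MSE keeps falling as flexibility grows, while test MSE eventually turns back up.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return np.sin(2 * x)  # assumed true f, unknown in practice

def make_data(n):
    x = rng.uniform(0, 3, n)
    return x, f(x) + rng.normal(0, 0.3, n)  # Y = f(X) + eps

x_train, y_train = make_data(50)
x_test, y_test = make_data(1000)  # large held-out test set

for degree in (1, 4, 10, 20):
    coefs = np.polyfit(x_train, y_train, degree)  # fit f_hat
    mse_train = np.mean((y_train - np.polyval(coefs, x_train)) ** 2)
    mse_test = np.mean((y_test - np.polyval(coefs, x_test)) ** 2)
    print(degree, round(mse_train, 3), round(mse_test, 3))
```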
The Bias-Variance Trade-Off
Equation 2.7 tells us that in order to minimize the expected test error, we need to select a statistical learning method that simultaneously achieves low variance and low bias.
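For reference, the decomposition in equation 2.7 of the book: the expected test MSE at a point x₀ splits into the variance of f̂(x₀), its squared bias, and the irreducible error,

$$
E\big(y_0 - \hat{f}(x_0)\big)^2 = \mathrm{Var}\big(\hat{f}(x_0)\big) + \big[\mathrm{Bias}\big(\hat{f}(x_0)\big)\big]^2 + \mathrm{Var}(\varepsilon)
$$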
Variance refers to the amount by which f̂ would change if we estimated it using a different training data set; bias refers to the error introduced by approximating a real-life problem, which may be extremely complicated, by a much simpler model. In general, more flexible statistical methods have higher variance.
As a general rule, as we use more flexible methods, the variance will increase and the bias will decrease.
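One way to see this empirically (my own sketch, reusing the simulated setup above): refit a model of each flexibility on many fresh training sets and watch how much f̂(x₀) moves around.

```python
import numpy as np

rng = np.random.default_rng(4)
x0 = 1.5  # fixed test point

def sample_fit(degree):
    # Draw a fresh training set and return f_hat(x0) for this fit.
    x = rng.uniform(0, 3, 50)
    y = np.sin(2 * x) + rng.normal(0, 0.3, 50)
    return np.polyval(np.polyfit(x, y, degree), x0)

for degree in (1, 10):
    fits = np.array([sample_fit(degree) for _ in range(500)])
    bias = fits.mean() - np.sin(2 * x0)
    # Flexible fits track f better (lower bias) but vary more run to run.
    print(degree, round(bias, 3), round(fits.var(), 3))
```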