
*Machine Learning and Practice: From Zero to Kaggle Competitions*, Reading Notes 6

Author: 风之旅人c | Published 2020-02-28 17:26

    K-Nearest Neighbors

    Model Introduction

    The K-nearest-neighbor model is very intuitive and easy to understand, and the algorithm itself is simple to describe, as illustrated in the figure below. Suppose we have a set of training samples carrying class labels, distributed in a feature space, with each color denoting a class. To classify a new sample, we look for the K labeled samples closest to it in the feature space and use them as references for the classification decision, typically by majority vote; a minimal code sketch of this rule follows the figure.


    [Figure: K-nearest-neighbor classification in feature space]
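
    As a hedged, from-scratch sketch of the decision rule described above (illustrative only; knn_predict and its argument names are not from the book, which uses scikit-learn's implementation below):

    # Minimal sketch of the K-nearest-neighbor decision rule.
    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x, k=5):
        # Euclidean distance from the query point x to every training sample.
        distances = np.linalg.norm(X_train - x, axis=1)
        # Indices of the k closest training samples.
        nearest = np.argsort(distances)[:k]
        # Majority vote among the labels of those k neighbors.
        return Counter(y_train[nearest]).most_common(1)[0][0]

    Note that all of the cost sits in the prediction step: nothing is learned up front, and every query scans the whole training set.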

    Data Description

    We apply the K-nearest-neighbor algorithm to the classification of biological species, using the famous Iris dataset.

    # Import the iris data loader from sklearn.datasets.
    from sklearn.datasets import load_iris
    # Read the data with the loader and store it in the variable iris.
    iris = load_iris()
    # Check the size of the data.
    iris.data.shape
    
    (150, 4)
    
    # Inspect the dataset description; this is a good habit for a machine learning practitioner.
    print(iris.DESCR)
    
    .. _iris_dataset:
    
    Iris plants dataset
    --------------------
    
    **Data Set Characteristics:**
    
        :Number of Instances: 150 (50 in each of three classes)
        :Number of Attributes: 4 numeric, predictive attributes and the class
        :Attribute Information:
            - sepal length in cm
            - sepal width in cm
            - petal length in cm
            - petal width in cm
            - class:
                    - Iris-Setosa
                    - Iris-Versicolour
                    - Iris-Virginica
                    
        :Summary Statistics:
    
        ============== ==== ==== ======= ===== ====================
                        Min  Max   Mean    SD   Class Correlation
        ============== ==== ==== ======= ===== ====================
        sepal length:   4.3  7.9   5.84   0.83    0.7826
        sepal width:    2.0  4.4   3.05   0.43   -0.4194
        petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
        petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
        ============== ==== ==== ======= ===== ====================
    
        :Missing Attribute Values: None
        :Class Distribution: 33.3% for each of 3 classes.
        :Creator: R.A. Fisher
        :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
        :Date: July, 1988
    
    The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
    from Fisher's paper. Note that it's the same as in R, but not as in the UCI
    Machine Learning Repository, which has two wrong data points.
    
    This is perhaps the best known database to be found in the
    pattern recognition literature.  Fisher's paper is a classic in the field and
    is referenced frequently to this day.  (See Duda & Hart, for example.)  The
    data set contains 3 classes of 50 instances each, where each class refers to a
    type of iris plant.  One class is linearly separable from the other 2; the
    latter are NOT linearly separable from each other.
    
    .. topic:: References
    
       - Fisher, R.A. "The use of multiple measurements in taxonomic problems"
         Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
         Mathematical Statistics" (John Wiley, NY, 1950).
       - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
         (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
       - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
         Structure and Classification Rule for Recognition in Partially Exposed
         Environments".  IEEE Transactions on Pattern Analysis and Machine
         Intelligence, Vol. PAMI-2, No. 1, 67-71.
       - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
         on Information Theory, May 1972, 431-433.
       - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
         conceptual clustering system finds 3 classes in the data.
       - Many, many more ...
    
    # Import train_test_split from sklearn.model_selection and split the data:
    # 75% for training and 25% for testing, with a fixed random seed.
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.25, random_state=33)
    
    # Import the data standardization module from sklearn.preprocessing.
    from sklearn.preprocessing import StandardScaler
    # Import KNeighborsClassifier, the K-nearest-neighbor classifier, from sklearn.neighbors.
    from sklearn.neighbors import KNeighborsClassifier

    # Standardize the feature data of the training and test sets.
    ss = StandardScaler()
    X_train = ss.fit_transform(X_train)
    X_test = ss.transform(X_test)

    # Use the K-nearest-neighbor classifier to predict the classes of the test
    # data; the predictions are stored in the variable y_predict.
    knc = KNeighborsClassifier()
    knc.fit(X_train, y_train)
    y_predict = knc.predict(X_test)

    # Evaluate accuracy with the model's built-in score method.
    print('The accuracy of K-Nearest Neighbor Classifier is', knc.score(X_test, y_test))
    
    The accuracy of K-Nearest Neighbor Classifier is 0.8947368421052632
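
    The classifier above uses scikit-learn's default of n_neighbors=5. As a rough sketch, not from the book, one might compare a few settings of K on the same standardized split (model and k are illustrative names):

    # Sketch: try several values of K and report test accuracy for each.
    for k in (1, 3, 5, 7, 9):
        model = KNeighborsClassifier(n_neighbors=k)
        model.fit(X_train, y_train)
        print(k, model.score(X_test, y_test))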
    
    # Use the classification_report module from sklearn.metrics for a more
    # detailed analysis of the prediction results.
    from sklearn.metrics import classification_report
    print(classification_report(y_test, y_predict, target_names=iris.target_names))
    
                  precision    recall  f1-score   support
    
          setosa       1.00      1.00      1.00         8
      versicolor       0.73      1.00      0.85        11
       virginica       1.00      0.79      0.88        19
    
        accuracy                           0.89        38
       macro avg       0.91      0.93      0.91        38
    weighted avg       0.92      0.89      0.90        38
    
    Characteristics Analysis
    

    The K-nearest-neighbor algorithm is a very intuitive machine learning model. Its biggest difference from other algorithms is that it has no parameter-training phase: we never run a learning algorithm over the training data, but instead make the classification decision directly from how the test sample sits within the distribution of the training data. K-nearest neighbors is therefore one of the simplest non-parametric models. That same decision rule, however, leads to very high computational complexity and memory consumption, since every prediction must search the entire stored training set; data structures such as the KD-tree can reduce this search cost.
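
    As a minimal sketch of that last point (not from the book; knc_kd is an illustrative name), scikit-learn exposes the choice of neighbor-search structure through the algorithm parameter of KNeighborsClassifier:

    # Sketch: request a KD-tree for neighbor search instead of brute force.
    # The default algorithm='auto' already picks a tree-based search when
    # it is likely to pay off for the given data.
    knc_kd = KNeighborsClassifier(n_neighbors=5, algorithm='kd_tree')
    knc_kd.fit(X_train, y_train)
    print(knc_kd.score(X_test, y_test))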
