StackingClassifier

Author: taojinglong | Published 2018-02-08 09:55

    Preface

    The Ensemble methods section of the official scikit-learn documentation covers only three ways of combining models: bagging, boosting, and voting. Reading further, and inspired by the "stacked generalization" (学习法) part of the ensemble-learning chapter in Zhou Zhihua's Machine Learning, I came across and studied stacking, which I record here.

    Overview

    Stacking is an ensemble learning technique that combines multiple classification models via a meta-classifier. The individual classification models are trained on the complete training set; the meta-classifier is then fitted on the outputs of the individual models in the ensemble, the so-called meta-features. The meta-classifier can be trained either on the predicted class labels or on the probabilities produced by the ensemble.
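
    As a minimal from-scratch sketch of this naive variant (level-1 models trained on the full training set, meta-classifier fitted on their label outputs; the iris data here is just an assumed illustration):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Level 1: train each base classifier on the complete training set
    level1 = [KNeighborsClassifier(n_neighbors=1), GaussianNB()]
    for clf in level1:
        clf.fit(X, y)

    # Meta-features: one column of predicted labels per level-1 classifier
    meta_X = np.column_stack([clf.predict(X) for clf in level1])

    # Level 2: fit the meta-classifier on the meta-features
    meta_clf = LogisticRegression().fit(meta_X, y)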

    Flow chart:
    [flow-chart image missing from the original post]

    Algorithm summary:
    [algorithm-summary image missing from the original post]


    Now let's go straight to the implementation.

    环境

    • ubantu 16.04 + jupyter + python2.7
    • scikit-learn + mlxtend + anconda

    Example 1: Basic StackingClassifier

    from sklearn import datasets
    from sklearn import model_selection
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier
    from mlxtend.classifier import StackingClassifier
    import numpy as np

    # Iris data (two features), as in the mlxtend documentation example
    iris = datasets.load_iris()
    X, y = iris.data[:, 1:3], iris.target

    # Level-1 classifiers and the level-2 (meta) classifier
    clf1 = KNeighborsClassifier(n_neighbors=1)
    clf2 = RandomForestClassifier(random_state=1)
    clf3 = GaussianNB()
    lr = LogisticRegression()
    sclf = StackingClassifier(classifiers=[clf1, clf2, clf3],
                              meta_classifier=lr)
    
    print('3-fold cross validation:\n')
    
    for clf, label in zip([clf1, clf2, clf3, sclf], 
                          ['KNN', 
                           'Random Forest', 
                           'Naive Bayes',
                           'StackingClassifier']):
    
        scores = model_selection.cross_val_score(clf, X, y, 
                                                  cv=3, scoring='accuracy')
        print("Accuracy: %0.2f (+/- %0.2f) [%s]" 
              % (scores.mean(), scores.std(), label))
    
    3-fold cross validation:
    
    Accuracy: 0.91 (+/- 0.01) [KNN]
    Accuracy: 0.91 (+/- 0.06) [Random Forest]
    Accuracy: 0.92 (+/- 0.03) [Naive Bayes]
    Accuracy: 0.95 (+/- 0.03) [StackingClassifier]
    
    import matplotlib.pyplot as plt
    from mlxtend.plotting import plot_decision_regions
    import matplotlib.gridspec as gridspec
    import itertools
    
    gs = gridspec.GridSpec(2, 2)
    
    fig = plt.figure(figsize=(10,8))
    
    for clf, lab, grd in zip([clf1, clf2, clf3, sclf],
                             ['KNN',
                              'Random Forest',
                              'Naive Bayes',
                              'StackingClassifier'],
                             itertools.product([0, 1], repeat=2)):

        # Fit each model and draw its decision regions in a 2x2 grid
        clf.fit(X, y)
        ax = plt.subplot(gs[grd[0], grd[1]])
        fig = plot_decision_regions(X=X, y=y, clf=clf)
        plt.title(lab)

    plt.show()

    [decision-region plots for the four classifiers missing from the original post]


    Example 2: Classification using class probabilities as meta-features

    Alternatively, the class probabilities of the first-level classifiers can be used to train the meta-classifier (second-level classifier) by setting use_probas=True. If average_probas=True, the probabilities of the level-1 classifiers are averaged; if average_probas=False, the probabilities are stacked (recommended). For example, in a 3-class setting with 2 level-1 classifiers, these classifiers might make the following "probability" predictions for 1 training sample:

    • Classifier 1: [0.2, 0.5, 0.3]
    • Classifier 2: [0.3, 0.4, 0.4]

    If average_probas=True, the meta-features would be:

    • [0.25, 0.45, 0.35]

    In contrast, using average_probas=False results in k features, where k = n_classes * n_classifiers, obtained by stacking these level-1 probabilities (see the short numpy sketch below):

    • [0.2, 0.5, 0.3, 0.3, 0.4, 0.4]
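
    A short numpy sketch of the two options (values from the example above):

    import numpy as np

    # Hypothetical level-1 probability outputs for one sample (3 classes, 2 classifiers)
    p1 = np.array([0.2, 0.5, 0.3])
    p2 = np.array([0.3, 0.4, 0.4])

    print(np.mean([p1, p2], axis=0))   # average_probas=True  -> [0.25 0.45 0.35]
    print(np.concatenate([p1, p2]))    # average_probas=False -> [0.2 0.5 0.3 0.3 0.4 0.4]

    The full example:
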
    clf1 = KNeighborsClassifier(n_neighbors=1)
    clf2 = RandomForestClassifier(random_state=1)
    clf3 = GaussianNB()
    lr = LogisticRegression()
    sclf = StackingClassifier(classifiers=[clf1, clf2, clf3],
                              use_probas=True,
                              average_probas=False,
                              meta_classifier=lr)
    
    print('3-fold cross validation:\n')
    
    for clf, label in zip([clf1, clf2, clf3, sclf], 
                          ['KNN', 
                           'Random Forest', 
                           'Naive Bayes',
                           'StackingClassifier']):
    
        scores = model_selection.cross_val_score(clf, X, y, 
                                                  cv=3, scoring='accuracy')
        print("Accuracy: %0.2f (+/- %0.2f) [%s]" 
              % (scores.mean(), scores.std(), label))
    
    3-fold cross validation:
    Accuracy: 0.91 (+/- 0.01) [KNN]
    Accuracy: 0.91 (+/- 0.06) [Random Forest]
    Accuracy: 0.92 (+/- 0.03) [Naive Bayes]
    Accuracy: 0.94 (+/- 0.03) [StackingClassifier]
    

    Example 3: Stacked classification and GridSearch

    To set up a parameter grid for scikit-learn's GridSearch, we provide the estimator names in the parameter grid; in the special case of the meta-classifier, we add the 'meta-' prefix.

    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB 
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV
    from mlxtend.classifier import StackingClassifier
    
    # Initializing models
    
    clf1 = KNeighborsClassifier(n_neighbors=1)
    clf2 = RandomForestClassifier(random_state=1)
    clf3 = GaussianNB()
    lr = LogisticRegression()
    sclf = StackingClassifier(classifiers=[clf1, clf2, clf3], 
                              meta_classifier=lr)
    
    params = {'kneighborsclassifier__n_neighbors': [1, 5],
              'randomforestclassifier__n_estimators': [10, 50],
              'meta-logisticregression__C': [0.1, 10.0]}
    
    grid = GridSearchCV(estimator=sclf, 
                        param_grid=params, 
                        cv=5,
                        refit=True)
    grid.fit(X, y)
    
    cv_keys = ('mean_test_score', 'std_test_score', 'params')
    
    for r, _ in enumerate(grid.cv_results_['mean_test_score']):
        print("%0.3f +/- %0.2f %r"
              % (grid.cv_results_[cv_keys[0]][r],
                 grid.cv_results_[cv_keys[1]][r] / 2.0,
                 grid.cv_results_[cv_keys[2]][r]))
    
    print('Best parameters: %s' % grid.best_params_)
    print('Accuracy: %.2f' % grid.best_score_)
    

    The parameter-search setup here is the same as for the VotingClassifier covered earlier, but I personally still prefer to tune the level-1 classifiers one by one -> then train the level-1 models -> then search the parameters of the level-2 (meta) classifier -> then train the level-2 model. A sketch of that workflow follows.
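
    A stepwise sketch along those lines (parameter values are illustrative; X, y and the imports are as in the examples above):

    # 1) Tune each level-1 classifier on its own
    knn_grid = GridSearchCV(KNeighborsClassifier(),
                            {'n_neighbors': [1, 3, 5]}, cv=5).fit(X, y)
    rf_grid = GridSearchCV(RandomForestClassifier(random_state=1),
                           {'n_estimators': [10, 50]}, cv=5).fit(X, y)

    # 2) Build the stack from the tuned level-1 models
    sclf = StackingClassifier(classifiers=[knn_grid.best_estimator_,
                                           rf_grid.best_estimator_,
                                           GaussianNB()],
                              meta_classifier=LogisticRegression())

    # 3) Tune only the meta-classifier's parameters
    meta_grid = GridSearchCV(sclf,
                             {'meta-logisticregression__C': [0.1, 1.0, 10.0]},
                             cv=5, refit=True).fit(X, y)
    print(meta_grid.best_params_, meta_grid.best_score_)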

    If we plan to use the same algorithm more than once, all we need to do is add an extra numeric suffix in the parameter grid, as follows:

    from sklearn.model_selection import GridSearchCV
    
    # Initializing models
    
    clf1 = KNeighborsClassifier(n_neighbors=1)
    clf2 = RandomForestClassifier(random_state=1)
    clf3 = GaussianNB()
    lr = LogisticRegression()
    sclf = StackingClassifier(classifiers=[clf1, clf1, clf2, clf3],  # clf1 used twice
                              meta_classifier=lr)

    params = {'kneighborsclassifier-1__n_neighbors': [1, 5],
              'kneighborsclassifier-2__n_neighbors': [1, 5],   # numeric suffix added
              'randomforestclassifier__n_estimators': [10, 50],
              'meta-logisticregression__C': [0.1, 10.0]}
    
    grid = GridSearchCV(estimator=sclf, 
                        param_grid=params, 
                        cv=5,
                        refit=True)
    grid.fit(X, y)
    
    cv_keys = ('mean_test_score', 'std_test_score', 'params')
    
    for r, _ in enumerate(grid.cv_results_['mean_test_score']):
        print("%0.3f +/- %0.2f %r"
              % (grid.cv_results_[cv_keys[0]][r],
                 grid.cv_results_[cv_keys[1]][r] / 2.0,
                 grid.cv_results_[cv_keys[2]][r]))
    
    print('Best parameters: %s' % grid.best_params_)
    print('Accuracy: %.2f' % grid.best_score_)
    

    API reference

    StackingClassifier(classifiers, meta_classifier, use_probas=False, average_probas=False, verbose=0)

    Parameters

    • classifiers : array-like, shape = [n_classifiers]

      List of level-1 classifiers.

    • meta_classifier : object

      The level-2 classifier (meta-classifier).

    • use_probas : bool (default: False)

      If True, trains the meta-classifier on the predicted probabilities instead of the class labels.

    • average_probas : bool (default: False)

      If True, averages the probabilities as meta-features.

    • verbose : int, optional (default=0)

      Controls the verbosity of the building process.
      - verbose=0 (default): prints nothing
      - verbose=1: prints the number and name of the regressor being fitted
      - verbose=2: prints info about the parameters of the regressor being fitted
      - verbose>2: changes the verbose param of the underlying regressor to self.verbose - 2

    Attributes

    • clfs_ : list, shape = [n_classifiers]

      Fitted level-1 classifiers.

    • meta_clf_ : estimator

      Fitted level-2 classifier (meta-classifier).

    Methods

    fit(X, y)
    Fit the ensemble classifiers and the meta-classifier.

    Parameters

    • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

      Training vectors, where n_samples is the number of samples and n_features is the number of features.

    • y : array-like, shape = [n_samples]

      Target values.

    Returns
    self :Object


    fit_transform(X, y=None, **fit_params)

    Fit to the data, then transform it.

    Parameters

    • X : numpy array of shape [n_samples, n_features]

      Training set.

    • y : numpy array of shape [n_samples]

      Target values.

    Returns

    • X_new : numpy array of shape [n_samples, n_features_new]

      Transformed array.


    get_params(deep=True)

    Return estimator parameter names for GridSearch support.
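
    A quick sketch for discovering the exact parameter names GridSearch expects (clf1, clf2, clf3, and lr as defined in the examples above):

    sclf = StackingClassifier(classifiers=[clf1, clf2, clf3], meta_classifier=lr)
    for name in sorted(sclf.get_params().keys()):
        print(name)   # e.g. 'kneighborsclassifier__n_neighbors', 'meta-logisticregression__C'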


    predict(X)

    Predict class labels for X.

    Parameters

    • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

      Training vectors, where n_samples is the number of samples and n_features is the number of features.

    Returns

    • labels : array-like, shape = [n_samples]

      Predicted class labels.


    predict_proba(X)

    Predict class probabilities for X.

    Parameters

    • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

      Training vectors, where n_samples is the number of samples and n_features is the number of features.

    Returns

    • proba : array-like, shape = [n_samples, n_classes]

      Probability for each class per sample.
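
    A minimal usage sketch (assuming sclf and X, y from the examples above):

    # Fit the stack, then get hard labels and class probabilities
    sclf.fit(X, y)
    print(sclf.predict(X[:3]))         # predicted class labels, shape (3,)
    print(sclf.predict_proba(X[:3]))   # class probabilities, shape (3, n_classes)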


    score(X, y, sample_weight=None)

    Returns the mean accuracy on the given test data and labels.

    In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires, for each sample, that every label set be predicted correctly.

    Parameters

    • X : array-like, shape = (n_samples, n_features)

      Test samples.

    • y : array-like, shape = (n_samples) or (n_samples, n_outputs)

      True labels for X.

    • sample_weight : array-like, shape = [n_samples], optional

      Sample weights.

    Returns

    • score :float

      Mean accuracy of self.predict(X) wrt. y.
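
    A hold-out evaluation sketch (assuming X, y from the examples above; the split parameters are illustrative):

    from sklearn.model_selection import train_test_split

    # Score on unseen data rather than on the training set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    sclf.fit(X_train, y_train)
    print(sclf.score(X_test, y_test))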


    set_params(**params)

    Set the parameters of this estimator.

    The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that each component of a nested object can be updated.
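
    For example (a sketch; parameter names as used in the grids above):

    sclf.set_params(**{'kneighborsclassifier__n_neighbors': 5,
                       'meta-logisticregression__C': 1.0})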
