Statistical Learning Methods (统计学习方法) by Li Hang: Decision Tree Models, Python and sklearn Implementation

Author: 蒜苗爱妞妞 | Published 2017-12-13 17:37
    • Li Hang
      A decision tree is a basic method for classification and regression.
      A decision tree has a tree structure; in classification, it represents the process of classifying instances based on their features.
      It can be viewed as a set of if-then rules, or as a conditional probability distribution defined on the feature space and the class space.
      Its main advantages are that the model is readable and classification is fast.
      During learning, a decision tree model is built from the training data by minimizing a loss function; during prediction, new data is classified with the learned tree.
      Decision tree learning usually consists of three parts: feature selection, tree generation, and tree pruning.
      The main ideas behind decision trees come from the ID3 algorithm proposed by Quinlan in 1986 and his C4.5 algorithm of 1993, and from the CART algorithm proposed by Breiman et al. in 1984.

    • Some advantages of decision trees:
      - Easy to understand and interpret; trees can be visualized.
      - Almost no data preprocessing is required, whereas other methods often need normalization, dummy variables, and removal of missing values. (Decision trees here still do not support missing values.)
      - The cost of using a tree (e.g., for prediction) is logarithmic in the number of training data points.
      - Handles both numerical and categorical variables; most other methods are suited to only one kind.
      - Handles multi-output problems.
      - A white-box model: an observed situation is easy to express as a logical rule; with a black-box model (e.g., an artificial neural network) the result is much harder to interpret.
      - The model can be validated with statistical tests, which increases its credibility.
      - Performs reasonably well even when the assumptions of the true underlying model are violated.

    • Some disadvantages of decision trees:
      - Decision tree learning can build an overly complex tree that predicts poorly on new data, i.e., it overfits. Pruning (not supported at the time of writing), setting the minimum number of samples required at a leaf node, or limiting the maximum depth of the tree helps avoid this (see the sklearn sketch below).
      - Decision trees can be unstable: even small variations in the data may produce a completely different tree. This is mitigated by using decision trees within an ensemble.
      - Learning an optimal decision tree is NP-complete under several aspects of optimality and even for simple concepts, so practical decision tree algorithms rely on heuristics such as greedy algorithms that make a locally optimal choice at each node; these cannot guarantee a globally optimal tree. Random sampling of samples and features can reduce the overall bias.
      - Some concepts are hard to learn because decision trees do not express them well, e.g., XOR, parity, or multiplexer problems.
      - If some classes dominate, the learned tree will be biased toward them, so it is recommended to balance the dataset before training.
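    A minimal pre-pruning sketch with sklearn (the parameter values here are arbitrary, for illustration only):

    from sklearn.tree import DecisionTreeClassifier

    # cap the depth and require a minimum leaf size so the tree
    # cannot grow complex enough to memorize the training set
    clf = DecisionTreeClassifier(max_depth=4, min_samples_leaf=5)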

    • Data (the loan-application sample table from Example 5.1)

    import numpy as np

    # Training data; this appears to be the loan-application table from
    # Example 5.1 of the book, with each categorical level coded as a
    # small integer (columns: age, has job, owns house, credit rating).
    data = np.array([[1,2,2,3],
                     [1,2,2,2],
                     [1,1,2,2],
                     [1,1,1,3],
                     [1,2,2,3],
                     [2,2,2,3],
                     [2,2,2,2],
                     [2,1,1,2],
                     [2,2,1,1],
                     [2,2,1,1],
                     [3,2,1,1],
                     [3,2,1,2],
                     [3,1,2,2],
                     [3,1,2,1],
                     [3,2,2,3]])
    # class labels (1 = approve the application, 0 = reject)
    label = np.array([0,0,1,1,0,0,0,1,1,1,1,1,1,1,0])
    # a new applicant to classify
    target = [3,1,2,1]
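    For reference, the three implementations below rank candidate splits with these quantities: the empirical entropy H(D) = -Σ_k p_k·log2(p_k), the information gain g(D,A) = H(D) - H(D|A) (ID3), the gain ratio g_R(D,A) = g(D,A) / H_A(D) (C4.5), and the Gini index (CART). A minimal sanity-check sketch on the data above (the helper names entropy and info_gain are mine, not from the book):

    import numpy as np

    def entropy(y):
        # empirical entropy H(D) = -sum_k p_k * log2(p_k)
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def info_gain(x, y):
        # information gain g(D, A) = H(D) - H(D|A)
        h_cond = 0.0
        for v in np.unique(x):
            mask = (x == v)
            h_cond += mask.mean() * entropy(y[mask])
        return entropy(y) - h_cond

    print(entropy(label))   # about 0.971 for this data
    # per the book, the third column should have the largest gain (about 0.420)
    print([info_gain(data[:, j], label) for j in range(4)])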

    Python code: ID3 algorithm implementation for Example 5.1

    import numpy as np

    class Tree(object):
        def __init__(self, node_type, Class=None, features=None):
            self.node_type = node_type
            self.dict = {}                 # child subtrees, keyed by feature value
            self.Class = Class             # class label (leaf nodes only)
            self.feature_index = features  # splitting feature's index in the full feature vector

        def add_tree(self, val, tree):
            self.dict[val] = tree

        def predict(self, features):
            if self.node_type == 'leaf':
                return self.Class
            tree = self.dict[features[self.feature_index]]
            return tree.predict(features)

    class Id3_tree(object):
        def __init__(self, data, label, features, epsilon):
            self.leaf = 'leaf'
            self.internal = 'internal'
            self.epsilon = epsilon         # information-gain threshold for stopping
            self.root = self.__build(data, label, features)

        def __build(self, data, labels, features):
            label_kinds = np.unique(labels)
            # all samples in one class: return a leaf
            if len(label_kinds) == 1:
                return Tree(self.leaf, label_kinds[0])
            # majority class, used when no split is informative enough
            max_class, _ = max([(k, list(labels).count(k)) for k in label_kinds],
                               key=lambda x: x[1])
            features_num = len(features)
            # no features left: return a leaf with the majority class
            if features_num == 0:
                return Tree(self.leaf, max_class)

            Hd = self.__calculate_hd(labels)
            Hda = self.__calculate_hda(data, labels, features_num)
            Gda = Hd - Hda                         # information gain per remaining feature

            best_local = int(np.argmax(Gda))       # column index in the current (reduced) data
            best_feature = features[best_local]    # index in the original feature vector
            if Gda[best_local] < self.epsilon:
                return Tree(self.leaf, Class=max_class)
            # drop the chosen column and recurse on each of its values
            data_tmp = np.hstack((data[:, :best_local], data[:, best_local + 1:]))
            sub_features = features[:best_local] + features[best_local + 1:]
            tree = Tree(self.internal, features=best_feature)
            for feature in np.unique(data[:, best_local]):
                dx = np.where(data[:, best_local] == feature)
                sub_tree = self.__build(data_tmp[dx[0]], labels[dx[0]], sub_features)
                tree.add_tree(feature, sub_tree)
            return tree

        def __calculate_hd(self, labels):
            # empirical entropy H(D)
            Hd = 0
            for label in np.unique(labels):
                p = float(list(labels).count(label)) / float(len(labels))
                Hd -= p * np.log2(p)
            return Hd

        def __calculate_hda(self, data, labels, features_num):
            # conditional entropy H(D|A) for each feature column
            Hda = np.zeros(features_num)
            for feature_index in range(features_num):
                for feature in np.unique(data[:, feature_index]):
                    dx = np.where(data[:, feature_index] == feature)
                    p = float(len(dx[0])) / float(len(labels))
                    Hda[feature_index] += p * self.__calculate_hd(labels[dx])
            return Hda

    id3_tree = Id3_tree(data, label, list(range(4)), 0.1)
    prediction = id3_tree.root.predict(target)
    print('Target belongs to class %s' % prediction)
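    As a quick cross-check, sklearn can be fit on the same arrays. This is a hedged comparison: DecisionTreeClassifier is an optimized CART with binary splits, not ID3, so the tree shape differs even with criterion='entropy', but its prediction on this small dataset should agree.

    from sklearn.tree import DecisionTreeClassifier, export_text

    clf = DecisionTreeClassifier(criterion='entropy')  # entropy splitting, as in ID3
    clf.fit(data, label)
    print(export_text(clf))        # text dump of the learned tree
    print(clf.predict([target]))   # compare with the ID3 prediction above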
    

    Python code: C4.5 algorithm implementation for Example 5.1

    import numpy as np

    class Tree(object):
        def __init__(self, node_type, Class=None, features=None):
            self.node_type = node_type
            self.dict = {}                 # child subtrees, keyed by feature value
            self.Class = Class             # class label (leaf nodes only)
            self.feature_index = features  # splitting feature's index in the full feature vector

        def add_tree(self, val, tree):
            self.dict[val] = tree

        def predict(self, features):
            if self.node_type == 'leaf':
                return self.Class
            tree = self.dict[features[self.feature_index]]
            return tree.predict(features)

    class C45_tree(object):
        def __init__(self, data, label, features, epsilon):
            self.leaf = 'leaf'
            self.internal = 'internal'
            self.epsilon = epsilon         # gain-ratio threshold for stopping
            self.root = self.__build(data, label, features)

        def __build(self, data, labels, features):
            label_kinds = np.unique(labels)
            if len(label_kinds) == 1:
                return Tree(self.leaf, label_kinds[0])
            # majority class, used when no split is informative enough
            max_class, _ = max([(k, list(labels).count(k)) for k in label_kinds],
                               key=lambda x: x[1])
            features_num = len(features)
            if features_num == 0:
                return Tree(self.leaf, max_class)

            Hd = self.__calculate_hd(labels)
            Hda, Ha = self.__calculate_hda_ha(data, labels, features_num)
            Gda = Hd - Hda
            # gain ratio g_R(D, A) = g(D, A) / H_A(D); H_A(D) is zero only for a
            # constant column, which does not occur in this data
            Grda = Gda / Ha
            best_local = int(np.argmax(Grda))      # column index in the current data
            best_feature = features[best_local]    # index in the original feature vector
            if Grda[best_local] < self.epsilon:
                return Tree(self.leaf, Class=max_class)
            data_tmp = np.hstack((data[:, :best_local], data[:, best_local + 1:]))
            sub_features = features[:best_local] + features[best_local + 1:]
            tree = Tree(self.internal, features=best_feature)
            for feature in np.unique(data[:, best_local]):
                dx = np.where(data[:, best_local] == feature)
                sub_tree = self.__build(data_tmp[dx[0]], labels[dx[0]], sub_features)
                tree.add_tree(feature, sub_tree)
            return tree

        def __calculate_hd(self, labels):
            # empirical entropy H(D)
            Hd = 0
            for label in np.unique(labels):
                p = float(list(labels).count(label)) / float(len(labels))
                Hd -= p * np.log2(p)
            return Hd

        def __calculate_hda_ha(self, data, labels, features_num):
            # conditional entropy H(D|A) and split entropy H_A(D) per feature column
            Hda = np.zeros(features_num)
            Ha = np.zeros(features_num)
            for feature_index in range(features_num):
                for feature in np.unique(data[:, feature_index]):
                    dx = np.where(data[:, feature_index] == feature)
                    p = float(len(dx[0])) / float(len(labels))
                    Hda[feature_index] += p * self.__calculate_hd(labels[dx])
                    Ha[feature_index] -= p * np.log2(p)
            return Hda, Ha

    c45_tree = C45_tree(data, label, list(range(4)), 0.1)
    prediction = c45_tree.root.predict(target)
    print('Target belongs to class %s' % prediction)
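    Information gain favors features with many distinct values; C4.5 corrects this by dividing by H_A(D), the entropy of the partition induced by the feature itself. A minimal sketch reusing entropy and info_gain from the sanity-check above (gain_ratio is my own helper name):

    def gain_ratio(x, y):
        # gain ratio g_R(D, A) = g(D, A) / H_A(D); undefined when the feature
        # column is constant (H_A(D) = 0), a case this toy data avoids
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        h_a = -np.sum(p * np.log2(p))
        return info_gain(x, y) / h_a

    print([gain_ratio(data[:, j], label) for j in range(4)])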
    

    Python code: CART algorithm implementation for Example 5.1

    import numpy as np

    class Tree(object):
        def __init__(self, node_type, Class=None, feature_index=None, feature=None):
            self.node_type = node_type
            self.dict = {}                      # two children: the tested value, and -1 for "everything else"
            self.Class = Class                  # class label (leaf nodes only)
            self.feature_index = feature_index  # splitting feature's index in the full feature vector
            self.feature = feature              # the feature value tested at this node

        def add_tree(self, val, tree):
            self.dict[val] = tree

        def predict(self, features):
            if self.node_type == 'leaf':
                return self.Class
            # binary split: samples matching the tested value go one way, the rest go to -1
            if features[self.feature_index] == self.feature:
                tree = self.dict[self.feature]
            else:
                tree = self.dict[-1]
            return tree.predict(features)

    class Cart_tree(object):
        def __init__(self, data, label, features):
            self.leaf = 'leaf'
            self.internal = 'internal'
            self.root = self.__build(data, label, features)

        def __build(self, data, labels, features):
            label_kinds = np.unique(labels)
            if len(label_kinds) == 1:
                return Tree(self.leaf, label_kinds[0])
            features_num = len(features)
            if features_num == 0:
                # no features left: return a leaf with the majority class
                max_class, _ = max([(k, list(labels).count(k)) for k in label_kinds],
                                   key=lambda x: x[1])
                return Tree(self.leaf, max_class)

            # find the (feature, value) pair with the smallest Gini index after the split
            Ga = self.__calculate_ga(data, labels, features_num)
            ga_fea_min = min(Ga[0])
            fea_local = list(Ga[0]).index(ga_fea_min)
            ga_fea_index = 0
            for dx, gai in enumerate(Ga[1:]):
                ga_fea_min_tmp = min(gai)
                if ga_fea_min_tmp < ga_fea_min:
                    fea_local = list(gai).index(ga_fea_min_tmp)
                    ga_fea_min = ga_fea_min_tmp
                    ga_fea_index = dx + 1
            data_tmp = np.hstack((data[:, :ga_fea_index], data[:, ga_fea_index + 1:]))
            sub_features = features[:ga_fea_index] + features[ga_fea_index + 1:]
            split_value = np.unique(data[:, ga_fea_index])[fea_local]
            tree = Tree(self.internal, feature_index=features[ga_fea_index], feature=split_value)
            # left child: samples with the chosen value
            dx_y = np.where(data[:, ga_fea_index] == split_value)
            tree.add_tree(split_value, self.__build(data_tmp[dx_y], labels[dx_y], sub_features))
            # right child: all other samples
            dx_n = np.where(data[:, ga_fea_index] != split_value)
            tree.add_tree(-1, self.__build(data_tmp[dx_n], labels[dx_n], sub_features))
            return tree

        def __calculate_q(self, labels):
            # Gini index Gini(D) = sum_k p_k * (1 - p_k) = 1 - sum_k p_k^2
            q = 0
            for label in np.unique(labels):
                p = float(list(labels).count(label)) / float(len(labels))
                q += p * (1 - p)
            return q

        def __calculate_ga(self, data, labels, features_num):
            # Gini(D, A=a) = |D1|/|D| * Gini(D1) + |D2|/|D| * Gini(D2)
            # for every value a of every remaining feature A
            Ga = []
            for feature_index in range(features_num):
                feature_s = np.unique(data[:, feature_index])
                Gai = np.zeros(len(feature_s))
                for index, feature in enumerate(feature_s):
                    dx_y = np.where(data[:, feature_index] == feature)
                    p = float(len(dx_y[0])) / float(len(labels))
                    q_y = self.__calculate_q(labels[dx_y])
                    dx_n = np.where(data[:, feature_index] != feature)
                    q_n = self.__calculate_q(labels[dx_n])
                    Gai[index] = p * q_y + (1 - p) * q_n
                Ga.append(Gai)
            return Ga

    cart_tree = Cart_tree(data, label, list(range(4)))
    prediction = cart_tree.root.predict(target)
    print('Target belongs to class %s' % prediction)
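    The Gini index can also be checked in isolation: CART picks the (feature, value) pair minimizing the weighted Gini of the resulting binary partition. A minimal stand-alone sketch (gini and gini_split are my own helper names):

    import numpy as np

    def gini(y):
        # Gini(D) = 1 - sum_k p_k^2  (equivalent to sum_k p_k * (1 - p_k))
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def gini_split(x, y, value):
        # Gini(D, A=a): weighted Gini of the partition x == value vs. x != value
        mask = (x == value)
        return mask.mean() * gini(y[mask]) + (1 - mask.mean()) * gini(y[~mask])

    # e.g. the binary split "third column == 1" on the loan data
    print(gini_split(data[:, 2], label, 1))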
    

    The sklearn code uses the Kaggle MNIST handwritten-digit data, with the features reduced to six dimensions by PCA.

    # -*- coding: utf-8 -*-
    """
    An example of classification with sklearn's decision tree (DT),
    using the Kaggle handwritten-digit (MNIST) dataset.
    """
    import pandas as pd
    import numpy as np
    from sklearn import tree
    from sklearn.decomposition import PCA

    # Load a dataset. For the test set, reuse the PCA fitted on the training
    # set (transform only); refitting PCA on the test data would project the
    # two sets into different spaces.
    def load_data(filename, n, mode, pca=None):
        data_pd = pd.read_csv(filename)
        data = np.asarray(data_pd)
        if not mode == 'test':
            pca = PCA(n_components=n)
            dataset = pca.fit_transform(data[:, 1:])  # first column is the label
            return dataset, data[:, 0], pca
        else:
            dataset = pca.transform(data)             # the test file has no label column
            return dataset, None, pca

    def main(train_data_path, test_data_path, n_dim):
        train_data, train_label, pca = load_data(train_data_path, n_dim, 'train')
        print("Train set: " + repr(len(train_data)))
        test_data, _, _ = load_data(test_data_path, n_dim, 'test', pca)
        print("Test set: " + repr(len(test_data)))
        dt = tree.DecisionTreeClassifier()
        # fit on the training set
        dt.fit(train_data, train_label)
        # training accuracy
        score = dt.score(train_data, train_label)
        print(">Training accuracy = " + repr(score))
        predictions = []
        for index in range(len(test_data)):
            # predicted class for one sample
            result = dt.predict([test_data[index]])
            # class-probability array for the same sample
            predict2 = dt.predict_proba([test_data[index]])
            predictions.append([index + 1, result[0]])
            print(">Index: %s, predicted = %s   p%s" % (index + 1, result[0], predict2))
        columns = ['ImageId', 'Label']
        save_file = pd.DataFrame(columns=columns, data=predictions)
        save_file.to_csv('m.csv', index=False, encoding="utf-8")

    if __name__ == "__main__":
        train_data_path = 'train.csv'
        # the original pointed this at train.csv; the test branch expects the
        # unlabeled Kaggle test file
        test_data_path = 'test.csv'
        n_dim = 6
        main(train_data_path, test_data_path, n_dim)
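    A more idiomatic way to couple the PCA step with the classifier is an sklearn Pipeline, which guarantees the test data is projected with the PCA fitted on the training data. A minimal sketch (X_train, y_train, and X_test stand for the raw arrays loaded from the CSVs; they are not defined above):

    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier

    # PCA is fit on the training data only; predict() reuses the same projection
    pipe = make_pipeline(PCA(n_components=6), DecisionTreeClassifier())
    pipe.fit(X_train, y_train)
    pred = pipe.predict(X_test)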
    

    Exercises
