
Machine Learning Code Implementation: Decision Trees (Part 1)

Author: lmem | Published 2017-06-01 20:54

Information Gain

Information entropy measures uncertainty: it is largest under a uniform distribution, where uncertainty is greatest. When a feature is used to partition a data set, the entropy after the split is lower than before it, and the difference is the information gain. Information gain therefore measures how strongly a feature influences the classification result.
Suppose the sample data set D contains a mixture of c classes. When building a decision tree, we choose some feature as a node of the tree based on the sample data. The entropy of the data set is computed as:

Info(D) = -\sum_{i=1}^{c} p_i \log_2 p_i

where D is the training data set, c is the number of classes, and p_i is the proportion of samples in D belonging to class i.
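As a quick worked example (using the five-sample toy data set from the code below, which has 2 'yes' and 3 'no' labels):

Info(D) = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} \approx 0.971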
For data set D, when feature A is chosen as the decision node, the entropy after splitting on A is Info_A(D), computed as:

Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \, Info(D_j)

where v is the number of distinct values of feature A and D_j is the subset of D taking the j-th value of A.

Information gain is the amount by which the entropy of data set D decreases after splitting on feature A:

Gain(A) = Info(D) - Info_A(D)
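Continuing the worked example: splitting on the first feature ('no surfacing') puts three samples in the value-1 branch (labels yes, yes, no) and two in the value-0 branch (both no), so

Info_A(D) = \frac{3}{5}\left(-\frac{2}{3}\log_2\frac{2}{3} - \frac{1}{3}\log_2\frac{1}{3}\right) + \frac{2}{5}\cdot 0 \approx 0.551

Gain(A) = 0.971 - 0.551 \approx 0.420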
Advantages:
Low computational cost; output that is easy to interpret; insensitive to missing intermediate values; can handle irrelevant feature data.
Disadvantages:
Prone to overfitting.
Applicable data types: nominal and numerical.

Code Implementation

# -*- coding: utf-8 -*-
from math import log
import operator


class Tree(object):

    def __init__(self):
        self.dataSet = self._init_data_set()

    def _init_data_set(self):
        """Get the data set"""
        # judge is a fish
        dataSet = [[1, 1, 'yes'],
                   [1, 1, 'yes'],
                   [1, 0, 'no'],
                   [0, 1, 'no'],
                   [0, 1, 'no']]
        # feature 1: can it survive without coming to the surface (always under water)?
        # feature 2: does it have flippers (duck-like feet)?
        labels = ['no surfacing', 'flippers']
        return dataSet, labels

    def calc_shannon(self, data_set):
        """Get the shanno value"""
        shannon_result = 0.0
        numEntries = len(data_set)
        label_count ={}
        for item in data_set:
            current_label = item[-1]
            if current_label not in label_count:
                label_count[current_label] = 0
            label_count[current_label] += 1
        for key in label_count:
            # Proportion
            prob = float(label_count[key])/numEntries
            shannon_result -= prob * log(prob, 2)
        return shannon_result
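    # Quick check (a sketch against the toy data set above, which has
    # 2 'yes' and 3 'no' labels):
    # calc_shannon(dataSet) == -(2/5)*log2(2/5) - (3/5)*log2(3/5) ≈ 0.9710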

    def split_data_set(self, data_set, axis, value):
        """Give axis and value return the data set
        :param axis index of an item
        :param the index of an item that response to value
        :return the data_set of item
        """
        ret_data_set = []
        for entry in data_set:
            if entry[axis] == value:
                entry_vec = entry[:axis]
                entry_vec.extend(entry[axis+1:])
                ret_data_set.append(entry_vec)
        return ret_data_set
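    # Example on the toy data set: split_data_set(dataSet, 0, 1) keeps the
    # three entries whose first feature is 1 and strips that column, giving
    # [[1, 'yes'], [1, 'yes'], [0, 'no']].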

    def choose_best_feature_to_split(self, data_set):
        """choose data set the best split feature"""
        num_features = len(data_set[0]) - 1
        base_entropy = self.calc_shannon(data_set)
        best_info_gain = 0.0
        best_feature = -1
        for i in range(num_features):
            feat_list = [example[i] for example in data_set]
            unique_vals = set(feat_list)
            new_entropy = 0.0
            for value in unique_vals:
                sub_data_set = self.split_data_set(data_set, i, value)
                prob = len(sub_data_set)/float(len(data_set))
                new_entropy += prob * self.calc_shannon(sub_data_set)
            info_gain = base_entropy - new_entropy
            if info_gain > best_info_gain:
                best_info_gain = info_gain
                best_feature = i
        return best_feature
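    # On the toy data set this returns 0 ('no surfacing'): its information
    # gain (≈ 0.42, see the worked example above) beats 'flippers' (≈ 0.17).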

    def _majority_count(self, class_list):
        """ when the only have one labels we count the most label as result"""
        class_count = {}
        for vote in class_list:
            if vote not in class_count.keys():
                class_count[vote] = 0
            class_count[vote] +=1
        sorted_class_list = sorted(class_count.iteritems(), key=operator.itemgetter(1), reverse=True)
        return sorted_class_list[0][0]

    def create_tree(self, data_set, labels):
        """ the main func to create a tree"""
        class_list = [example[-1] for example in data_set]
        # stop when all samples share one class, or when no features remain
        if class_list.count(class_list[0]) == len(class_list):
            return class_list[0]
        if len(data_set[0]) == 1:
            return self._majority_count(class_list)
        best_feature_index = self.choose_best_feature_to_split(data_set)
        best_feature_label = labels[best_feature_index]
        my_tree = {best_feature_label:{}}
        feat_values = [example[best_feature_index] for example in data_set]
        unique_values = set(feat_values)
        # copy labels minus the used feature, so the caller's list is not mutated
        sub_labels = labels[:best_feature_index]
        sub_labels.extend(labels[best_feature_index + 1:])
        for value in unique_values:
            my_tree[best_feature_label][value] = self.create_tree(
                self.split_data_set(data_set, best_feature_index, value), sub_labels)
        return my_tree


class ClassifyByTree(object):
    @classmethod
    def classify(cls, input_tree, labels, test_vec):
        first_str = next(iter(input_tree))  # the root feature label
        second_dict = input_tree[first_str]
        index = labels.index(first_str)
        class_label = None
        for key in second_dict:
            if test_vec[index] == key:
                if isinstance(second_dict[key], dict):
                    class_label = cls.classify(second_dict[key], labels, test_vec)
                else:
                    class_label = second_dict[key]
        return class_label

if __name__ == "__main__":
    trees = Tree()
    # print(trees.calc_shannon(trees.dataSet[0]))
    # print(trees.split_data_set(trees.dataSet[0], 0, 1))
    my_tree = trees.create_tree(trees.dataSet[0], trees.dataSet[1])
    # print(trees.dataSet[1])
    print(ClassifyByTree.classify(my_tree, trees.dataSet[1], [1, 0]))
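For reference, on the toy data set create_tree yields {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}, and the script prints 'no' for the test vector [1, 0] (an animal that can survive without surfacing but has no flippers).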

Java implementation: http://blog.csdn.net/androidlushangderen/article/details/42395865
