1. Information Entropy
What is information entropy? In high school we all met the second law of thermodynamics: entropy measures how disordered a system is, and the larger the entropy, the more disordered the system. Information entropy plays the same role for data: it measures how "pure" a dataset is. Formally, if the classes in a dataset occur with probabilities p1, ..., pk, its information entropy is H(x) = -(p1·log2(p1) + ... + pk·log2(pk)); it is 0 for a perfectly pure set and largest when the classes are evenly mixed.

2. Feature Selection
We know ID3 is an algorithm for building decision trees, but how does it decide which feature to split on at each step? This is where information gain comes in.
Let's start with an example. Suppose the dataset is as follows:
| outlook | temperature | humidity | windy | target |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | N |
| 0 | 0 | 0 | 1 | N |
| 1 | 0 | 0 | 0 | Y |
| 2 | 1 | 0 | 0 | Y |
| 2 | 2 | 1 | 0 | Y |
| 2 | 2 | 1 | 1 | N |
| 1 | 2 | 1 | 1 | Y |
The target column contains 3 "N"s and 4 "Y"s, so the entropy of the whole dataset is:
H(x) = -(3/7·log2(3/7) + 4/7·log2(4/7)) = 0.985228136034
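As a quick sanity check, this number is easy to reproduce with a throwaway snippet (plain Python, separate from the full implementation in section 4):

```python
from math import log2

# class proportions from the table above: 3 "N"s and 4 "Y"s out of 7 rows
probs = [3 / 7, 4 / 7]
print(-sum(p * log2(p) for p in probs))  # ≈ 0.9852, same as H(x) above

# a perfectly pure set (a single class) has entropy 0
print(-sum(p * log2(p) for p in [1.0]))  # -0.0
```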
Conditional Entropy
Next, compute the conditional entropy of the target given each feature: the average entropy of the subsets produced by splitting on that feature, weighted by subset size. Here H(outlook=0) denotes the entropy of the target among the rows where outlook = 0, and so on.
H(x|outlook) = 2/7·H(outlook=0) + 2/7·H(outlook=1) + 3/7·H(outlook=2) = 0.393555357452
H(x|temperature) = 3/7·H(temperature=0) + 1/7·H(temperature=1) + 3/7·H(temperature=2) = 0.787110714904
H(x|humidity) = 4/7·H(humidity=0) + 3/7·H(humidity=1) = 0.96498392888
H(x|windy) = 4/7·H(windy=0) + 3/7·H(windy=1) = 0.857142857143
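The same pattern works for every feature. Below is a minimal sketch on plain Python lists; the helper names entropy and conditional_entropy are mine, not part of the pandas implementation in section 4:

```python
from math import log2

def entropy(labels):
    # information entropy of a list of class labels
    n = len(labels)
    return -sum(labels.count(v) / n * log2(labels.count(v) / n)
                for v in set(labels))

def conditional_entropy(feature, labels):
    # H(x|feature): entropy of each subset after splitting on the
    # feature, weighted by the subset's share of the rows
    n = len(labels)
    total = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        total += len(subset) / n * entropy(subset)
    return total

outlook = [0, 0, 1, 2, 2, 2, 1]              # the outlook column from the table
target = ['N', 'N', 'Y', 'Y', 'Y', 'N', 'Y']
print(conditional_entropy(outlook, target))  # ≈ 0.3936, matching H(x|outlook)
```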
Information Gain
InfoGain(outlook) = H(x) - H(x|outlook) = 0.591672778582
InfoGain(temperature) = H(x) - H(x|temperature) = 0.19811742113
InfoGain(humidity) = H(x) - H(x|humidity) = 0.0202442071538
InfoGain(windy) = H(x) - H(x|windy) = 0.128085278891
From the calculations above, outlook has the largest information gain, so it becomes the first split. Each subsequent split is chosen the same way, recursively, and splitting stops once every feature has been used or all samples in a node share the same label.
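In code, the selection step is just an argmax over the gains. Continuing the sketch above (it reuses the entropy and conditional_entropy helpers, with the feature columns copied from the example table):

```python
features = {
    'outlook':     [0, 0, 1, 2, 2, 2, 1],
    'temperature': [0, 0, 0, 1, 2, 2, 2],
    'humidity':    [0, 0, 0, 0, 1, 1, 1],
    'windy':       [0, 1, 0, 0, 0, 1, 1],
}
target = ['N', 'N', 'Y', 'Y', 'Y', 'N', 'Y']

base = entropy(target)
gains = {name: base - conditional_entropy(col, target)
         for name, col in features.items()}
print(gains)                      # outlook ≈ 0.5917, temperature ≈ 0.1981, ...
print(max(gains, key=gains.get))  # 'outlook'
```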
3. Limitations of ID3
- ID3 only handles discrete (categorical) features; it cannot split on continuous values.
- ID3 cannot handle attributes with missing values.
- Information gain itself is flawed: it tends to be larger for attributes with many distinct values, so ID3 is biased toward features with many branches, which are not necessarily the best splits (see the demonstration after this list).
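A concrete demonstration of the last point, using a hypothetical id-like column added to the example table. Each id value appears exactly once, so every branch after the split holds a single (trivially pure) row, the conditional entropy is 0, and the gain equals the full H(x), beating every real feature:

```python
from math import log2

target = ['N', 'N', 'Y', 'Y', 'Y', 'N', 'Y']
n = len(target)
base = -sum(target.count(v) / n * log2(target.count(v) / n)
            for v in set(target))

# splitting on a unique-per-row id gives 7 pure one-row subsets,
# so the conditional entropy is 0 and the gain is the whole base entropy
gain_for_id_column = base - 0.0
print(gain_for_id_column)  # ≈ 0.9852, far above outlook's 0.5917
```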
4. Python Code (github)
Talk is cheap, so here is the code:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2018/6/4 10:12 PM
# @Author : liujiatian
# @File : decisionTree.py
from math import log

import pandas as pd


def createDataSet():
    dataSet = [[0, 0, 0, 0, 'N'],
               [0, 0, 0, 1, 'N'],
               [1, 0, 0, 0, 'Y'],
               [2, 1, 0, 0, 'Y'],
               [2, 2, 1, 0, 'Y'],
               [2, 2, 1, 1, 'N'],
               [1, 2, 1, 1, 'Y']]
    df = pd.DataFrame(data=dataSet, columns=[
        'outlook', 'temperature', 'humidity', 'windy', 'target'])
    return df


def calcShannonEnt(dataSet):
    '''
    Compute the information entropy of the label (last) column.
    '''
    entropy = 0.0
    length = len(dataSet)
    # e.g. counter = {'N': 3, 'Y': 4}
    counter = dict(dataSet.iloc[:, -1].value_counts())
    for count in counter.values():
        prob = count / length
        entropy -= prob * log(prob, 2)
    return entropy


def chooseFeatureByInfoGain(dataSet):
    '''
    ID3: choose the split feature with the largest information gain.
    :param dataSet: DataFrame whose last column is the label
    :return: column index of the best feature, or -1 if none helps
    '''
    if dataSet.empty:
        return -1
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    featureCount = dataSet.shape[1] - 1  # number of feature columns
    dataSetRows = len(dataSet)
    for i in range(featureCount):
        # distinct values of this feature
        uniqueVals = set(dataSet.iloc[:, i])
        # conditional entropy H(x|feature)
        splitInfo = 0.0
        for value in uniqueVals:
            subDataFrame = dataSet[dataSet.iloc[:, i] == value]
            prob = len(subDataFrame) / dataSetRows
            splitInfo += prob * calcShannonEnt(subDataFrame)
        infoGain = baseEntropy - splitInfo
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature


def createTree(dataSet):
    '''
    Build the decision tree recursively.
    :return: {'outlook': {0: 'N', 1: 'Y', 2: {'windy': {0: 'Y', 1: 'N'}}}}
    '''
    # stop splitting if all labels are identical
    if len(dataSet.iloc[:, -1].value_counts()) == 1:
        return dataSet.iloc[:, -1].value_counts().index[0]
    # if only one feature column is left, take the majority label
    if len(dataSet.columns) == 2:
        return dataSet.iloc[:, -1].mode()[0]
    # ID3: pick the feature with the highest information gain
    bestFeatureIndex = chooseFeatureByInfoGain(dataSet)
    if bestFeatureIndex == -1:
        # no feature yields positive gain: fall back to the majority label
        return dataSet.iloc[:, -1].mode()[0]
    bestFeature = dataSet.columns[bestFeatureIndex]
    myTree = {bestFeature: {}}
    for item, subDataSet in dataSet.groupby(bestFeature):
        # drop the used column so it is not considered again further down
        myTree[bestFeature][item] = createTree(subDataSet.drop(columns=bestFeature))
    return myTree


if __name__ == '__main__':
    dataSet = createDataSet()
    print(dataSet)
    print(createTree(dataSet))
```
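Running the script prints the DataFrame and then the nested-dict tree shown in the createTree docstring, {'outlook': {0: 'N', 1: 'Y', 2: {'windy': {0: 'Y', 1: 'N'}}}}: split on outlook first, and when outlook is 2, split again on windy.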