1. Principles
Random forest: random features, random samples.
Ensembling combines several models so they work together:
- the models compensate for each other's weaknesses
- random forest is one such ensemble
- it improves accuracy and guards against overfitting
- Random forest: <font color = red>many ordinary decision trees + random (bootstrap) sampling</font>
- Extra-trees (extremely randomized trees):
    - the extra randomness comes from the splits, not from bootstrap sampling
    - "As in random forests, a random subset of candidate features is used"
    - First source of randomness: random feature subsets. The wine dataset has 13 features; when building a tree, a random subset (e.g. 5) of those 13 is considered. <font color = red>Random feature sampling</font>
    - Second source of randomness: at each split, rather than picking the threshold with the largest information gain, a random threshold is drawn for each candidate feature
    - Extra-trees are very easy to use!
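The extra-trees idea above can be sketched with scikit-learn's `ExtraTreesClassifier` on the wine dataset (13 features, as mentioned). A minimal sketch; `max_features=5` and `random_state=0` are illustrative choices, not tuned values:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)  # 13 features

# max_features controls the random feature subset considered at each split;
# the random split thresholds are what make extra-trees "extremely" randomized
et = ExtraTreesClassifier(n_estimators=100, max_features=5, random_state=0)
print(cross_val_score(et, X, y, cv=5).mean())
```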
2. Implementation in code
import numpy as np
from sklearn import tree
from sklearn.linear_model import LogisticRegression
# ensemble learning module
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=112)

import warnings
warnings.filterwarnings('ignore')
score = 0
for i in range(300):
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    lr = LogisticRegression()
    lr.fit(X_train, y_train)
    score += lr.score(X_test, y_test) / 300
print('Logistic regression mean accuracy:', score)
Logistic regression mean accuracy: 0.961666666666666
score = 0
for i in range(300):
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    model = RandomForestClassifier()
    model.fit(X_train, y_train)
    score += model.score(X_test, y_test) / 300
print('Random forest mean accuracy:', score)
Random forest mean accuracy: 0.9501754385964911
from sklearn.tree import DecisionTreeClassifier

score = 0
for i in range(300):
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)
    score += model.score(X_test, y_test) / 300
print('Decision tree mean accuracy:', score)
Decision tree mean accuracy: 0.9448245614035076
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=112)
forest = RandomForestClassifier(n_estimators=100, criterion='gini')
forest.fit(X_train, y_train)
print('Random forest accuracy:', forest.score(X_test, y_test))
print(forest.predict_proba(X_test))
Random forest accuracy: 1.0
[[1. 0. 0. ]
[0.99 0.01 0. ]
[0. 0.93 0.07]
[1. 0. 0. ]
[0. 0.21 0.79]
[0. 1. 0. ]
[0. 0.01 0.99]
[1. 0. 0. ]
[0. 0.99 0.01]
[0. 0.01 0.99]
[0. 0.89 0.11]
[0.02 0.95 0.03]
[0. 0.04 0.96]
[0. 0.98 0.02]
[1. 0. 0. ]
[0. 0. 1. ]
[0. 0.06 0.94]
[0. 1. 0. ]
[0. 0.98 0.02]
[0. 0. 1. ]
[0. 1. 0. ]
[1. 0. 0. ]
[0. 0.14 0.86]
[0. 0. 1. ]
[0. 0.1 0.9 ]
[0. 0. 1. ]
[0. 0. 1. ]
[0. 0.04 0.96]
[0. 1. 0. ]
[0.99 0.01 0. ]
[1. 0. 0. ]
[0. 0.98 0.02]
[0. 0.99 0.01]
[0. 0. 1. ]
[0. 0.94 0.06]
[0. 0. 1. ]
[0. 0. 1. ]
[0. 0. 1. ]]
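The probabilities above are "soft" because the forest averages the class probabilities of its individual trees (exposed via `estimators_`). A minimal self-contained check of that averaging; the `random_state` values are arbitrary:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=112)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# average the per-tree class probabilities by hand and compare
manual = np.mean([t.predict_proba(X_test) for t in forest.estimators_], axis=0)
print(np.allclose(manual, forest.predict_proba(X_test)))
```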
X_train.shape
(112, 4)
import pandas as pd
pd.Series(y_train).value_counts()
0 42
1 37
2 33
dtype: int64
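Each tree trains on a bootstrap sample of the 112 training rows, so its per-class counts differ from the 42/37/33 above. A quick NumPy simulation of one bootstrap draw (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
y_train = np.repeat([0, 1, 2], [42, 37, 33])   # class counts from the cell above

# one bootstrap draw: 112 indices sampled with replacement
idx = rng.integers(0, len(y_train), size=len(y_train))
print(np.bincount(y_train[idx], minlength=3))  # counts vary from draw to draw
```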
Random forest: many ordinary decision trees + random (bootstrap) sampling.
plt.figure(figsize=(9, 9))
# first tree's bootstrap class counts: 39, 40, 33 = 112
_ = tree.plot_tree(forest[0], filled=True)

plt.figure(figsize=(7, 6))
# second tree's bootstrap class counts: 43, 33, 36 = 112
_ = tree.plot_tree(forest[1], filled=True)

plt.figure(figsize=(9, 9))
# third tree's bootstrap class counts: 40, 38, 34 = 112
_ = tree.plot_tree(forest[2], filled=True)

plt.figure(figsize=(9, 9))
# fourth tree's bootstrap class counts: 39, 39, 34 = 112
_ = tree.plot_tree(forest[3], filled=True)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=112)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
print('Decision tree accuracy:', model.score(X_test, y_test))
proba_ = model.predict_proba(X_test)
print(proba_)
Decision tree accuracy: 1.0
[[1. 0. 0.]
[1. 0. 0.]
[0. 1. 0.]
[1. 0. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[1. 0. 0.]
[0. 0. 1.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[1. 0. 0.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 1. 0.]
[1. 0. 0.]
[1. 0. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]]
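Note that the single tree's probabilities are all hard 0s and 1s: with default settings it grows until its leaves are pure, so each leaf votes for exactly one class, whereas the forest's averaging yields the smoother values seen earlier. A self-contained sketch of that contrast; the `random_state` values are arbitrary:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=112)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)

# a fully grown tree has pure leaves, so each row is one-hot
print(np.unique(proba))
```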