Naive Bayes
Model Introduction
Naive Bayes is a very simple yet highly practical classification model. Unlike the two previously introduced models, which are based on linear assumptions, the Naive Bayes classifier is built on Bayes' theorem.
A Naive Bayes classifier considers the conditional probability of each feature dimension given the class separately, then combines these probabilities to make a class prediction for the feature vector as a whole. The basic mathematical assumption of the model is therefore that the class-conditional probabilities of the individual feature dimensions are mutually independent.
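To make the independence assumption concrete (this restatement is not from the original text), the posterior probability factors as
P(y | x1, x2, ..., xn) ∝ P(y) * P(x1 | y) * P(x2 | y) * ... * P(xn | y)
so the classifier only needs to estimate each per-feature conditional probability P(xi | y) separately rather than the full joint distribution, and it predicts the class y that maximizes this product.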
Data Description
The Naive Bayes model has a wide range of practical applications, especially in text classification tasks. Here we use the classic 20 Newsgroups text corpus as the experimental data.
# Import the news data fetcher fetch_20newsgroups from sklearn.datasets.
from sklearn.datasets import fetch_20newsgroups
# Unlike the previously bundled datasets, fetch_20newsgroups downloads the data from the Internet on demand.
news = fetch_20newsgroups(subset='all')
# Inspect the size and details of the data.
print(len(news.data))
print(news.data[0])
18846
From: Mamatha Devineni Ratnam <mr47+@andrew.cmu.edu>
Subject: Pens fans reactions
Organization: Post Office, Carnegie Mellon, Pittsburgh, PA
Lines: 12
NNTP-Posting-Host: po4.andrew.cmu.edu
I am sure some bashers of Pens fans are pretty confused about the lack
of any kind of posts about the recent Pens massacre of the Devils. Actually,
I am bit puzzled too and a bit relieved. However, I am going to put an end
to non-PIttsburghers' relief with a bit of praise for the Pens. Man, they
are killing those Devils worse than I thought. Jagr just showed you why
he is much better than his regular season stats. He is also a lot
fo fun to watch in the playoffs. Bowman should let JAgr have a lot of
fun in the next couple of games since the Pens are going to beat the pulp out of Jersey anyway. I was very disappointed not to see the Islanders lose the final
regular season game. PENS RULE!!!
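As a quick optional check (not part of the original code), the fetched dataset also exposes the integer labels and their names, which lets us see which of the 20 categories the post above belongs to:
# Not in the original code: look up the category name of the first post shown above.
print(news.target_names[news.target[0]])
# List all 20 category names.
print(news.target_names)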
# Randomly sample 25% of the data as the test set.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(news.data, news.target, test_size=0.25, random_state=33)
# Import the text feature vectorization module from sklearn.feature_extraction.text. See Section 3.1.1.1, Feature Extraction, for details.
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X_train = vec.fit_transform(X_train)
X_test = vec.transform(X_test)
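As an optional sanity check (not part of the original code), the vectorized output is a sparse document-term matrix whose number of columns equals the size of the vocabulary learned from the training texts:
# Not in the original code: one row per document, one column per term.
print(X_train.shape)
# The vocabulary_ attribute maps each term to its column index.
print(len(vec.vocabulary_))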
# Import the Naive Bayes model from sklearn.naive_bayes.
from sklearn.naive_bayes import MultinomialNB
# Initialize the Naive Bayes model with the default configuration.
mnb = MultinomialNB()
# Estimate the model parameters from the training data.
mnb.fit(X_train, y_train)
# Predict the classes of the test samples and store the results in y_predict.
y_predict = mnb.predict(X_test)
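To illustrate how a single new document would be classified (this example is not in the original code, and the sample sentence is made up), the text must be transformed by the same fitted CountVectorizer before being passed to the model:
# Hypothetical example, not from the original code.
sample = ["NASA launched a new space probe to study the rings of Saturn."]
# Reuse the vectorizer fitted on the training data; do not call fit_transform again.
sample_vec = vec.transform(sample)
# predict returns an array of integer labels; map the first one back to a category name.
print(news.target_names[mnb.predict(sample_vec)[0]])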
# Import classification_report from sklearn.metrics for a detailed classification performance report.
from sklearn.metrics import classification_report
print('The accuracy of Naive Bayes Classifier is', mnb.score(X_test, y_test))
print(classification_report(y_test, y_predict, target_names=news.target_names))
The accuracy of Naive Bayes Classifier is 0.839770797963
precision recall f1-score support
alt.atheism 0.86 0.86 0.86 201
comp.graphics 0.59 0.86 0.70 250
comp.os.ms-windows.misc 0.89 0.10 0.17 248
comp.sys.ibm.pc.hardware 0.60 0.88 0.72 240
comp.sys.mac.hardware 0.93 0.78 0.85 242
comp.windows.x 0.82 0.84 0.83 263
misc.forsale 0.91 0.70 0.79 257
rec.autos 0.89 0.89 0.89 238
rec.motorcycles 0.98 0.92 0.95 276
rec.sport.baseball 0.98 0.91 0.95 251
rec.sport.hockey 0.93 0.99 0.96 233
sci.crypt 0.86 0.98 0.91 238
sci.electronics 0.85 0.88 0.86 249
sci.med 0.92 0.94 0.93 245
sci.space 0.89 0.96 0.92 221
soc.religion.christian 0.78 0.96 0.86 232
talk.politics.guns 0.88 0.96 0.92 251
talk.politics.mideast 0.90 0.98 0.94 231
talk.politics.misc 0.79 0.89 0.84 188
talk.religion.misc 0.93 0.44 0.60 158
avg / total 0.86 0.84 0.82 4712
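Beyond accuracy and the per-class report, a confusion matrix can show which categories get mixed up with one another, for example the comp.* groups above. This snippet is not part of the original code:
# Not in the original code: rows are true classes, columns are predicted classes.
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_predict)
print(cm.shape)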
Feature Analysis:
The Naive Bayes model is widely used for classifying massive volumes of Internet text. Thanks to its strong conditional independence assumption on the features, the number of parameters the model needs to estimate drops from exponential to linear in the number of features, which greatly reduces memory consumption and training time. For instance, with n binary features and k classes, modelling the full joint distribution would require on the order of k * 2^n parameters, whereas Naive Bayes needs only on the order of k * n. The same strong assumption is also its limitation: the model cannot capture relationships between features during training, so it tends to perform poorly on classification tasks where the features are strongly correlated.