Theory
Machine learning samples are generally feature vectors, but raw data often arrives in non-featurized form as well; the most common case is text.
Structured data
When a feature can only take a small, finite set of strings, it can be treated as structured data. The usual way to handle such a feature is to convert it into several one-hot features: for example, a feature that takes only the three strings a, b, and c can be encoded as the three binary features 001, 010, and 100.
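As a minimal sketch of this conversion, scikit-learn's OneHotEncoder does the one-hot expansion directly (the values a/b/c below mirror the example above and are purely illustrative):

```python
# One-hot encoding a single three-valued string feature with scikit-learn.
from sklearn.preprocessing import OneHotEncoder

x = [["a"], ["b"], ["c"], ["a"]]      # one string feature, four samples
enc = OneHotEncoder()
m = enc.fit_transform(x).toarray()    # sparse matrix -> dense 0/1 array
print(enc.categories_)                # the three learned categories: a, b, c
print(m)                              # each row is a 3-dimensional indicator vector
```

Each original string value becomes one binary column, so the single feature expands into three 0/1 features.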
Unstructured data
When a feature is simply free-form text, the bag-of-words approach can be used. It ignores word order and considers only how often each word occurs.
- count vectorizer: considers only each word's occurrence count
- tfidf vectorizer: in addition to a word's frequency, it weights by the inverse of how often the word appears across the corpus, which can be understood as down-weighting words that occur frequently in every sample
Frequently occurring but uninformative words, such as "the" and "a", can be designated as stop words to eliminate their interference with the results.
Code implementation
Loading the dataset
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(news.data,news.target,test_size=0.25,random_state=33)
print(len(x_train),len(x_test))
14134 4712
Feature extraction
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
count vectorizer
c_vec = CountVectorizer()
x_count_train = c_vec.fit_transform(x_train)
x_count_test = c_vec.transform(x_test)
count vectorizer + stop-word removal
c_vec_s = CountVectorizer(analyzer='word',stop_words='english')
x_count_stop_train = c_vec_s.fit_transform(x_train)
x_count_stop_test = c_vec_s.transform(x_test)
tfidf vectorizer
t_vec = TfidfVectorizer()
x_tfidf_train = t_vec.fit_transform(x_train)
x_tfidf_test = t_vec.transform(x_test)
tfidf vectorizer + stop-word removal
t_vec_s = TfidfVectorizer(analyzer='word',stop_words='english')
x_tfidf_stop_train = t_vec_s.fit_transform(x_train)
x_tfidf_stop_test = t_vec_s.transform(x_test)
Model training
from sklearn.naive_bayes import MultinomialNB
count vectorizer
nb_c = MultinomialNB()
nb_c.fit(x_count_train,y_train)
nb_c.score(x_count_test,y_test)
0.83977079796264853
count vectorizer + stop-word removal
nb_cs = MultinomialNB()
nb_cs.fit(x_count_stop_train,y_train)
nb_cs.score(x_count_stop_test,y_test)
0.86375212224108655
tfidf vectorizer
nb_t = MultinomialNB()
nb_t.fit(x_tfidf_train,y_train)
nb_t.score(x_tfidf_test,y_test)
0.84634974533106966
tfidf vectorizer + stop-word removal
nb_ts = MultinomialNB()
nb_ts.fit(x_tfidf_stop_train,y_train)
nb_ts.score(x_tfidf_stop_test,y_test)
0.88264006791171479