Hyperparameters
When training a model such as a neural network, there are generally two kinds of parameters. One kind is the weight parameters the model needs to learn; these are trained automatically by the machine. The other kind is defined before training begins, and its values are chosen by us in order to find the best model — these are the hyperparameters.
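The distinction can be made concrete with a minimal sketch (using `LinearRegression` purely as an illustration, not part of the KNN example below): `fit_intercept` is set by us before training, while `coef_` and `intercept_` are learned by the machine from the data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy data following y = 2x + 1
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# fit_intercept is a hyperparameter: chosen before training
model = LinearRegression(fit_intercept=True)
model.fit(X, y)

# coef_ and intercept_ are learned parameters: found during fit
print(model.coef_, model.intercept_)  # ≈ [2.] 1.0
```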
An example of a hyperparameter
In KNN, the value K is a hyperparameter. So what value of K is appropriate? We can simply try every candidate and keep the k with the highest score:
from sklearn.neighbors import KNeighborsClassifier

# assumes X_train, y_train, X_test, y_test have already been prepared
best_score = 0.0
best_k = -1
for k in range(1, 11):
    knn_clf = KNeighborsClassifier(n_neighbors=k)
    knn_clf.fit(X_train, y_train)
    score = knn_clf.score(X_test, y_test)
    if score > best_score:
        best_k = k
        best_score = score
print("best_k =", best_k)
print("best_score =", best_score)
Besides k, does KNN have any other hyperparameters? It does! For example, in handwritten-digit recognition with K = 3, suppose the test sample is a 2 and its three nearest neighbors are a 5, a 3, and a 2 — which should we pick? In our earlier KNN algorithm we effectively picked one at random. A better solution is to weight each neighbor according to its distance to the test sample, sum the weights for each class, and take the class with the largest total.
# simply pass weights="distance" when creating the classifier to enable distance weighting
KNeighborsClassifier(n_neighbors=k, weights="distance")
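The distance-weighted vote can be sketched by hand for the "5, 3, 2" tie above. The distances below are assumed for illustration only; each neighbor votes with weight 1/distance, so the closest neighbor dominates:

```python
from collections import Counter

# the three nearest neighbors of the test digit, with assumed distances
neighbor_labels = [5, 3, 2]
neighbor_dists = [1.0, 2.0, 0.5]

# a uniform vote would be a three-way tie (one vote each);
# with distance weighting each neighbor votes with weight 1/distance
weights = Counter()
for label, d in zip(neighbor_labels, neighbor_dists):
    weights[label] += 1.0 / d

prediction = weights.most_common(1)[0][0]
print(prediction)  # 2, since 1/0.5 = 2.0 is the largest total weight
```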
Now that we have uncovered a new hyperparameter, should we weight by distance or not? Let's iterate over both options:
best_score = 0.0
best_k = -1
best_method = ""
# "uniform" ignores distance; "distance" weights each neighbor,
# typically by the inverse of its distance
for method in ["uniform", "distance"]:
    for k in range(1, 11):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights=method)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_score = score
            best_method = method
print("best_method =", best_method)
print("best_k =", best_k)
print("best_score =", best_score)
In the same way, we can search for the p of the Minkowski distance:
best_score = 0.0
best_k = -1
best_p = -1
for k in range(1, 11):
    for p in range(1, 6):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance", p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_p = p
            best_score = score
print("best_k =", best_k)
print("best_p =", best_p)
print("best_score =", best_score)
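The Minkowski distance itself is (Σ|aᵢ − bᵢ|ᵖ)^(1/p): p = 1 gives the Manhattan distance and p = 2 the Euclidean distance. A minimal sketch of the formula (the points are chosen for illustration):

```python
import numpy as np

def minkowski(a, b, p):
    # (sum of |a_i - b_i|^p) raised to 1/p
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
print(minkowski(a, b, 1))  # 7.0 (Manhattan: 3 + 4)
print(minkowski(a, b, 2))  # 5.0 (Euclidean: sqrt(9 + 16))
```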
Finally, let's use sklearn's grid search to find the best combination of hyperparameters:
import numpy as np
from sklearn import datasets

digits = datasets.load_digits()
X = digits.data
y = digits.target

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666)

from sklearn.neighbors import KNeighborsClassifier

# define the parameter grid: a list of dicts, each searched independently
param_grid = [
    {
        'weights': ['uniform'],
        'n_neighbors': [i for i in range(1, 11)]
    },
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(1, 11)],
        'p': [i for i in range(1, 6)]
    }
]
knn_clf = KNeighborsClassifier()

# import the grid-search class
from sklearn.model_selection import GridSearchCV
# n_jobs is the number of CPU cores to use (1..4 on a 4-core machine, or -1 for all);
# verbose prints a log line for each hyperparameter combination as it is evaluated
grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, verbose=2)
grid_search.fit(X_train, y_train)

# the classifier built with the best-scoring hyperparameters
grid_search.best_estimator_
# the best score itself
grid_search.best_score_
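Since `GridSearchCV` refits the winning combination on the full training set by default, the best estimator can be used directly for evaluation. A self-contained sketch (with a deliberately small grid, so it runs quickly):

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=666)

# a small illustrative grid
grid = GridSearchCV(KNeighborsClassifier(),
                    {'weights': ['uniform', 'distance'], 'n_neighbors': [3, 5]},
                    n_jobs=-1)
grid.fit(X_train, y_train)

# best_params_ holds the winning combination; best_estimator_ is
# already refit on the whole training set, ready for prediction
print(grid.best_params_)
print(grid.best_estimator_.score(X_test, y_test))
```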
For the available distance metrics, see: http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html