Hyper-Parameters

Author: _PatrickStar | Published 2019-07-15 00:57

(Notes)
Hyper-parameters: parameters that have to be chosen before the algorithm runs (e.g. n_neighbors in KNeighborsClassifier).
Model parameters: parameters that are learned from the data during training (e.g. the coefficients of a linear regression model).
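As a quick illustration of the distinction (a minimal sketch, not part of the original notes; LinearRegression is used here only as an example of a model with learned parameters):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

# n_neighbors is a hyper-parameter: we choose it before the algorithm runs
knn_clf = KNeighborsClassifier(n_neighbors=3)

# coef_ and intercept_ are model parameters: they are learned from the data during fit
X_toy = np.array([[1.0], [2.0], [3.0]])
y_toy = np.array([2.1, 3.9, 6.0])
lin_reg = LinearRegression().fit(X_toy, y_toy)
print(lin_reg.coef_, lin_reg.intercept_)  # learned by the algorithm, not chosen by us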

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split  # split the data into training and test sets
from sklearn.neighbors import KNeighborsClassifier    # kNN classifier

digits = datasets.load_digits()
X = digits.data
y = digits.target

X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2, random_state=666)

knn_clf = KNeighborsClassifier(n_neighbors=3)
knn_clf.fit(X_train,y_train)
score = knn_clf.score(X_test, y_test)  # mean accuracy on the test set
print(score)  # 0.9888888888888889

Output: 0.9888888888888889
This is the simplest possible version of the algorithm; the output is the prediction accuracy (the model trained on X_train/y_train predicts on X_test, and those predictions are compared against y_test; see the sketch below).
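As a sanity check (a small sketch, not from the original notes; it reuses the variables from the block above), score() gives the same number as predicting on X_test and computing the accuracy by hand:

from sklearn.metrics import accuracy_score

y_predict = knn_clf.predict(X_test)        # predictions for the test set
print(accuracy_score(y_test, y_predict))   # same value as knn_clf.score(X_test, y_test)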
At this point a question arises: we wrote n_neighbors=3 simply as a default choice, but is it really the best value? How can we find the best k?

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split  
from sklearn.neighbors import KNeighborsClassifier   

digits = datasets.load_digits()
X = digits.data
y = digits.target

X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2, random_state=666)

# search for the best k
best_score = 0.0
best_k = -1
for k in range(1, 11):  # try each k from 1 to 10 and compare
    knn_clf = KNeighborsClassifier(n_neighbors=k)
    knn_clf.fit(X_train, y_train)
    score = knn_clf.score(X_test, y_test)
    if score > best_score:
        best_k = k
        best_score = score

print("best_k =", best_k)  #  best_k = 4
print("best_score =", best_score)  # best_score = 0.9916666666666667

Note: if the best_k we find lies on the boundary of the search range, say 10, we should extend the range outward, because 10 may only be the best value within this range rather than the truly best one; the best k might be larger than 10. In that case we can continue with, say, range(8, 21), and keep widening until best_k is no longer on the boundary. A minimal sketch of this idea follows.
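A small sketch of this widening search (not from the original notes; the step sizes for extending the window are an arbitrary choice, and it reuses the imports and train/test split from the blocks above):

# keep widening the search window while best_k lands on its upper edge
low, high = 1, 11
while True:
    best_score, best_k = 0.0, -1
    for k in range(low, high):
        knn_clf = KNeighborsClassifier(n_neighbors=k)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k, best_score = k, score
    if best_k < high - 1:                 # best_k is strictly inside the range: stop
        break
    low, high = best_k - 2, best_k + 11   # best_k hit the edge: extend and search again

print("best_k =", best_k)
print("best_score =", best_score)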
So far we have only tuned k, but kNN classifies a point by comparing the K samples nearest to it, so shouldn't the distance itself also be taken into account? (Up to now "distance" has meant the Euclidean distance.)
The updated code is as follows:

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split  # split the data into training and test sets
from sklearn.neighbors import KNeighborsClassifier    # kNN classifier

digits = datasets.load_digits()
X = digits.data
y = digits.target

X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2, random_state=666)

best_score = 0.0
best_k = -1
best_method = ""
for method in ["uniform", "distance"]:   #因为KNeighborsClassifier有参数weights表示是否需要考虑距离,默认uniform不考虑
    for k in range(1, 11):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights=method)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_score = score
            best_method = method

print("best_method =", best_method)
print("best_k =", best_k)
print("best_score =", best_score)
# best_method = uniform
# best_k = 4
# best_score = 0.9916666666666667

But once distance enters the picture, we also have to ask what exactly we mean by "distance".
So far we have used the Euclidean distance. What if we use the Minkowski distance instead? The Minkowski distance generalizes both the Euclidean and the Manhattan distances (with p=1 it reduces to the Manhattan distance, with p=2 to the Euclidean distance), as written out below.
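For reference, the Minkowski distance between two n-dimensional points x and y is (standard textbook definition, written out here since the post only names it):

$$ d_p(x, y) = \left( \sum_{i=1}^{n} \lvert x_i - y_i \rvert^{p} \right)^{1/p} $$

With p = 1 this is the Manhattan distance, and with p = 2 the Euclidean distance, which is exactly why p becomes a tunable hyper-parameter.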
For the full derivation see https://blog.csdn.net/xiaoduan_/article/details/79327781
So this gives us yet another hyper-parameter: p.

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split  # split the data into training and test sets
from sklearn.neighbors import KNeighborsClassifier    # kNN classifier

digits = datasets.load_digits()
X = digits.data
y = digits.target

X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2, random_state=666)

best_score = 0.0
best_k = -1
best_p = -1
for k in range(1, 11):
    for p in range(1, 5):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance", p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_score = score
            best_p = p

print("best_p =", p)
print("best_k =", best_k)
print("best_score =", best_score)
# best_p = 4
# best_k = 3
# best_score = 0.9888888888888889

The code above is a grid search (a two-level loop over the k × p grid). The problem is that while searching for best_p it hard-codes weights="distance", so best_method and best_p cannot be searched together in this form.
The fix is as follows:

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split  # split the data into training and test sets
from sklearn.neighbors import KNeighborsClassifier    # kNN classifier
from sklearn.model_selection import GridSearchCV

digits = datasets.load_digits()
X = digits.data
y = digits.target

X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2, random_state=666)

param_grid = [
    {
        'weights': ['uniform'],
        'n_neighbors': [i for i in range(1, 11)]
    },
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(1, 11)],
        'p': [i for i in range(1, 6)]
    }
]
knn_clf = KNeighborsClassifier()

grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, verbose=2)  # n_jobs: how many CPU cores to use (-1 means all); verbose: how much progress information to print
grid_search.fit(X_train, y_train)
print(grid_search.best_score_)

Because sklearn already wraps grid search for us, we can simply import it: from sklearn.model_selection import GridSearchCV
Then we put our grid definition into the variable param_grid, call GridSearchCV, and print the result. A short sketch of how to use the fitted search object follows.
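A short follow-up (standard GridSearchCV attributes, not shown in the original post) for reading out the winning parameters and reusing the refit classifier:

print(grid_search.best_params_)               # the combination of weights / n_neighbors / p that won
best_knn_clf = grid_search.best_estimator_    # the classifier refit on X_train with those parameters
print(best_knn_clf.score(X_test, y_test))     # accuracy on the held-out test set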
