
Grid-search for parameter selection

Author: dimmy | Published 2018-07-26 14:20

    Reposted from: https://www.cnblogs.com/ysugyl/p/8711205.html

    Definition

    Grid Search is a hyperparameter-tuning technique based on exhaustive search: loop over every candidate parameter combination, try each one, and take the combination that performs best as the final result. The principle is like finding the maximum value in an array. (Why is it called a grid search? Take a model with two parameters as an example: if parameter a has 3 possible values and parameter b has 4, listing every combination gives a 3*4 table in which each cell is a grid point; the search loops through and evaluates every cell of this grid, hence the name grid search.)
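
    To make the grid picture concrete, here is a minimal, self-contained sketch (not from the original post) that enumerates a 3*4 grid for two hypothetical parameters a and b and keeps the best cell according to a dummy scoring function:

    from itertools import product

    # Hypothetical candidate values: 3 choices for a, 4 for b -> a 3*4 grid of 12 cells
    a_values = [0.1, 1, 10]
    b_values = [1, 2, 3, 4]

    def evaluate(a, b):
        # Dummy stand-in for "train a model with (a, b) and measure its score"
        return -(a - 1) ** 2 - (b - 2) ** 2

    best_score, best_params = float("-inf"), None
    for a, b in product(a_values, b_values):  # visit every cell of the grid
        score = evaluate(a, b)
        if score > best_score:                # like finding the max in an array
            best_score, best_params = score, {"a": a, "b": b}

    print(best_params, best_score)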

    Code implementation

    Take tuning two parameters as an example:

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    
    iris = load_iris()
    X_trainval,X_test,y_trainval,y_test = train_test_split(iris.data,iris.target,random_state=0)
    X_train,X_val,y_train,y_val = train_test_split(X_trainval,y_trainval,random_state=1)
    print("Size of training set:{} size of validation set:{} size of teseting set:{}".format(X_train.shape[0],X_val.shape[0],X_test.shape[0]))
    
    best_score = 0.0
    for gamma in [0.001,0.01,0.1,1,10,100]:
        for C in [0.001,0.01,0.1,1,10,100]:
            svm = SVC(gamma=gamma,C=C)
            svm.fit(X_train,y_train)
            score = svm.score(X_val,y_val)
            if score > best_score:
                best_score = score
                best_parameters = {'gamma':gamma,'C':C}
    svm = SVC(**best_parameters) # rebuild the model with the best parameters
    svm.fit(X_trainval,y_trainval) # retrain on training + validation sets; more data usually gives better performance
    test_score = svm.score(X_test,y_test) # evaluate on the test set
    print("Best score on validation set:{:.2f}".format(best_score))
    print("Best parameters:{}".format(best_parameters))
    print("Best score on test set:{:.2f}".format(test_score))
    
    ####   grid search start
    best_score = 0
    for gamma in [0.001,0.01,0.1,1,10,100]:
        for C in [0.001,0.01,0.1,1,10,100]:
            svm = SVC(gamma=gamma,C=C) # train one model for each possible parameter combination
            svm.fit(X_train,y_train)
            score = svm.score(X_test,y_test)
            if score > best_score: # keep the best-performing parameters
                best_score = score
                best_parameters = {'gamma':gamma,'C':C}
    ####   grid search end
    
    print("Best score:{:.2f}".format(best_score))
    print("Best parameters:{}".format(best_parameters))
    

    Example output (from the simple grid search, which scores directly on the test set):

    Size of training set:112 size of testing set:38
    Best score:0.973684
    Best parameters:{'gamma': 0.001, 'C': 100}
    

    However, how well this simple grid search ends up performing depends heavily on how the initial data split happens to fall. To reduce this element of chance, we use cross-validation.
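
    To see this sensitivity directly, one can rerun the naive grid search for a few different random splits and watch the selected parameters change. A minimal sketch (not from the original post), reusing iris, SVC and train_test_split from the code above:

    # Repeat the naive grid search for several random splits of the same data
    for seed in [0, 1, 2, 3]:
        X_tr, X_te, y_tr, y_te = train_test_split(iris.data, iris.target, random_state=seed)
        best_score, best_parameters = 0, None
        for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
            for C in [0.001, 0.01, 0.1, 1, 10, 100]:
                svm = SVC(gamma=gamma, C=C).fit(X_tr, y_tr)
                score = svm.score(X_te, y_te)
                if score > best_score:
                    best_score = score
                    best_parameters = {'gamma': gamma, 'C': C}
        # the chosen "best" parameters may differ from one random_state to the next
        print("random_state={} best parameters:{} score:{:.3f}".format(seed, best_parameters, best_score))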

    Grid Search with Cross Validation

    from sklearn.model_selection import cross_val_score
    
    best_score = 0.0
    for gamma in [0.001,0.01,0.1,1,10,100]:
        for C in [0.001,0.01,0.1,1,10,100]:
            svm = SVC(gamma=gamma,C=C)
            scores = cross_val_score(svm,X_trainval,y_trainval,cv=5) # 5-fold cross-validation
            score = scores.mean() # take the mean of the 5 fold scores
            if score > best_score:
                best_score = score
                best_parameters = {"gamma":gamma,"C":C}
    svm = SVC(**best_parameters)
    svm.fit(X_trainval,y_trainval)
    test_score = svm.score(X_test,y_test)
    print("Best score on validation set:{:.2f}".format(best_score))
    print("Best parameters:{}".format(best_parameters))
    print("Score on testing set:{:.2f}".format(test_score))
    
