Darknet: Evaluating the Performance of a Trained Network

Author: myth_0c21 | Published 2017-09-18 20:45

    These are my notes from training tiny-yolo with darknet. Simply running through the steps the author provides is not enough: after training a network you need to evaluate it, think about why the results look the way they do, and then decide how to improve the network. Only with that feedback loop can you make real progress; just walking through the training procedure once is of little value.

    How to evaluate a trained network

    The first thing to look at is the loss value, which reflects the gap between what the trained network predicts and the ground truth (the exact formula will be added later). Watching how the loss curve changes as the number of iterations grows helps you tell whether training is overfitting or whether the learning rate is too small.

    0. The valid command: batch-generate detections on the test set

    darknet detector valid  ./cfg/voc.data cfg/tiny_yolo_voc.cfg tiny_yolo_voc.weights
    

    1. Plot the loss vs. iteration curve

    When launching training, pipe the output through tee so the log is saved to a file:

    ./darknet detector train cfg/voc.data cfg/yolo-voc.cfg | tee training.log
    

    Save the following Python code as drawcurve.py and run:

    python  drawcurve.py training.log 0
    

    The Python code is:

    import argparse
    import sys
    import matplotlib.pyplot as plt

    def main(argv):
        parser = argparse.ArgumentParser()
        parser.add_argument("log_file", help="path to log file")
        parser.add_argument("option", help="0 -> loss vs iter")
        args = parser.parse_args()
        f = open(args.log_file)
        lines = [line.rstrip("\n") for line in f.readlines()]
        # skip the first 3 lines (startup output)
        lines = lines[3:]
        numbers = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}
        iters = []
        loss = []
        for line in lines:
            # only the per-iteration summary lines start with a digit, e.g.
            # "2: 84.804230, 575.267700 avg, 0.001000 rate, 5.959159 seconds, 128 images"
            if line and line[0] in numbers:
                tokens = line.split(" ")
                if len(tokens) > 3:
                    iters.append(int(tokens[0][:-1]))  # "2:" -> 2
                    loss.append(float(tokens[2]))      # running average loss
        plt.plot(iters, loss)
        plt.xlabel('iters')
        plt.ylabel('loss')
        plt.grid()
        plt.show()

    if __name__ == "__main__":
        main(sys.argv)
    

    The training log looks like this:

    Loaded: 4.533954 seconds
    Region Avg IOU: 0.262313, Class: 1.000000, Obj: 0.542580, No Obj: 0.514735, Avg Recall: 0.162162,  count: 37
    Region Avg IOU: 0.175988, Class: 1.000000, Obj: 0.499655, No Obj: 0.517558, Avg Recall: 0.070423,  count: 71
    Region Avg IOU: 0.200012, Class: 1.000000, Obj: 0.483404, No Obj: 0.514622, Avg Recall: 0.075758,  count: 66
    Region Avg IOU: 0.279284, Class: 1.000000, Obj: 0.447059, No Obj: 0.515849, Avg Recall: 0.134615,  count: 52
    1: 629.763611, 629.763611 avg, 0.001000 rate, 6.098687 seconds, 64 images
    Loaded: 2.957771 seconds
    Region Avg IOU: 0.145857, Class: 1.000000, Obj: 0.051285, No Obj: 0.031538, Avg Recall: 0.069767,  count: 43
    Region Avg IOU: 0.257284, Class: 1.000000, Obj: 0.048616, No Obj: 0.027511, Avg Recall: 0.078947,  count: 38
    Region Avg IOU: 0.174994, Class: 1.000000, Obj: 0.030197, No Obj: 0.029943, Avg Recall: 0.088889,  count: 45
    Region Avg IOU: 0.196278, Class: 1.000000, Obj: 0.076030, No Obj: 0.030472, Avg Recall: 0.087719,  count: 57
    2: 84.804230, 575.267700 avg, 0.001000 rate, 5.959159 seconds, 128 images
    

    The fields mean:

    • Region Avg IOU: the intersection of the predicted bbox and the ground-truth bbox divided by their union (a minimal computation sketch follows this list). The larger this value, the better the predictions.
    • Avg Recall: the average recall, i.e. the number of detected objects divided by the total number of labeled objects.
    • count: the total number of labeled objects. If count = 6 and recall = 0.66667, it means there were 6 labeled objects in total (possibly of different classes; class is ignored here) and 4 of them were detected, so Recall = 4 / 6 = 0.66667.
    • The line that looks different from the others starts with the iteration number, followed by the training loss, the average training loss, the learning rate, the time taken for one batch, and the total number of images processed so far. Focus on the training loss and the average training loss; both should decrease steadily as the iterations increase. If the loss climbs to several hundred, training has diverged; if the loss stays flat for a long time, you may need to lower the learning rate or change the batch size to strengthen learning, or it may simply mean training has already converged. You have to judge that yourself.
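
    To make the IOU number concrete, here is a minimal sketch (my addition, not part of darknet) that computes the IoU of two boxes given as [xmin, ymin, xmax, ymax]:

    def iou(box_a, box_b):
        # intersection rectangle
        ix_min = max(box_a[0], box_b[0])
        iy_min = max(box_a[1], box_b[1])
        ix_max = min(box_a[2], box_b[2])
        iy_max = min(box_a[3], box_b[3])
        inter = max(ix_max - ix_min, 0.0) * max(iy_max - iy_min, 0.0)
        # union = sum of the two areas minus the intersection
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # a prediction shifted by half its width against the ground truth:
    print(iou([0, 0, 100, 100], [50, 0, 150, 100]))  # 5000 / 15000 = 0.333...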

    My loss curve:

    [Figure: loss vs. iteration curve]

    2. Check the recall of the trained network

    • Modify the validation image list in examples/detector.c (inside validate_detector_recall):
    //list *plist = get_paths("data/coco_val_5k.list");
    list *plist = get_paths("valid-tiny-yolo.txt");
    # recompile
    make -j8
    # run the recall function
    ./darknet detector recall cfg/voc.data cfg/tiny-yolo.cfg backup/tiny-yolo-voc_final.weights
    
    • The final part of the log looks like this:
      289   710   746       RPs/Img: 21.33  IOU: 75.44%     Recall:95.17%
      290   711   748       RPs/Img: 21.34  IOU: 75.38%     Recall:95.05%
      291   713   750       RPs/Img: 21.32  IOU: 75.41%     Recall:95.07%
      292   715   752       RPs/Img: 21.37  IOU: 75.37%     Recall:95.08%
      293   717   754       RPs/Img: 21.33  IOU: 75.40%     Recall:95.09%
      294   719   756       RPs/Img: 21.32  IOU: 75.42%     Recall:95.11%
      295   721   758       RPs/Img: 21.32  IOU: 75.41%     Recall:95.12%
      296   722   759       RPs/Img: 21.28  IOU: 75.42%     Recall:95.13%
      297   722   759       RPs/Img: 21.36  IOU: 75.42%     Recall:95.13%
      298   722   759       RPs/Img: 21.42  IOU: 75.42%     Recall:95.13%
      299   724   761       RPs/Img: 21.41  IOU: 75.44%     Recall:95.14%
      300   725   762       RPs/Img: 21.43  IOU: 75.44%     Recall:95.14%
      301   727   764       RPs/Img: 21.42  IOU: 75.48%     Recall:95.16%
      302   728   765       RPs/Img: 21.43  IOU: 75.46%     Recall:95.16%
      303   728   765       RPs/Img: 21.40  IOU: 75.46%     Recall:95.16%
      304   730   767       RPs/Img: 21.40  IOU: 75.48%     Recall:95.18%
      305   734   771       RPs/Img: 21.42  IOU: 75.50%     Recall:95.20%
      306   746   783       RPs/Img: 21.45  IOU: 75.59%     Recall:95.27%
      307   748   785       RPs/Img: 21.43  IOU: 75.62%     Recall:95.29%
      308   750   787       RPs/Img: 21.42  IOU: 75.59%     Recall:95.30%
      309   752   789       RPs/Img: 21.43  IOU: 75.62%     Recall:95.31%
      310   754   791       RPs/Img: 21.44  IOU: 75.63%     Recall:95.32%
    

    The detailed explanation is as follows.
    The column format is:

    Number Correct Total Rps/Img IOU Recall 
    
    • Number: the index of the image currently being processed.
    • Correct: how many bboxes have been detected correctly so far. It is computed like this: an image is fed through the network, which predicts many bboxes, each with a confidence; for every ground-truth bbox (the contents of the label txt file), the IOU is computed against all predicted bboxes whose confidence exceeds the threshold, and if the largest of these IOUs exceeds the preset IOU threshold, Correct is incremented by one.
    • Total: the number of ground-truth bboxes so far.
    • Rps/Img: the average number of bboxes predicted per image.
    • IOU: the intersection of the predicted bbox and the ground-truth bbox divided by their union; the larger this value, the better the prediction.
    • Recall: recall, i.e. the number of detected objects divided by the total number of labeled objects; as the code shows, it is simply Correct divided by Total (a small sketch of this accumulation follows the list).
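
    For intuition, here is a minimal Python sketch (my addition, not darknet's actual C code) of how Correct, Total and Recall accumulate under the rule described above; the per-image predictions and ground truths are hypothetical, and iou() is the small helper from the sketch in section 1:

    def recall_stats(images, conf_thresh=0.25, iou_thresh=0.5):
        # images: list of (predictions, ground_truths) pairs, one per image
        # predictions: list of (confidence, [xmin, ymin, xmax, ymax])
        # ground_truths: list of [xmin, ymin, xmax, ymax]
        correct, total = 0, 0
        for predictions, ground_truths in images:
            # keep only predicted boxes whose confidence exceeds the threshold
            kept = [box for conf, box in predictions if conf > conf_thresh]
            for truth in ground_truths:
                total += 1
                best_iou = 0.0
                for box in kept:
                    best_iou = max(best_iou, iou(box, truth))
                if best_iou > iou_thresh:
                    correct += 1
        recall = float(correct) / total if total else 0.0
        return correct, total, recall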

    3. Compute the mAP of the trained network

    First use the valid command to run the trained network over the whole test set and dump its detections; the per-class average precision (AP) is then computed from these detections and averaged over all classes to give the mAP. The command is:

    darknet detector valid cfg/voc.data  cfg/tiny_yolo_voc.cfg tiny_yolo_voc.weights
    

    Note: the current directory must contain a folder named results when you run this command; otherwise darknet exits with a segmentation fault.
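
    For VOC-style data, valid should write one text file per class into results/ (named like comp4_det_test_<class>.txt, which is the naming the script below expects); each line holds an image id, a confidence and the box corners. The values here are made up, just to show the shape:

    000012 0.871234 155.2 96.7 351.4 270.1
    000017 0.433310 12.0 33.5 97.8 120.2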

    Then place the following two Python scripts in your working directory.
    reval_voc.py

    #!/usr/bin/env python
    
    # Adapted from ->
    # --------------------------------------------------------
    # Fast R-CNN
    # Copyright (c) 2015 Microsoft
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Ross Girshick
    # --------------------------------------------------------
    # <- Written by Yaping Sun
    
    """Reval = re-eval. Re-evaluate saved detections."""
    
    import os, sys, argparse
    import numpy as np
    import cPickle
    
    from voc_eval import voc_eval
    
    def parse_args():
        """
        Parse input arguments
        """
        parser = argparse.ArgumentParser(description='Re-evaluate results')
        parser.add_argument('output_dir', nargs=1, help='results directory',
                            type=str)
        parser.add_argument('--voc_dir', dest='voc_dir', default='data/VOCdevkit', type=str)
        parser.add_argument('--year', dest='year', default='2017', type=str)
        parser.add_argument('--image_set', dest='image_set', default='test', type=str)
    
        parser.add_argument('--classes', dest='class_file', default='data/voc.names', type=str)
    
        if len(sys.argv) == 1:
            parser.print_help()
            sys.exit(1)
    
        args = parser.parse_args()
        return args
    
    def get_voc_results_file_template(image_set, out_dir = 'results'):
        filename = 'comp4_det_' + image_set + '_{:s}.txt'
        path = os.path.join(out_dir, filename)
        return path
    
    def do_python_eval(devkit_path, year, image_set, classes, output_dir = 'results'):
        annopath = os.path.join(
            devkit_path,
            'VOC' + year,
            'Annotations',
            '{:s}.xml')
        imagesetfile = os.path.join(
            devkit_path,
            'VOC' + year,
            'ImageSets',
            'Main',
            image_set + '.txt')
        cachedir = os.path.join(devkit_path, 'annotations_cache')
        aps = []
        # The PASCAL VOC metric changed in 2010
        use_07_metric = True if int(year) < 2010 else False
        print 'VOC07 metric? ' + ('Yes' if use_07_metric else 'No')
        if not os.path.isdir(output_dir):
            os.mkdir(output_dir)
        for i, cls in enumerate(classes):
            if cls == '__background__':
                continue
            filename = get_voc_results_file_template(image_set).format(cls)
            rec, prec, ap = voc_eval(
                filename, annopath, imagesetfile, cls, cachedir, ovthresh=0.5,
                use_07_metric=use_07_metric)
            aps += [ap]
            print('AP for {} = {:.4f}'.format(cls, ap))
            with open(os.path.join(output_dir, cls + '_pr.pkl'), 'w') as f:
                cPickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)
        print('Mean AP = {:.4f}'.format(np.mean(aps)))
        print('~~~~~~~~')
        print('Results:')
        for ap in aps:
            print('{:.3f}'.format(ap))
        print('{:.3f}'.format(np.mean(aps)))
        print('~~~~~~~~')
        print('')
        print('--------------------------------------------------------------')
        print('Results computed with the **unofficial** Python eval code.')
        print('Results should be very close to the official MATLAB eval code.')
        print('-- Thanks, The Management')
        print('--------------------------------------------------------------')
    
    
    
    if __name__ == '__main__':
        args = parse_args()
    
        output_dir = os.path.abspath(args.output_dir[0])
        with open(args.class_file, 'r') as f:
            lines = f.readlines()
    
        classes = [t.strip('\n') for t in lines]
    
        print 'Evaluating detections'
        do_python_eval(args.voc_dir, args.year, args.image_set, classes, output_dir)
    

    And voc_eval.py:

    
    # --------------------------------------------------------
    # Fast/er R-CNN
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Bharath Hariharan
    # --------------------------------------------------------
    
    import xml.etree.ElementTree as ET
    import os
    import cPickle
    import numpy as np
    
    def parse_rec(filename):
        """ Parse a PASCAL VOC xml file """
        tree = ET.parse(filename)
        objects = []
        for obj in tree.findall('object'):
            obj_struct = {}
            obj_struct['name'] = obj.find('name').text
            #obj_struct['pose'] = obj.find('pose').text
            #obj_struct['truncated'] = int(obj.find('truncated').text)
            obj_struct['difficult'] = int(obj.find('difficult').text)
            bbox = obj.find('bndbox')
            obj_struct['bbox'] = [int(bbox.find('xmin').text),
                                  int(bbox.find('ymin').text),
                                  int(bbox.find('xmax').text),
                                  int(bbox.find('ymax').text)]
            objects.append(obj_struct)
    
        return objects
    
    def voc_ap(rec, prec, use_07_metric=False):
        """ ap = voc_ap(rec, prec, [use_07_metric])
        Compute VOC AP given precision and recall.
        If use_07_metric is true, uses the
        VOC 07 11 point method (default:False).
        """
        if use_07_metric:
            # 11 point metric
            ap = 0.
            for t in np.arange(0., 1.1, 0.1):
                if np.sum(rec >= t) == 0:
                    p = 0
                else:
                    p = np.max(prec[rec >= t])
                ap = ap + p / 11.
        else:
            # correct AP calculation
            # first append sentinel values at the end
            mrec = np.concatenate(([0.], rec, [1.]))
            mpre = np.concatenate(([0.], prec, [0.]))
    
            # compute the precision envelope
            for i in range(mpre.size - 1, 0, -1):
                mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
    
            # to calculate area under PR curve, look for points
            # where X axis (recall) changes value
            i = np.where(mrec[1:] != mrec[:-1])[0]
    
            # and sum (\Delta recall) * prec
            ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
        return ap
    
    def voc_eval(detpath,
                 annopath,
                 imagesetfile,
                 classname,
                 cachedir,
                 ovthresh=0.5,
                 use_07_metric=False):
        """rec, prec, ap = voc_eval(detpath,
                                    annopath,
                                    imagesetfile,
                                    classname,
                                    [ovthresh],
                                    [use_07_metric])
    
        Top level function that does the PASCAL VOC evaluation.
    
        detpath: Path to detections
            detpath.format(classname) should produce the detection results file.
        annopath: Path to annotations
            annopath.format(imagename) should be the xml annotations file.
        imagesetfile: Text file containing the list of images, one image per line.
        classname: Category name (duh)
        cachedir: Directory for caching the annotations
        [ovthresh]: Overlap threshold (default = 0.5)
        [use_07_metric]: Whether to use VOC07's 11 point AP computation
            (default False)
        """
        # assumes detections are in detpath.format(classname)
        # assumes annotations are in annopath.format(imagename)
        # assumes imagesetfile is a text file with each line an image name
        # cachedir caches the annotations in a pickle file
    
        # first load gt
        if not os.path.isdir(cachedir):
            os.mkdir(cachedir)
        cachefile = os.path.join(cachedir, 'annots.pkl')
        # read list of images
        with open(imagesetfile, 'r') as f:
            lines = f.readlines()
        imagenames = [x.strip() for x in lines]
    
        if not os.path.isfile(cachefile):
            # load annots
            recs = {}
            for i, imagename in enumerate(imagenames):
                recs[imagename] = parse_rec(annopath.format(imagename))
                if i % 100 == 0:
                    print 'Reading annotation for {:d}/{:d}'.format(
                        i + 1, len(imagenames))
            # save
            print 'Saving cached annotations to {:s}'.format(cachefile)
            with open(cachefile, 'w') as f:
                cPickle.dump(recs, f)
        else:
            # load
            with open(cachefile, 'r') as f:
                recs = cPickle.load(f)
    
        # extract gt objects for this class
        class_recs = {}
        npos = 0
        for imagename in imagenames:
            R = [obj for obj in recs[imagename] if obj['name'] == classname]
            bbox = np.array([x['bbox'] for x in R])
            difficult = np.array([x['difficult'] for x in R]).astype(np.bool)
            det = [False] * len(R)
            npos = npos + sum(~difficult)
            class_recs[imagename] = {'bbox': bbox,
                                     'difficult': difficult,
                                     'det': det}
    
        # read dets
        detfile = detpath.format(classname)
        with open(detfile, 'r') as f:
            lines = f.readlines()
    
        splitlines = [x.strip().split(' ') for x in lines]
        image_ids = [x[0] for x in splitlines]
        confidence = np.array([float(x[1]) for x in splitlines])
        BB = np.array([[float(z) for z in x[2:]] for x in splitlines])
    
        # sort by confidence
        sorted_ind = np.argsort(-confidence)
        sorted_scores = np.sort(-confidence)
        BB = BB[sorted_ind, :]
        image_ids = [image_ids[x] for x in sorted_ind]
    
        # go down dets and mark TPs and FPs
        nd = len(image_ids)
        tp = np.zeros(nd)
        fp = np.zeros(nd)
        for d in range(nd):
            R = class_recs[image_ids[d]]
            bb = BB[d, :].astype(float)
            ovmax = -np.inf
            BBGT = R['bbox'].astype(float)
    
            if BBGT.size > 0:
                # compute overlaps
                # intersection
                ixmin = np.maximum(BBGT[:, 0], bb[0])
                iymin = np.maximum(BBGT[:, 1], bb[1])
                ixmax = np.minimum(BBGT[:, 2], bb[2])
                iymax = np.minimum(BBGT[:, 3], bb[3])
                iw = np.maximum(ixmax - ixmin + 1., 0.)
                ih = np.maximum(iymax - iymin + 1., 0.)
                inters = iw * ih
    
                # union
                uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) +
                       (BBGT[:, 2] - BBGT[:, 0] + 1.) *
                       (BBGT[:, 3] - BBGT[:, 1] + 1.) - inters)
    
                overlaps = inters / uni
                ovmax = np.max(overlaps)
                jmax = np.argmax(overlaps)
    
            if ovmax > ovthresh:
                if not R['difficult'][jmax]:
                    if not R['det'][jmax]:
                        tp[d] = 1.
                        R['det'][jmax] = 1
                    else:
                        fp[d] = 1.
            else:
                fp[d] = 1.
    
        # compute precision recall
        fp = np.cumsum(fp)
        tp = np.cumsum(tp)
        rec = tp / float(npos)
        # avoid divide by zero in case the first detection matches a difficult
        # ground truth
        prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
        ap = voc_ap(rec, prec, use_07_metric)
    
        return rec, prec, ap
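
    To sanity-check the AP computation itself, a tiny example (my addition; run it in the same Python 2 environment as the scripts above) that calls voc_ap directly on made-up precision/recall values:

    import numpy as np
    from voc_eval import voc_ap

    rec = np.array([0.25, 0.5, 0.75, 1.0])
    prec = np.array([1.0, 0.8, 0.6, 0.5])
    # 11-point VOC07 metric: average of the max precision at recall >= 0, 0.1, ..., 1.0
    print(voc_ap(rec, prec, use_07_metric=True))   # about 0.736
    # exact area under the interpolated precision-recall curve
    print(voc_ap(rec, prec, use_07_metric=False))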
    
    

    Then run:

    python reval_voc.py --voc_dir VOCdevkit --year 2007 --image_set test --classes ./data/voc.names .
    

    The results are as follows:

    Evaluating detections
    VOC07 metric? Yes
    AP for bicycle = 0.4111
    AP for bus = 0.2424
    AP for car = 0.3861
    AP for motorbike = 0.3818
    AP for person = 0.4779
    Mean AP = 0.3799
    ~~~~~~~~
    Results:
    0.411
    0.242
    0.386
    0.382
    0.478
    0.380
    ~~~~~~~~
    

    Not a very good result.
