Computing the mAP of keras-yolo v3 Results

Author: Y_166d | Published 2019-08-05 22:47

    keras-yolo v3 source code: https://github.com/qqwweee/keras-yolo3
    mAP computation code: https://github.com/Cartucho/mAP
    Reference: https://blog.csdn.net/weixin_38106878/article/details/89199961

    I recently used the YOLO algorithm for an object-detection task. Since I am most familiar with
    Keras, I used the Keras version, but after training I found that the source code ships with no
    script for evaluating the results, so I gathered references and computed the mAP of my results
    myself. (If you also want precision and recall, you can get them by modifying the mAP code directly.)
    The somewhat clumsy part of the method below is that you have to collect the test images (the
    test set) into one folder yourself, and likewise their annotation files.

    1 Prepare the test images

    After downloading the mAP source code, create two folders under the project's input directory, images-optional and ground-truth, for the original test images and their corresponding xml files respectively; the detection results from step 3 will go into detection-results alongside them:

    --mAP
      |
      --input
        |
        --images-optional          the test images
        --ground-truth             the xml annotation files for those images
        --detection-results        the detection results (filled in step 3)
    
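    On Linux or macOS this layout can be set up in one step; a minimal sketch, assuming the repository was cloned into the current directory (mkdir -p is harmless if the folders already exist):

    cd mAP/input
    mkdir -p images-optional ground-truth detection-results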

    2 Convert the test-image labels from XML to txt format

    In this test the data is in VOC format, so the ready-made conversion script can be used without modification. Open a terminal, change into the subdirectory, and run:

    cd mAP/scripts/extra
    python convert_gt_xml.py
    

    After the script finishes, the ground-truth coordinates of the test data are saved in the corresponding txt files, and the XML files are moved into the backup folder in that directory.
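    For reference, each ground-truth txt file produced this way holds one line per object, in the format the mAP code expects: <class_name> <left> <top> <right> <bottom>. The class names below are only placeholders:

    dog 125 48 320 290
    person 10 20 150 210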

    3 Run the trained YOLO model on the test images

    yolo_detect.py sits in the root directory of the yolo project, at the same level as yolo_video.py.
    Edit the paths in it, then run it; the detection results are saved to the directory you specify.

    Paths to edit:
    the trained-model path (the "model_path" entry in _defaults)
    the test-image folder (dirname in the __main__ block)
    the save path for the detection-result .txt files (the new_f = open(...) call in the __main__ block)

    python yolo_detect.py
    

    The full yolo_detect.py code (which processes the test images in batch) is given at the end of this post.

    When detection finishes, put the result .txt files into the input/detection-results folder of the mAP project.
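    Each detection-result txt file uses the matching format with a confidence score added: <class_name> <confidence> <left> <top> <right> <bottom>. This is exactly what yolo_detect.py below writes, since its label string already combines the class name and score, for example:

    dog 0.92 122 50 318 285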

    4 Compute the mAP

    From the root of the mAP project, run:

    python main.py
    

    The results are saved automatically under the results directory.
    If you run this on a server without a graphical display, append -na -np (short for --no-animation and --no-plot);
    otherwise main.py fails with a "cannot connect to X server" error.
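    For intuition: main.py matches each detection to a ground-truth box of the same class (a detection counts as a true positive when the IoU is at least 0.5 by default), builds a precision-recall curve per class from the detections sorted by confidence, takes the area under that curve as the class's AP, and averages the per-class APs into the mAP. Below is a minimal sketch of the all-point-interpolation AP calculation; the function name and the toy numbers are mine, not taken from the repo:

    import numpy as np

    def voc_ap(recall, precision):
        # Area under the precision-recall curve (all-point interpolation).
        mrec = np.concatenate(([0.0], recall, [1.0]))
        mpre = np.concatenate(([0.0], precision, [0.0]))
        # Make the precision envelope monotonically decreasing.
        for i in range(len(mpre) - 2, -1, -1):
            mpre[i] = max(mpre[i], mpre[i + 1])
        # Sum the rectangle areas wherever recall changes.
        idx = np.where(mrec[1:] != mrec[:-1])[0]
        return np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1])

    # Toy example: 4 detections sorted by confidence, 3 ground-truth boxes;
    # detections 1, 2 and 4 are true positives.
    tp = np.array([1., 1., 0., 1.])
    fp = 1 - tp
    recall = np.cumsum(tp) / 3.0
    precision = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp))
    print(voc_ap(recall, precision))  # AP for this single class, ~0.917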

    Appendix: the yolo_detect.py code:

    # -*- coding: utf-8 -*-
    """
    YOLO_v3 style detection model, adapted to batch-detect images
    and write the predicted boxes to txt files for mAP evaluation
    """
    
    import colorsys
    import os
    import sys 
    from timeit import default_timer as timer
    
    import numpy as np
    from keras import backend as K
    from keras.models import load_model
    from keras.layers import Input
    from PIL import Image, ImageFont, ImageDraw
    
    from yolo3.model import yolo_eval, yolo_body, tiny_yolo_body
    from yolo3.utils import letterbox_image
    from keras.utils import multi_gpu_model
    
    class YOLO(object):
        _defaults = {
            "model_path": 'logs/000/trained_weights.h5', ##训练好的模型的路径
            "anchors_path": 'model_data/yolo_anchors.txt',
            "classes_path": 'model_data/voc_classes.txt',
            "score" : 0.3,
            "iou" : 0.45,
            "model_image_size" : (416, 416),
            "gpu_num" : 0
        }
    
        @classmethod
        def get_defaults(cls, n):
            if n in cls._defaults:
                return cls._defaults[n]
            else:
                return "Unrecognized attribute name '" + n + "'"
    
        def __init__(self, **kwargs):
            self.__dict__.update(self._defaults) # set up default values
            self.__dict__.update(kwargs) # and update with user overrides
            self.class_names = self._get_class()
            self.anchors = self._get_anchors()
            self.sess = K.get_session()
            self.boxes, self.scores, self.classes = self.generate()
    
        def _get_class(self):
            classes_path = os.path.expanduser(self.classes_path)
            with open(classes_path) as f:
                class_names = f.readlines()
            class_names = [c.strip() for c in class_names]
            return class_names
    
        def _get_anchors(self):
            anchors_path = os.path.expanduser(self.anchors_path)
            with open(anchors_path) as f:
                anchors = f.readline()
            anchors = [float(x) for x in anchors.split(',')]
            return np.array(anchors).reshape(-1, 2)
    
        def generate(self):
            model_path = os.path.expanduser(self.model_path)
            assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
    
            # Load model, or construct model and load weights.
            num_anchors = len(self.anchors)
            num_classes = len(self.class_names)
            is_tiny_version = num_anchors==6 # default setting
            try:
                self.yolo_model = load_model(model_path, compile=False)
            except:
                self.yolo_model = tiny_yolo_body(Input(shape=(None,None,3)), num_anchors//2, num_classes) \
                    if is_tiny_version else yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes)
                self.yolo_model.load_weights(self.model_path) # make sure model, anchors and classes match
            else:
                assert self.yolo_model.layers[-1].output_shape[-1] == \
                    num_anchors/len(self.yolo_model.output) * (num_classes + 5), \
                    'Mismatch between model and given anchor and class sizes'
    
            print('{} model, anchors, and classes loaded.'.format(model_path))
    
            # Generate colors for drawing bounding boxes.
            hsv_tuples = [(x / len(self.class_names), 1., 1.)
                          for x in range(len(self.class_names))]
            self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
            self.colors = list(
                map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
                    self.colors))
            np.random.seed(10101)  # Fixed seed for consistent colors across runs.
            np.random.shuffle(self.colors)  # Shuffle colors to decorrelate adjacent classes.
            np.random.seed(None)  # Reset seed to default.
    
            # Generate output tensor targets for filtered bounding boxes.
            self.input_image_shape = K.placeholder(shape=(2, ))
            if self.gpu_num>=2:
                self.yolo_model = multi_gpu_model(self.yolo_model, gpus=self.gpu_num)
            boxes, scores, classes = yolo_eval(self.yolo_model.output, self.anchors,
                    len(self.class_names), self.input_image_shape,
                    score_threshold=self.score, iou_threshold=self.iou)
            return boxes, scores, classes
    
        def detect_image(self, image):
            start = timer()
    
            if self.model_image_size != (None, None):
                assert self.model_image_size[0]%32 == 0, 'Multiples of 32 required'
                assert self.model_image_size[1]%32 == 0, 'Multiples of 32 required'
                boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
            else:
                new_image_size = (image.width - (image.width % 32),
                                  image.height - (image.height % 32))
                boxed_image = letterbox_image(image, new_image_size)
            image_data = np.array(boxed_image, dtype='float32')
    
            print(image_data.shape)
            image_data /= 255.
            image_data = np.expand_dims(image_data, 0)  # Add batch dimension.
    
            out_boxes, out_scores, out_classes = self.sess.run(
                [self.boxes, self.scores, self.classes],
                feed_dict={
                    self.yolo_model.input: image_data,
                    self.input_image_shape: [image.size[1], image.size[0]],
                    K.learning_phase(): 0
                })
    
            print('Found {} boxes for {}'.format(len(out_boxes), 'img'))
    
            font = ImageFont.truetype(font='font/FiraMono-Medium.otf',
                        size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32'))
            thickness = (image.size[0] + image.size[1]) // 300
    
            for i, c in reversed(list(enumerate(out_classes))):
                predicted_class = self.class_names[c]
                box = out_boxes[i]
                score = out_scores[i]

                label = '{} {:.2f}'.format(predicted_class, score)
                draw = ImageDraw.Draw(image)
                label_size = draw.textsize(label, font)

                top, left, bottom, right = box
                top = max(0, np.floor(top + 0.5).astype('int32'))
                left = max(0, np.floor(left + 0.5).astype('int32'))
                bottom = min(image.size[1], np.floor(bottom + 0.5).astype('int32'))
                right = min(image.size[0], np.floor(right + 0.5).astype('int32'))
                print(label, (left, top), (right, bottom))
                # new_f is the per-image result file opened in the __main__ loop below;
                # each line written here is: class_name confidence left top right bottom
                new_f.write("%s %s %s %s %s\n" % (label, left, top, right, bottom))
                if top - label_size[1] >= 0:
                    text_origin = np.array([left, top - label_size[1]])
                else:
                    text_origin = np.array([left, top + 1])

                # My kingdom for a good redistributable image drawing library.
                for i in range(thickness):
                    draw.rectangle(
                        [left + i, top + i, right - i, bottom - i],
                        outline=self.colors[c])
                draw.rectangle(
                    [tuple(text_origin), tuple(text_origin + label_size)],
                    fill=self.colors[c])
                draw.text(text_origin, label, fill=(0, 0, 0), font=font)
                del draw
    
            end = timer()
            print(end - start)
            return image
    
        def close_session(self):
            self.sess.close()
    
    if __name__ == '__main__':
        # yolo=YOLO()
        # path = '1.jpg'
        # try:
        #     image = Image.open(path)
        # except:
        #     print('Open Error! Try again!')
        # else:
        #     r_image = yolo.detect_image(image)
        #     r_image.show()
        # yolo.close_session()
        dirname = "test/"  # folder holding the test images; point this at your own test set
        path = os.path.join(dirname)
        pic_list = os.listdir(path)
        count = 0
        yolo = YOLO()
        os.makedirs("result", exist_ok=True)  # folder where the predicted-box txt files are saved
        for filename in pic_list:
            # detect_image() writes one line per detected box into new_f (see above);
            # "w" so that rerunning overwrites instead of appending duplicate lines
            new_f = open("result/" + filename.replace(".jpg", ".txt"), "w")
            abs_path = path + filename
            image = Image.open(abs_path)
            r_image = yolo.detect_image(image)
            new_f.close()
            count = count + 1
        print(count)
        yolo.close_session()
    
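    After the script finishes, copy the generated txt files into the mAP project (adjust the paths to your own layout):

    cp result/*.txt /path/to/mAP/input/detection-results/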
