When measuring how accurate an object detector is, the most common metric is AP (Average Precision). AP is the average of the precision over recall values from 0 to 1. Before explaining AP, we first need the concepts of precision, recall, and IoU.
Precision & Recall
Consider a detector that returns 10 detections for a class with 5 ground-truth objects, ranked by confidence. Marking each detection as correct (a true positive) or not, and walking down the ranked list, gives the running precision and recall:
Rank | Correct? | Precision | Recall |
---|---|---|---|
1 | True | 1.0 | 0.2 |
2 | True | 1.0 | 0.4 |
3 | False | 0.67 | 0.4 |
4 | False | 0.5 | 0.4 |
5 | False | 0.4 | 0.4 |
6 | True | 0.5 | 0.6 |
7 | True | 0.57 | 0.8 |
8 | False | 0.5 | 0.8 |
9 | False | 0.44 | 0.8 |
10 | True | 0.5 | 1.0 |
Precision measures how accurate the predictions are, i.e., the percentage of predictions that are correct.
Recall measures how well the detector finds all the positives.
The mathematical definitions are:

$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

where:
$TP$ = True positive,
$TN$ = True negative,
$FP$ = False positive,
$FN$ = False negative
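As a minimal sketch (in Python, taking the correctness labels from the table above as given), the running precision and recall can be computed like this:

```python
# Ranked detections for one class: True = TP, False = FP.
correct = [True, True, False, False, False, True, True, False, False, True]
num_gt = 5  # total ground-truth objects, so recall = TP / num_gt

tp = fp = 0
for rank, is_tp in enumerate(correct, start=1):
    if is_tp:
        tp += 1
    else:
        fp += 1
    precision = tp / (tp + fp)  # equals tp / rank
    recall = tp / num_gt        # TP / (TP + FN), since FN = num_gt - TP
    print(f"{rank:2d} | {precision:.2f} | {recall:.1f}")
```

Running this reproduces every row of the table above.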
IoU (Intersection over Union)
![](https://img.haomeiwen.com/i15048949/bb8be828ac6b8aca.png)
IoU measures how much two bounding boxes overlap: it is the area of their intersection divided by the area of their union. In object detection it is used to measure how much a predicted bbox overlaps the ground-truth bbox. IoU ranges over [0, 1]; the larger the value, the greater the overlap and hence the more accurate the prediction.
![](https://img.haomeiwen.com/i15048949/5b5ea5ca633bbdd0.png)
![](https://img.haomeiwen.com/i15048949/db6677933b496e6c.png)
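A minimal sketch of the computation, assuming axis-aligned boxes in (x1, y1, x2, y2) format (the coordinate convention and the function name `iou` are assumptions here, not fixed by the metric itself):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes don't overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two partially overlapping 10x10 boxes.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```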
COCO mAP
In the COCO competition, AP is the prediction accuracy averaged over the 80 categories and over 10 IoU thresholds (from 0.5 to 0.95 in steps of 0.05).
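As a sketch, the per-class AP at one IoU threshold is the area under the (interpolated) precision-recall curve. Note this is a simplified version: COCO's official evaluator additionally samples precision at 101 fixed recall points, and the helper name `average_precision` is an assumption:

```python
import numpy as np

def average_precision(precisions, recalls):
    """Area under the precision-recall curve for one class at one IoU threshold."""
    # Pad the curve so it spans recall 0 -> 1.
    p = np.concatenate(([0.0], precisions, [0.0]))
    r = np.concatenate(([0.0], recalls, [1.0]))
    # Make precision monotonically non-increasing (interpolated precision).
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum precision * recall-step at the points where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Values from the precision/recall table above (5 ground-truth objects):
prec = [1.0, 1.0, 0.67, 0.5, 0.4, 0.5, 0.57, 0.5, 0.44, 0.5]
rec  = [0.2, 0.4, 0.4, 0.4, 0.4, 0.6, 0.8, 0.8, 0.8, 1.0]
ap = average_precision(prec, rec)  # ≈ 0.73 with these rounded values

# COCO then averages this over the 10 IoU thresholds and all 80 classes:
iou_thresholds = np.arange(0.5, 1.0, 0.05)  # 0.50, 0.55, ..., 0.95
```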
The AP results reported for YOLOv3 are shown in the figure below:
![](https://img.haomeiwen.com/i15048949/05d65e7c346101ac.png)
Here, $\text{AP}_{50}$ and $\text{AP}_{75}$ denote AP at single IoU thresholds of 0.5 and 0.75, and $\text{AP}_S$, $\text{AP}_M$, $\text{AP}_L$ denote AP for small, medium, and large objects, respectively.
In COCO, there is no distinction between AP and mAP:
> AP is averaged over all categories. Traditionally, this is called "mean average precision" (mAP). We make no distinction between AP and mAP (and likewise AR and mAR) and assume the difference is clear from context.
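Concretely, the single number COCO reports as AP can be thought of as the mean over both axes of a per-class, per-threshold AP table (a sketch; the array name and placeholder values are assumptions, with each real entry computed as in `average_precision` above):

```python
import numpy as np

# ap_per_class_and_iou[c, t]: AP of class c at the t-th IoU threshold.
# Placeholder values here; a real evaluation fills this from detections.
ap_per_class_and_iou = np.zeros((80, 10))

# COCO "AP" == "mAP": the mean over all 80 classes and 10 thresholds.
coco_ap = ap_per_class_and_iou.mean()
```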