【VID】On the Stability of Video Detection and Tracking

Author: EdwardLee | Published 2017-12-13 16:10

    Key point:

    A metric for measuring the stability of video detection boxes, plus an evaluation of methods for improving stability.
    Paper: https://arxiv.org/abs/1611.06467

    1 Work

    Stability: before 2016.11, no prior work.
    A novel evaluation metric for video detection:
    1) Accuracy: extended mAP
    2) Stability: a) fragment error; b) center position error; c) scale and ratio error
    3) The stability metric has low correlation with the accuracy metric

    2 Introduction

    1)Detection

    Certain-object detection (face/hand/pedestrian) -> general object detection (DPM, region-based methods [Rich feature hierarchies for accurate object detection and semantic segmentation][Fast R-CNN][Faster R-CNN][Spatial pyramid pooling in deep convolutional networks for visual recognition] & direct-regression methods [YOLO][SSD])

    2)Video detection

    1) Use object-class correlation and motion propagation; rescore detections based on the tubelets generated by visual tracking -> limited to a post-processing stage.
    2) Integrate temporal context in an end-to-end manner. A closely related area, video segmentation, uses Conv-LSTM to capture both temporal and spatial info.

    3)MOT Algorithm

    1) (Nearly) online algorithms [4,6,21]: try to associate existing targets with detections in recent frames after receiving each input image.
    2) Offline algorithms [2,30]: read in all frames first.

    4)VID Metric

    a)Accuracy
    IoU and an IoU threshold -> precision-recall curve by varying the score threshold -> AP = area under the curve (AUC) of the precision-recall curve -> mAP = mean AP over all classes
    b)Stability
    aa) Temporal stability: measures the integrity of a trajectory
    bb) Spatial stability: measures how much the detection box jitters around the GT within a trajectory
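The accuracy side above (AP as the area under the precision-recall curve) can be sketched as follows. This is a minimal illustration, not the official ImageNet VID evaluation code; the function name and input format are my own.

```python
# Sketch: AP as the area under the precision-recall curve (AUC),
# obtained by sweeping the detection score threshold.
# Assumes detections are already matched to GT at a fixed IoU threshold.

def average_precision(scored_hits, num_gt):
    """scored_hits: list of (score, is_true_positive); num_gt: #GT boxes."""
    # Sort detections by descending confidence.
    ranked = sorted(scored_hits, key=lambda p: -p[0])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        # Rectangle rule: accumulate precision over each recall increment.
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Perfect ranking over 2 GT boxes -> AP = 1.0
print(average_precision([(0.9, True), (0.8, True), (0.3, False)], num_gt=2))  # 1.0
```

mAP is then the mean of this quantity over all classes.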

    3 Detection Stability

    Output detections are matched to GT with the Hungarian algorithm; the IoUs between them are the edge weights of the bipartite graph.
    φ = Ef + Ec + Er
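The matching step can be sketched as a maximum-weight bipartite matching with IoU weights. For brevity this sketch brute-forces over permutations (a small-scale stand-in for the Hungarian algorithm the paper uses, and it assumes #detections <= #GT); function names and the (x1, y1, x2, y2) box format are my own.

```python
from itertools import permutations

def iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(dets, gts):
    """Max-weight bipartite matching with IoU edge weights.
    Brute force over permutations: a small-scale stand-in for the
    Hungarian algorithm. Assumes len(dets) <= len(gts)."""
    best, best_score = None, -1.0
    for perm in permutations(range(len(gts)), len(dets)):
        score = sum(iou(d, gts[g]) for d, g in zip(dets, perm))
        if score > best_score:
            best, best_score = list(zip(range(len(dets)), perm)), score
    return best  # list of (det_index, gt_index)

dets = [(0, 0, 10, 10), (20, 20, 30, 30)]
gts  = [(21, 21, 31, 31), (1, 1, 11, 11)]
print(match_detections(dets, gts))  # [(0, 1), (1, 0)]
```

In practice `scipy.optimize.linear_sum_assignment` does the same matching in polynomial time.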

    1)Ef(Fragment Error)

    1) Status change: the object is detected in the previous frame but missed in the current frame, or missed in the previous frame but detected in the current frame.
    2) Ef = (1/N) * Sum_k[ Fk/(Tk - 1) ], where N is the number of trajectories, Tk is the length of the k-th trajectory, and Fk is the number of status changes. Fragment error also exists in MOT evaluation; the difference here is that it is normalized by the trajectory length.
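The fragment error above can be sketched directly from the formula (function name and input format are my own; each trajectory is assumed to have Tk > 1):

```python
def fragment_error(trajectories):
    """trajectories: one list of per-frame detection flags per GT trajectory.
    E_f = (1/N) * sum_k F_k / (T_k - 1), where F_k counts status changes."""
    total = 0.0
    for traj in trajectories:
        # A status change is any detected<->missed flip between adjacent frames.
        changes = sum(1 for prev, cur in zip(traj, traj[1:]) if prev != cur)
        total += changes / (len(traj) - 1)  # normalized by trajectory length
    return total / len(trajectories)

# One flicker (detected -> missed -> detected) over 5 frames: 2 changes / 4
print(fragment_error([[True, True, False, True, True]]))  # 0.5
```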

    2)Ec(Center Position Error)

    a) Evaluates the change of the center position in both horizontal and vertical directions.
    b) BBox = (X, Y, W, H)
    Ex,f,k = (Xp,f,k - Xg,f,k)/Wg,f,k
    Ey,f,k = (Yp,f,k - Yg,f,k)/Hg,f,k
    δx,k = std(Ex,f,k)
    δy,k = std(Ey,f,k)
    (δ denotes the standard deviation over the frames f)
    Ec = (1/N) * Sum_k[ δx,k + δy,k ]
    Since the bias is already accounted for in the accuracy metric, Ec only considers the variance of the center deviation.
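A minimal sketch of Ec from the definitions above (function name and box format (x_center, y_center, w, h) are my own):

```python
from statistics import pstdev

def center_error(pred, gt):
    """pred, gt: per-trajectory lists of frame-aligned boxes (x, y, w, h),
    where (x, y) is the box center. E_c averages, over N trajectories, the
    std of the width-/height-normalized center deviations."""
    total = 0.0
    for p_traj, g_traj in zip(pred, gt):
        ex = [(p[0] - g[0]) / g[2] for p, g in zip(p_traj, g_traj)]
        ey = [(p[1] - g[1]) / g[3] for p, g in zip(p_traj, g_traj)]
        # Only the variance counts; a constant bias is handled by accuracy.
        total += pstdev(ex) + pstdev(ey)
    return total / len(pred)

# A constant offset has zero std, so E_c = 0
gt_traj   = [(10, 10, 4, 4), (12, 10, 4, 4)]
pred_traj = [(11, 10, 4, 4), (13, 10, 4, 4)]
print(center_error([pred_traj], [gt_traj]))  # 0.0
```

This makes the "variance, not bias" point concrete: a detector that is always 1 px to the right is stable, while one that oscillates around the GT is not.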

    3)Er(Scale and Ratio Error)

    1) Uses the square root of the area ratio to represent the scale deviation.
    2) BBox = (X, Y, W, H)
    Es,f,k = sqrt[ (Wp,f,k * Hp,f,k)/(Wg,f,k * Hg,f,k) ]
    Er,f,k = (Wp,f,k/Hp,f,k)/(Wg,f,k/Hg,f,k)
    δs,k = std(Es,f,k)
    δr,k = std(Er,f,k)
    Er = (1/N) * Sum_k[ δs,k + δr,k ]
    As with Ec, this focuses on the variance rather than the bias of the scale and ratio deviation.
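A matching sketch for Er (same assumed box format (x, y, w, h) and illustrative function name as in the Ec sketch):

```python
from math import sqrt
from statistics import pstdev

def scale_ratio_error(pred, gt):
    """pred, gt: per-trajectory lists of frame-aligned boxes (x, y, w, h).
    E_r averages, over N trajectories, the std of the scale deviation
    (sqrt of the area ratio) plus the std of the aspect-ratio deviation."""
    total = 0.0
    for p_traj, g_traj in zip(pred, gt):
        es = [sqrt((p[2] * p[3]) / (g[2] * g[3])) for p, g in zip(p_traj, g_traj)]
        er = [(p[2] / p[3]) / (g[2] / g[3]) for p, g in zip(p_traj, g_traj)]
        total += pstdev(es) + pstdev(er)  # variance only, as with E_c
    return total / len(pred)

# Boxes uniformly 2x the GT width and height in every frame -> both stds are 0
gt_traj   = [(0, 0, 2, 2), (0, 0, 4, 4)]
pred_traj = [(0, 0, 4, 4), (0, 0, 8, 8)]
print(scale_ratio_error([pred_traj], [gt_traj]))  # 0.0
```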

    4 Validation & Analysis

    Compares methods used in the bbox aggregation (redundancy elimination) stage of VID:
    a) Aggregate the output bboxes within a single frame: the representative method is weighted NMS [Object detection via a multi-region and semantic segmentation-aware CNN model]
    b) Utilize the temporal context across frames: Motion Guided Propagation (MGP) [T-CNN: tubelets with convolutional neural networks for object detection from videos] and object tracking [Forward-backward error: automatic detection of tracking failures]

    *An additional model type: uses the correlation between different classes to suppress false positives: multi-context suppression [T-CNN: tubelets with convolutional neural networks for object detection from videos]

    1)Weighted NMS

    Rather than keeping only the bounding box with the highest score, weighted NMS averages it with all the suppressed bounding boxes, weighted by their scores. It was first proposed in [Object detection via a multi-region and semantic segmentation-aware CNN model] to improve the mAP of still-image detection. It is also helpful for improving stability [worth noting: refining NMS helps stabilize boxes].
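A minimal sketch of the weighted-averaging idea (greedy clustering like standard NMS; the function name, box format (x1, y1, x2, y2), and 0.5 threshold are my own choices, not from the paper):

```python
def iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def weighted_nms(boxes, scores, iou_thresh=0.5):
    """Like NMS, but each kept box is replaced by the score-weighted average
    of itself and the boxes it suppresses."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept_boxes, kept_scores, used = [], [], set()
    for i in order:
        if i in used:
            continue
        # Cluster: the top box plus everything it would suppress.
        group = [j for j in order if j not in used and iou(boxes[i], boxes[j]) >= iou_thresh]
        used.update(group)
        w = sum(scores[j] for j in group)
        avg = tuple(sum(scores[j] * boxes[j][c] for j in group) / w for c in range(4))
        kept_boxes.append(avg)
        kept_scores.append(scores[i])
    return kept_boxes, kept_scores

boxes  = [(0, 0, 10, 10), (0, 0, 10, 12), (50, 50, 60, 60)]
scores = [0.8, 0.2, 0.9]
kept_boxes, kept_scores = weighted_nms(boxes, scores)
print(kept_scores)  # [0.9, 0.8] -- two clusters survive
```

Averaging over the cluster damps per-frame jitter of the winning box, which is plausibly why it helps stability as well as mAP.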

    2)MGP

    1) Insight: propagating detections to adjacent frames helps recover false negatives (FN).
    2) MGP takes the raw bboxes before aggregation, then propagates them bidirectionally across adjacent frames using optical flow.
    3) The propagated bboxes are treated the same as the other detections and used in aggregation.
    ***4) Adding a decay factor to the detection score of propagated bboxes improves stability; the best decay factor depends on the dataset.

    MGP reduces the fragment error.
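The propagation-with-decay idea can be sketched as follows. Real MGP shifts each box with optical flow before inserting it into the neighboring frame; this sketch keeps positions fixed (a simplifying assumption), and the function name, window, and decay values are illustrative.

```python
def motion_guided_propagation(frames, window=1, decay=0.8):
    """frames: list of per-frame detections [(x, y, w, h, score), ...].
    Propagates each raw box to +/- `window` adjacent frames, discounting
    its score by `decay` per step (point 4 above). Real MGP also warps the
    box with optical flow; here positions are kept fixed for brevity.
    Returns augmented per-frame lists, ready for aggregation (e.g. NMS)."""
    out = [list(f) for f in frames]
    for t, dets in enumerate(frames):
        for (x, y, w, h, s) in dets:
            for step in range(1, window + 1):
                for t2 in (t - step, t + step):
                    if 0 <= t2 < len(frames):
                        out[t2].append((x, y, w, h, s * decay ** step))
    return out

# A detection in frame 0 fills in the empty frame 1 with a decayed score.
frames = [[(5, 5, 10, 10, 0.9)], [], []]
aug = motion_guided_propagation(frames, window=1, decay=0.8)
print(len(aug[1]))  # 1
```

The decayed score keeps propagated boxes from outvoting genuine detections during aggregation while still filling short gaps, which is how MGP reduces the fragment error.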

    3) Object Tracking: investigates using object tracking to smooth trajectories

    1) Chooses Median Flow (MF) [Forward-backward error: automatic detection of tracking failures], an efficient and effective short-term tracking method.
    2) Unlike MGP, the paper applies MF after bbox aggregation (NMS or weighted NMS).
    3) MF is only used to smooth the detection bboxes; it neither changes detection scores nor adds new boxes.
    4) High-confidence bboxes -> tracking -> if a detection bbox in a frame has IoU > 0.5 with a tracked bbox, the two are averaged as the final box; detections without any associated tracking box are kept unchanged.

    MF improves the center error, but slightly reduces accuracy.
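The per-frame averaging step (point 4 above) can be sketched as follows, with the tracker itself abstracted away; the function name and box format (x1, y1, x2, y2) are my own.

```python
def iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def smooth_with_tracks(dets, tracks, iou_thresh=0.5):
    """dets, tracks: boxes (x1, y1, x2, y2) in the same frame; `tracks` would
    come from a short-term tracker (Median Flow in the paper). A detection
    overlapping a tracked box with IoU > iou_thresh is averaged with it;
    detections with no associated track are kept unchanged. Scores are
    untouched and no new boxes are added."""
    out = []
    for d in dets:
        match = next((t for t in tracks if iou(d, t) > iou_thresh), None)
        if match is None:
            out.append(d)
        else:
            out.append(tuple((a + b) / 2 for a, b in zip(d, match)))
    return out

dets   = [(0, 0, 10, 10), (100, 100, 110, 110)]
tracks = [(0, 0, 10, 12)]
# First box is averaged with the track; the second has no track and is kept.
print(smooth_with_tracks(dets, tracks))
```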

    5 Stability & Accuracy

    1) No single method outperforms the others in both accuracy and stability.
    2) Weighted NMS improves both; MF mostly improves stability, while MGP improves accuracy, even though MF and MGP both use motion information to guide detection [an interesting conclusion].
