Paper | OVTrack: Open-Vocabulary Multiple Object Tracking

Author: 与阳光共进早餐 | Published 2023-12-21 05:22

1 basic info

OVTrack: Open-Vocabulary Multiple Object Tracking

2 introduction

open vocabulary MOT: tracking beyond predefined training categories.

  • the classes of interest are only given at test time
  1. Detection: similar to OVD, CLIP is used to align image features with text embeddings (see the sketch after this list).
  2. Association: CLIP feature distillation helps in learning better appearance representations.
  3. In addition, a denoising diffusion probabilistic model (DDPM) is used to form an effective data hallucination strategy.
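A minimal sketch of the CLIP-based open-vocabulary classification idea: the text embeddings of the class names given at test time act as classifier weights for the detector's per-box embeddings. The class names and the `region_emb` tensor below are placeholders for illustration, not taken from the paper.

```python
# Sketch: turning test-time class names into classifier weights with CLIP.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _preprocess = clip.load("ViT-B/32", device=device)

# The classes of interest are only given at test time.
test_classes = ["sea turtle", "skateboard", "red panda"]  # hypothetical example
prompts = clip.tokenize([f"a photo of a {c}" for c in test_classes]).to(device)

with torch.no_grad():
    text_emb = model.encode_text(prompts)                 # (C, 512)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# region_emb: hypothetical per-box embeddings from the detector, already
# projected into CLIP space; here random values stand in for real features.
region_emb = torch.randn(4, 512, device=device)
region_emb = region_emb / region_emb.norm(dim=-1, keepdim=True)

logits = 100.0 * region_emb @ text_emb.t()                # scaled cosine similarity
probs = logits.softmax(dim=-1)                            # per-box class distribution
```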

OVTrack sets a new state of the art on the TAO benchmark using only static images as training data.

3 open-vocabulary MOT

The task formulation is basically the same as open-vocabulary detection (OVD): training uses a set of base classes, while novel classes are only named at test time.

The evaluation benchmark builds on the TAO benchmark.

4 OVTrack

framework:


OVTrack's functionality covers localization, classification, and association:

  1. localization: train the Faster R-CNN localization branch in a class-agnostic manner.
  2. classification: first replace the original classifier in Faster R-CNN with a text head, and add an image head that generates embeddings. Then use the CLIP text and image encoders to supervise these two heads, giving the losses L_{text} and L_{image}, respectively (see the loss sketch after this list).
  3. association: use contrastive learning on paired objects from a key image I_{key} and a reference image I_{ref} (see the contrastive-loss sketch after this list).
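A rough sketch of how the two classification supervision signals could look. The exact loss forms, temperature, and distance are assumptions for illustration, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def classification_losses(text_head_emb, image_head_emb,
                          clip_text_emb, clip_image_emb,
                          gt_labels, tau=0.01):
    """Illustrative losses for the two heads replacing the Faster R-CNN classifier.

    text_head_emb:  (N, D) per-proposal embeddings from the text head
    image_head_emb: (N, D) per-proposal embeddings from the image head
    clip_text_emb:  (C, D) CLIP text embeddings of the base-class names
    clip_image_emb: (N, D) CLIP image embeddings of the cropped proposals
    gt_labels:      (N,)   ground-truth class indices
    """
    # L_text: classify each proposal against the CLIP text embeddings
    # via cosine similarity, supervised with cross-entropy.
    t = F.normalize(text_head_emb, dim=-1)
    w = F.normalize(clip_text_emb, dim=-1)
    logits = t @ w.t() / tau
    l_text = F.cross_entropy(logits, gt_labels)

    # L_image: distill the CLIP image embedding of each cropped box
    # into the image head (here a simple L1 distance as a stand-in).
    l_image = F.l1_loss(F.normalize(image_head_emb, dim=-1),
                        F.normalize(clip_image_emb, dim=-1))
    return l_text, l_image
```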
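For the association head, one common form of contrastive objective over matched objects in I_{key} and I_{ref} is a symmetric InfoNCE loss; the sketch below assumes row i of both tensors corresponds to the same object, and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def association_loss(key_emb, ref_emb, temperature=0.07):
    """Illustrative contrastive loss for the association head.

    key_emb, ref_emb: (N, D) embeddings of the same N objects in I_key and I_ref;
    matching rows are positives, all other pairs are negatives.
    """
    key = F.normalize(key_emb, dim=-1)
    ref = F.normalize(ref_emb, dim=-1)
    sim = key @ ref.t() / temperature                     # (N, N) similarity matrix
    targets = torch.arange(key.size(0), device=key.device)
    # Symmetric InfoNCE: match key->ref and ref->key.
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))
```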

Learning to track without video data.

  • use the large-scale, diverse image dataset LVIS to train OVTrack.

  • propose a data hallucination method: a diffusion model generates a perturbed copy of each static image, so that key/reference pairs for association training can be built without videos (sketched below).
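A hypothetical sketch of the data-hallucination idea, using the Hugging Face diffusers image-to-image pipeline as a stand-in generator; the paper's actual generation procedure, model, and parameters may differ, and the file path below is a placeholder.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an img2img diffusion pipeline (stand-in for the paper's DDPM setup).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

key_image = Image.open("lvis_example.jpg").convert("RGB")  # hypothetical LVIS image

# Generate a mildly perturbed "reference" frame from the static key image.
ref_image = pipe(
    prompt="a photo",        # neutral prompt: keep content, change appearance
    image=key_image,
    strength=0.3,            # low strength -> small appearance perturbation
    guidance_scale=5.0,
).images[0]

# key_image / ref_image now form a (I_key, I_ref) training pair for the
# contrastive association loss sketched above.
```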
