
The PyTorch version of R-C3D: reproduction and extensions

Author: 黑恶歌王 | Published 2019-01-21 19:43

    Update notes, 2019-11-17

    1. About reproducing this project
      To everyone who messaged or commented: sorry, I did not check Jianshu in time to reply. This project is something I tried to reproduce at the beginning of this year, and the author has since kept extending it and making it more robust. The concrete steps are written fairly clearly on the author's GitHub; what I have here is only a rough, simplified description. The last time I debugged this project, in May, the author had already integrated the mAP evaluation into Python (although it still calls MATLAB underneath), which was already much more polished than the version I had debugged earlier.
    2. About the final mAP evaluation at the time.
    threshold=0.500000 
    Elapsed time is 4.047161 seconds.
    AP:0.131 at overlap 0.5 for BaseballPitch
    AP:0.443 at overlap 0.5 for BasketballDunk
    AP:0.095 at overlap 0.5 for Billiards
    AP:0.431 at overlap 0.5 for CleanAndJerk
    AP:0.527 at overlap 0.5 for CliffDiving
    AP:0.298 at overlap 0.5 for CricketBowling
    AP:0.124 at overlap 0.5 for CricketShot
    AP:0.408 at overlap 0.5 for Diving
    AP:0.037 at overlap 0.5 for FrisbeeCatch
    AP:0.080 at overlap 0.5 for GolfSwing
    AP:0.545 at overlap 0.5 for HammerThrow
    AP:0.173 at overlap 0.5 for HighJump
    AP:0.361 at overlap 0.5 for JavelinThrow
    AP:0.599 at overlap 0.5 for LongJump
    AP:0.476 at overlap 0.5 for PoleVault
    AP:0.131 at overlap 0.5 for Shotput
    AP:0.049 at overlap 0.5 for SoccerPenalty
    AP:0.122 at overlap 0.5 for TennisSwing
    AP:0.224 at overlap 0.5 for ThrowDiscus
    AP:0.058 at overlap 0.5 for VolleyballSpiking
    MAP: 0.265574 
    

    The above is the result I finally reproduced with res18. Checking against the author's README, it is actually not far from his numbers. Because I moved on to other work, this project sat idle for a long time, and this final prediction result is all that remains.

    1. Using the mAP computation code
      The tool the author uses here is still an interface called from MATLAB; the only extra requirement for getting this mAP result is that you keep the log from the test run.
    #!/bin/bash
    GPU_ID=0,1
    EX_DIR=thumos14
    export CUDA_VISIBLE_DEVICES=${GPU_ID}   # make GPU_ID actually take effect
    export PYTHONUNBUFFERED=true
    LOG="test_log.txt"
    # EXTRA_ARGS is optional; leave it unset if there is nothing extra to pass.
    time python test_net.py --cuda --net i3d-res50 \
      ${EXTRA_ARGS} \
     2>&1 | tee $LOG
    

    The above is the script I wrote at the time; it tees the output into test_log.txt. Just substitute your own net.

    Then use thumos14_log_analysis.py under evaluation/thumos14:

    import sys, os, errno
    import numpy as np
    import csv
    import json
    import copy
    import argparse
    import subprocess
    
    THIS_DIR = os.path.dirname(os.path.abspath(__file__))
    FRAME_DIR = '/home/simon/THUMOS14'
    META_DIR = os.path.join(FRAME_DIR, 'annotation_')
    
    def nms(dets, thresh=0.4):
        """Pure Python NMS baseline."""
        if len(dets) == 0: return []
        x1 = dets[:, 0]
        x2 = dets[:, 1]
        scores = dets[:, 2]
        lengths = x2 - x1
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            xx1 = np.maximum(x1[i], x1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            inter = np.maximum(0.0, xx2 - xx1)
            ovr = inter / (lengths[i] + lengths[order[1:]] - inter)
            inds = np.where(ovr <= thresh)[0]
            order = order[inds + 1]
        return keep
    
    def generate_classes(meta_dir, split, use_ambiguous=False):
        class_id = {0: 'Background'}
        with open(os.path.join(meta_dir, 'class-index-detection.txt'), 'r') as f:
            lines = f.readlines()
            for l in lines:
                cname = l.strip().split()[-1]
                cid = int(l.strip().split()[0])
                class_id[cid] = cname
            if use_ambiguous:
                class_id[21] = 'Ambiguous'
    
        return class_id
    '''
    def get_segments(data, thresh, framerate):
        segments = []
        vid = 'Background'
        find_next = False
        tmp = {'label' : 0, 'score': 0, 'segment': [0, 0]}
        for l in data:
          # video name and sliding window length
          if "fg_name :" in l:
             vid = l.split('/')[-1]
    
          # frame index, time, confident score
          elif "frames :" in l:
             start_frame=int(l.split()[4])
             end_frame=int(l.split()[5])
             stride = int(l.split()[6].split(']')[0])
    
          elif "activity:" in l:
             label = int(l.split()[1])
             tmp['label'] = label
             find_next = True
    
          elif "im_detect" in l:
             return vid, segments
    
          elif find_next:
             try: 
               left_frame = float(l.split()[0].split('[')[-1])*stride + start_frame
               right_frame = float(l.split()[1])*stride + start_frame
             except:
               left_frame = float(l.split()[1])*stride + start_frame
               right_frame = float(l.split()[2])*stride + start_frame
             if (left_frame < end_frame) and (right_frame <= end_frame):
               left  = left_frame / 25.0
               right = right_frame / 25.0
               try: 
                 score = float(l.split()[-1].split(']')[0])
               except:
                 score = float(l.split()[-2])
               if score > thresh:
                 tmp1 = copy.deepcopy(tmp)
                 tmp1['score'] = score
                 tmp1['segment'] = [left, right]
                 segments.append(tmp1)
             elif (left_frame < end_frame) and (right_frame > end_frame):
                 if (end_frame-left_frame)*1.0/(right_frame-left_frame)>=0:
                     right_frame = end_frame
                     left  = left_frame / 25.0
                     right = right_frame / 25.0
                     try: 
                       score = float(l.split()[-1].split(']')[0])
                     except:
                       score = float(l.split()[-2])
                     if score > thresh:
                         tmp1 = copy.deepcopy(tmp)
                         tmp1['score'] = score
                         tmp1['segment'] = [left, right]
                         segments.append(tmp1)
    
    '''
    def get_segments(data, thresh, framerate):
        segments = []
        vid = 'Background'
        find_next = False
        tmp = {'label' : 0, 'score': 0, 'segment': [0, 0]}
        for l in data:
            # video name and sliding window length
            if "fg_name:" in l:
                vid = l.split('/')[-1]
    
            # frame index, time, confident score
            elif "frames:" in l:
                start_frame=int(l.split()[3])
                end_frame=int(l.split()[4])
                stride = int(l.split()[5].split(']')[0])
    
            elif "activity:" in l:
                label = int(l.split()[1])
                tmp['label'] = label
                find_next = True
    
            elif "im_detect" in l:
                return vid, segments
    
            elif find_next:
                try: 
                    left_frame = float(l.split()[0].split('[')[-1])*stride + start_frame
                    right_frame = float(l.split()[1])*stride + start_frame               
                except:
                    left_frame = float(l.split()[1])*stride + start_frame
                    right_frame = float(l.split()[2])*stride + start_frame
    
                try:
                    score = float(l.split()[-1].split(']')[0])                
                except:
                    score = float(l.split()[-2])    
                                
                if (left_frame >= right_frame):
                    print("???", l)
                    continue
                    
                if right_frame > end_frame:
                    #print("right out", right_frame, end_frame)
                    right_frame = end_frame
                                    
                left  = left_frame / framerate
                right = right_frame / framerate                
                if score > thresh:
                    tmp1 = copy.deepcopy(tmp)
                    tmp1['score'] = score
                    tmp1['segment'] = [left, right]
                    segments.append(tmp1)
                    
    def analysis_log(logfile, thresh, framerate):
        with open(logfile, 'r') as f:
            lines = f.read().splitlines()
        predict_data = []
        res = {}
        for l in lines:
            if "frames:" in l:
                predict_data = []
            predict_data.append(l)
            if "im_detect:" in l:
                vid, segments = get_segments(predict_data, thresh, framerate)
                if vid not in res:
                    res[vid] = []
                res[vid] += segments
        return res
    
    def select_top(segmentations, nms_thresh=0.99999, num_cls=0, topk=0):
      res = {}
      for vid, vinfo in segmentations.items():
        # select most likely classes
        if num_cls > 0:
          ave_scores = np.zeros(21)
          for i in range(1, 21):  # range() for Python 3 (the original used xrange)
            ave_scores[i] = np.sum([d['score'] for d in vinfo if d['label']==i])
          labels = list(ave_scores.argsort()[::-1][:num_cls])
        else:
          labels = list(set([d['label'] for d in vinfo]))
    
        # NMS
        res_nms = []
        for lab in labels:
          nms_in = [d['segment'] + [d['score']] for d in vinfo if d['label'] == lab]
          keep = nms(np.array(nms_in), nms_thresh)
          for i in keep:
            # tmp = {'label':classes[lab], 'score':nms_in[i][2], 'segment': nms_in[i][0:2]}
            tmp = {'label': lab, 'score':nms_in[i][2], 'segment': nms_in[i][0:2]}
            res_nms.append(tmp)
          
        # select topk
        scores = [d['score'] for d in res_nms]
        sortid = np.argsort(scores)[-topk:]
        res[vid] = [res_nms[id] for id in sortid]
      return res
    
    parser = argparse.ArgumentParser(description="log analysis.py")
    parser.add_argument('log_file', type=str, help="test log file path")
    parser.add_argument('--framerate', type=int, help="frame rate of the videos extracted by ffmpeg")  # framerate is required here, otherwise the script errors out; it also accommodates extracting frames at different rates.
    parser.add_argument('--thresh', type=float, default=0.005, help="filter out dets scoring lower than the thresh, default=0.005")
    parser.add_argument('--nms_thresh', type=float, default=0.4, help="nms thresh, default=0.4")
    parser.add_argument('--topk', type=int, default=200, help="select topk dets, default=200")
    parser.add_argument('--num_cls', type=int, default=0, help="select most likely classes, default=0")  
    
    args = parser.parse_args()
    classes = generate_classes(META_DIR+'test', 'test', use_ambiguous=False)
    segmentations = analysis_log(args.log_file, thresh = args.thresh, framerate=args.framerate)
    segmentations = select_top(segmentations, nms_thresh=args.nms_thresh, num_cls=args.num_cls, topk=args.topk)
    
    
    res = {'version': 'VERSION 1.3', 
           'external_data': {'used': True, 'details': 'C3D pre-trained on activity-1.3 training set'},
           'results': {}}
    for vid, vinfo in segmentations.items():
      res['results'][vid] = vinfo
    
    #with open('results.json', 'w') as outfile:
    #  json.dump(res, outfile)
    
    with open('tmp.txt', 'w') as outfile:
      for vid, vinfo in segmentations.items():
        for seg in vinfo:
          outfile.write("{} {} {} {} {}\n".format(vid, seg['segment'][0], seg['segment'][1], int(seg['label']) ,seg['score']))
          
          
    def matlab_eval():
        print('Computing results with the official Matlab eval code')
        path = os.path.join(THIS_DIR, 'Evaluation')
        cmd = 'cp tmp.txt {} && '.format(path)
        cmd += 'cd {} && '.format(path)
        cmd += 'matlab -nodisplay -nodesktop '
        cmd += '-r "dbstop if error; '
        cmd += 'eval_thumos14(); quit;"'
        
        print('Runing: \n {}'.format(cmd))
        status = subprocess.call(cmd, shell=True)
        
    matlab_eval()
    

    The code of that file at the time is shown above, with the relevant comments marked. Refer to the figure below for an example run.


    Figure: example run
    1. That is as far as this work goes for now. Take whatever you find useful from it; if it helps even a little I am glad. Also, Facebook's SlowFast has recently been open-sourced, and that project feels more valuable in practice, so keep an eye on it if you are interested.

    Starting new work on temporal action detection

    This is a new piece of work started recently, based first on the R-C3D.pytorch baseline (apparently ported by someone at UESTC, which really saved the day here, many thanks); see the linked repository for the project itself. The method combines the C3D framework with the faster-rcnn approach, i.e. it is the combination of those two works. Honestly, in this area, once someone proposes a reasonably good approach it often amounts to combining two existing methods into a new one... which says a lot. Our idea is to port our earlier ma-i3d network over to replace the C3D network here and see whether it brings any improvement for temporal detection.

    Difficulties of temporal action detection

    Based on my understanding of temporal action detection over the past couple of days, the current plan is to build a new framework in combination with the R-C3D work. There are two known difficulties. One is that detected segments need precise boundaries: when exactly does an action start, and when does it end? Compared with putting a bounding box around a static object this is somewhat harder (even working frame by frame, you still need to be precise down to the start frame). The other is that recognition has to incorporate temporal information. Looking at the THUMOS 2014 dataset, both the training and the validation parts always contain a feature folder and a video folder; the feature folder holds some pre-computed information, and the video folder holds the raw videos. If we simply cut frames ourselves there will be some issues: both R-C3D and SCNN first have to find proposals, and the more precise these are, the more they help the subsequent classification.
    The datasets currently under consideration are THUMOS 2014, ActivityNet v1.3, AVA (depending on circumstances) and Charades. The training set of THUMOS 2014 is in fact the UCF101 dataset, which is obvious once you download it and have a look. In practice existing datasets all have problems of one kind or another: watermarked videos, or clips so short that even a human has trouble understanding them, let alone a machine... though at least machines do not get tired the way people do.

    Getting the R-C3D baseline running

    In fact, once you understand C3D and have some grasp of faster-rcnn-style methods, you will see this is just two pieces of work put together (proposal + classification). Its network structure is easy to understand (there are write-ups that explain it in more detail than I do): there are two main sub-networks, the first being temporal action detection, which finds the start and end frames and checks whether the segment is a complete action; for this the training set does provide the specified start-time and end-time.

    Problems met with R-C3D.pytorch and the final solutions

    Let's go through the debugging of this project and the concrete problems encountered in detail.
    First I took a broad look at how the work is implemented; I have not studied the exact metrics yet and only debugged until results came out, going by feel. The files contained in the project are shown in Figure 1.

    Figure 1: all the files contained in the project
    Because the dataset lives on a remote server and is rather large, I downloaded only two or three videos for testing. One problem went unsolved at first because the author's write-up is quite unclear and easy to misread for someone using this dataset for the first time; I paid for that and got stuck at the data loading and preprocessing stage.
    Let's first talk about the THUMOS 2014 dataset. We already know its training set is the UCF101 dataset mentioned above, but, crucially, this dataset covers two tasks, recognition and detection. It originally came out of a challenge, which means it was never intended that UCF101 be used for the detection part. Figure 2 below shows the THUMOS 2014 dataset on our server: 1-training is the UCF101 dataset, 101 action classes with 13320 trimmed clips in total. The THUMOS 2014 validation and test sets contain 1010 and 1574 untrimmed videos respectively, which is what sits in the 3-validation and 6-testing folders. For the temporal detection task only the untrimmed videos of 20 action classes carry temporal annotations: 200 validation videos (with 3007 action instances) and 213 test videos (with 3358 action instances). These annotated untrimmed videos are what can be used to train and test a temporal action detection model; in other words, they are the real training and test data for detection. I did not grasp this at first and kept wondering how to fit UCF101 in as training data, which cost quite a bit of time; it really should not have happened, since it is all written in the dataset paper. I should have read the dataset paper first, because diving straight into the R-C3D method did not actually get me very far. Figure 3 below shows some of the files in 3-validation, and Figure 4 the files in 6-testing.
    Figure 2: all the files of the THUMOS 2014 dataset (the names have been changed)
    Figure 3: the files in 3-validation, i.e. the training set we use for temporal detection
    Figure 4: the files in 6-testing, i.e. the test set for temporal detection
    Having understood the task, the natural next step in the full pipeline is to see what the official README says; Figure 5 lays out the preparation work fairly clearly.
    Figure 5: the procedure given in the official README
    For data I simply picked three videos from validation-videos inside 3-validation; the videos in there are named as shown in Figure 6.
    Figure 6: file naming of the training video data
    Next we have to split the input videos, a step that cannot be skipped. It was hard to understand at first: there are 1010 untrimmed videos in validation, so why can we not use all of them for training? Because what we are doing is detection, and as noted above only the untrimmed videos of 20 action classes have temporal annotations; that little data is all the machine gets to learn temporally annotated actions from. It is like revising for an exam with only two books and no other material: read them or not, without them you know nothing, and with them you at least make some progress. Figures 7-9 show some of the concrete files.
    Figure 7: the annotation files
    Figure 8: the contents of the Ambiguous annotations
    The files in Figure 7 all contain temporal annotation information, i.e. the times at which the actions can be detected. Figure 8 then shows, for a specific action, which videos it appears in, and the two numbers after each video are the start and end time of that occurrence. This is not just a guess but verified: Figure 9 is a frame I captured in which the action is clearly visible, and the timeline at the bottom matches the start and end times in Figure 8. The start and end times are counted in seconds, from so many seconds to so many seconds, with decimal precision (fairly coarse, but acceptable; finer precision would not change much, since this is a temporal action and one more still frame does not necessarily help).
    Figure 9: a captured frame, with the video name also visible at the top
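
    To make the annotation format concrete, here is a hedged sketch of how one such line is parsed; it mirrors dataset_label_parser() quoted further below, and the video name and times are made up:

    # Hypothetical line from e.g. BaseballPitch_val.txt: "<video_name> <start_s> <end_s>"
    line = "video_validation_0000051 72.8 76.4"
    vid_name = line.strip().split()[0]        # untrimmed video name
    start_t = float(line.strip().split()[1])  # action start, in seconds
    end_t = float(line.strip().split()[2])    # action end, in seconds
    print(vid_name, start_t, end_t)
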
    With that, let's follow along and read generate_frames.py, which lives under the R-C3D.pytorch/process/thumos2014/ folder. My modified version is given below:
    #coding=utf-8
    # --------------------------------------------------------
    # R-C3D
    # Copyright (c) 2017 Boston University
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Huijuan Xu
    # --------------------------------------------------------
    
    import os
    from util import *
    import json
    import glob
    
    fps = 25
    ext = '.mp4'
    VIDEO_DIR = '/home/simon/ApplyEyeMakeup'
    FRAME_DIR = '/home/simon/THUMOS14'
    
    META_DIR = os.path.join(FRAME_DIR, 'annotation_')
    
    def generate_frame(split):
      SUB_FRAME_DIR = os.path.join(FRAME_DIR, split)
      mkdir(SUB_FRAME_DIR)
      segment = dataset_label_parser(META_DIR+split, split, use_ambiguous=True)
      video_list = segment.keys()
      for vid in video_list:
        filename = os.path.join(VIDEO_DIR, vid+ext)
        outpath = os.path.join(FRAME_DIR, split, vid)
        outfile = os.path.join(outpath, "image_%5d.jpg")
        mkdir(outpath)
        ffmpeg(filename, outfile, fps)
        for framename in os.listdir(outpath):
          resize(os.path.join(outpath, framename))
        frame_size = len(os.listdir(outpath))
        print (filename, fps, frame_size)
    
    generate_frame('val')
    #generate_frame('test')
    #generate_frame('testing')
    
    

    Here VIDEO_DIR holds the raw videos we have, and FRAME_DIR is where the generated frames are stored. This file alone is not enough to follow, because it uses the dataset_label_parser() method defined in util.py in the same folder, so util.py is pasted below as well:

    # --------------------------------------------------------
    # R-C3D
    # Copyright (c) 2017 Boston University
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Huijuan Xu
    # --------------------------------------------------------
    
    import subprocess
    #import shutil
    import os, errno
    import cv2
    from collections import defaultdict
    import shutil
    import matplotlib
    import numpy as np
    
    def dataset_label_parser(meta_dir, split, use_ambiguous=False):
      class_id = defaultdict(int)
      with open(os.path.join(meta_dir, 'class-index-detection.txt'), 'r') as f:
        lines = f.readlines()
        for l in lines:
          cname = l.strip().split()[-1]  # class name
          #print(cname)
          cid = int(l.strip().split()[0])  # class id
          class_id[cname] = cid
          if use_ambiguous:
            class_id['Ambiguous'] = 21
        segment = {}
        #video_instance = set()
      for cname in class_id.keys():
        tmp = '{}_{}.txt'.format(cname, split)
        with open(os.path.join(meta_dir, tmp)) as f:
          lines = f.readlines()
          for l in lines:
            vid_name = l.strip().split()[0]
            start_t = float(l.strip().split()[1])
            end_t = float(l.strip().split()[2])
            #video_instance.add(vid_name)
            # initialize at the first time
            if not vid_name in segment.keys():
              segment[vid_name] = [[start_t, end_t, class_id[cname]]]
            else:
              segment[vid_name].append([start_t, end_t, class_id[cname]])
    
      # sort segments by start_time
      for vid in segment:
        segment[vid].sort(key=lambda x: x[0])
    
      if True:
        keys = list(segment.keys())
        keys.sort()
        with open('segment.txt', 'w') as f:
          for k in keys:
            f.write("{}\n{}\n\n".format(k,segment[k]))
    
      return segment
    
    def get_segment_len(segment):
      segment_len = []
      for vid_seg in segment.values():
        for seg in vid_seg:
          l = seg[1] - seg[0]
          assert l > 0
          segment_len.append(l)
      return segment_len
    
    def mkdir(path):
      try:
        os.makedirs(path)
      except OSError as e:
        if e.errno != errno.EEXIST:
          raise
    
    def rm(path):
      try:
        shutil.rmtree(path)
      except OSError as e:
        if e.errno != errno.ENOENT:
          raise
    
    def ffmpeg(filename, outfile, fps):
      command = ["ffmpeg", "-i", filename, "-q:v", "1", "-r", str(fps), outfile]
      pipe = subprocess.Popen(command, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
      pipe.communicate()
    
    
    def resize(filename, size = (171, 128)):
      img = cv2.imread(filename, 100)
      img2 = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
      cv2.imwrite(filename, img2, [100])
    
    # get segs_len from segments by: segs_len = [ s[1]-s[0] for v in segments.values() for s in v ]
    def kmeans(segs_len, K=5, vis=False):
      # these two imports are missing from the original file
      from sklearn.cluster import KMeans
      import matplotlib.pyplot as plt
      X = np.array(segs_len).reshape(-1, 1)
      cls = KMeans(K).fit(X)
      print( "the cluster centers are: ")
      print( cls.cluster_centers_)
      if vis:
        markers = ['^','x','o','*','+']
        for i in range(K):
          members = cls.labels_ == i
          plt.scatter(X[members,0],X[members,0],s=60,marker=markers[min(i,K-1)],c='b',alpha=0.5)
          plt.title(' ')
          plt.show()
    
    

    Let's first look at the 'class-index-detection.txt' read in util.py; it stores the names of the 20 annotated classes we have:

    7 BaseballPitch
    9 BasketballDunk
    12 Billiards
    21 CleanAndJerk
    22 CliffDiving
    23 CricketBowling
    24 CricketShot
    26 Diving
    31 FrisbeeCatch
    33 GolfSwing
    36 HammerThrow
    40 HighJump
    45 JavelinThrow
    51 LongJump
    68 PoleVault
    79 Shotput
    85 SoccerPenalty
    92 TennisSwing
    93 ThrowDiscus
    97 VolleyballSpiking
    

    However, 'Ambiguous' does not appear in this list. Presumably it only marks that some exaggerated motion is present; since it is not among the classes, the model cannot learn which specific class it is.
    Going back to generate_frames.py: the THUMOS14 folder here is one we create ourselves, and it contains the annotation_val directory; anyone with a bit of programming experience can see this from the code, so I will not belabour it. The annotation folder contains the annotation information from Figure 7 above, used to find the start and end times of these actions.
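
    A hedged sketch of the directory layout that generate_frames.py and util.py above expect (paths follow the FRAME_DIR/META_DIR values in the scripts; the video name is a placeholder):

    # /home/simon/THUMOS14/
    # |-- annotation_val/
    # |   |-- class-index-detection.txt   # the 20 detection class ids/names listed above
    # |   |-- BaseballPitch_val.txt       # one "<video> <start_s> <end_s>" file per class
    # |   |-- ...
    # |   `-- Ambiguous_val.txt           # only read when use_ambiguous=True
    # `-- val/
    #     `-- video_validation_0000051/   # frames extracted by ffmpeg at 25 fps
    #         |-- image_00001.jpg
    #         `-- ...
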
    Right, then read on into dataset_label_parser() in util.py. It also writes out a segment.txt, which holds not only the action annotations we stored but also a trailing number for each segment which, judging from the code, is the class id assigned in dataset_label_parser(). The sampling rate has already been set to 25 above. The frames are then extracted according to the segments and stored in the corresponding folders, which is not a problem.
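
    A hedged reconstruction of how one segment.txt entry is written by dataset_label_parser() above (the video name and numbers are made up):

    # Each entry is the video name on one line, followed by its sorted
    # [start_seconds, end_seconds, class_id] triples on the next line.
    segment = {'video_validation_0000051': [[72.8, 76.4, 7], [81.2, 83.0, 7]]}
    with open('segment_example.txt', 'w') as f:
        for k in sorted(segment.keys()):
            f.write("{}\n{}\n\n".format(k, segment[k]))
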
    The next step is to look at generate_roidb_training.py.

    #coding=utf-8
    # --------------------------------------------------------
    # R-C3D
    # Copyright (c) 2017 Boston University
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Huijuan Xu
    # --------------------------------------------------------
    
    import os
    import copy
    import json
    import pickle
    import subprocess
    import numpy as np
    import cv2
    from util import *
    import glob
    
    FPS = 25
    ext = '.mp4'
    LENGTH = 768
    min_length = 3
    overlap_thresh = 0.7
    STEP = LENGTH / 4
    WINS = [LENGTH * 1]
    FRAME_DIR = '/home/simon/THUMOS14'## it can be changed
    META_DIR = os.path.join(FRAME_DIR, 'annotation_')
    
    print ('Generate Training Segments')
    train_segment = dataset_label_parser(META_DIR+'val', 'val', use_ambiguous=False)
    
    def generate_roi(rois, video, start, end, stride, split):
      tmp = {}
      tmp['wins'] = ( rois[:,:2] - start ) / stride
      tmp['durations'] = tmp['wins'][:,1] - tmp['wins'][:,0]
      tmp['gt_classes'] = rois[:,2]
      tmp['max_classes'] = rois[:,2]
      tmp['max_overlaps'] = np.ones(len(rois))
      tmp['flipped'] = False
      tmp['frames'] = np.array([[0, start, end, stride]])
      tmp['bg_name'] = os.path.join(FRAME_DIR, split, video)
      tmp['fg_name'] = os.path.join(FRAME_DIR, split, video)
      if not os.path.isfile(os.path.join(FRAME_DIR, split, video, 'image_' + str(end-1).zfill(5) + '.jpg')):
        print (os.path.join(FRAME_DIR, split, video, 'image_' + str(end-1).zfill(5) + '.jpg'))
        raise
      return tmp
    
    def generate_roidb(split, segment):
      VIDEO_PATH = os.path.join(FRAME_DIR, split)
      video_list = set(os.listdir(VIDEO_PATH))
      duration = []
      roidb = []
      for vid in segment:
        if vid in video_list:
          length = len(os.listdir(os.path.join(VIDEO_PATH, vid)))
          db = np.array(segment[vid])
          if len(db) == 0:
            continue
          db[:,:2] = db[:,:2] * FPS
    
          for win in WINS:
            # inner of windows
            stride = int(win / LENGTH)
            # Outer of windows
            step = int(stride * STEP)
            # Forward Direction
            for start in range(0, max(1, length - win + 1), step):
              end = min(start + win, length)
              assert end <= length
              rois = db[np.logical_not(np.logical_or(db[:,0] >= end, db[:,1] <= start))]
    
              # Remove duration less than min_length
              if len(rois) > 0:
                duration = rois[:,1] - rois[:,0]
                rois = rois[duration >= min_length]
    
              # Remove overlap less than overlap_thresh
              if len(rois) > 0:
                time_in_wins = (np.minimum(end, rois[:,1]) - np.maximum(start, rois[:,0]))*1.0
                overlap = time_in_wins / (rois[:,1] - rois[:,0])
                assert min(overlap) >= 0
                assert max(overlap) <= 1
                rois = rois[overlap >= overlap_thresh]
    
              # Append data
              if len(rois) > 0:
                rois[:,0] = np.maximum(start, rois[:,0])
                rois[:,1] = np.minimum(end, rois[:,1])
                tmp = generate_roi(rois, vid, start, end, stride, split)
                roidb.append(tmp)
                if USE_FLIPPED:
                   flipped_tmp = copy.deepcopy(tmp)
                   flipped_tmp['flipped'] = True
                   roidb.append(flipped_tmp)
    
            # Backward Direction
            for end in range(length, win-1, - step):
              start = end - win
              assert start >= 0
              rois = db[np.logical_not(np.logical_or(db[:,0] >= end, db[:,1] <= start))]
    
              # Remove duration less than min_length
              if len(rois) > 0:
                duration = rois[:,1] - rois[:,0]
                rois = rois[duration > min_length]
    
              # Remove overlap less than overlap_thresh
              if len(rois) > 0:
                time_in_wins = (np.minimum(end, rois[:,1]) - np.maximum(start, rois[:,0]))*1.0
                overlap = time_in_wins / (rois[:,1] - rois[:,0])
                assert min(overlap) >= 0
                assert max(overlap) <= 1
                rois = rois[overlap > overlap_thresh]
    
              # Append data
              if len(rois) > 0:
                rois[:,0] = np.maximum(start, rois[:,0])
                rois[:,1] = np.minimum(end, rois[:,1])
                tmp = generate_roi(rois, vid, start, end, stride, split)
                roidb.append(tmp)
                if USE_FLIPPED:
                   flipped_tmp = copy.deepcopy(tmp)
                   flipped_tmp['flipped'] = True
                   roidb.append(flipped_tmp)
    
      return roidb
    
    if __name__ == '__main__':
    
        USE_FLIPPED = True      
        train_roidb = generate_roidb('val', train_segment)
        print (len(train_roidb))
        print ("Save dictionary")
        pickle.dump(train_roidb, open('train_data_25fps_flipped.pkl','wb'), pickle.HIGHEST_PROTOCOL)
    
    

    This generates the roidb input data we need. I looked it up: roidb is the format in which faster-rcnn reads its data, and there are articles that explain it in more detail, which is roughly where I learned about it. In short, it is just a data-loading format; even without knowing exactly how it works, the loading step itself is done.
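
    As a quick sanity check, the dumped pickle can be inspected directly; a minimal sketch (the file name matches the one written by the script above):

    import pickle

    with open('train_data_25fps_flipped.pkl', 'rb') as f:
        roidb = pickle.load(f)

    print(len(roidb))          # number of (possibly flipped) training windows
    print(roidb[0].keys())     # the keys filled in by generate_roi() above
    print(roidb[0]['frames'])  # [[0, window_start, window_end, stride]] for this entry
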

    Now for some of the problems that came up along the way. First, torch complained that CUDA was not installed; I brute-forced that by copying everything from the cuda 9.0 directory and hard-linking it.
    Then on my local machine I hit: ImportError: No module named 'numpy.core._multiarray_umath'
    Updating numpy to the latest version with pip install -U numpy fixed it.
    Then run

    python train_net.py
    

    Let's look straight at the results; the problems above all came up while running this training step.
    Finally, training runs and the model checkpoints are saved.


    Figure: training results

    Here we can swap in our own network: the author only provides four, c3d, i3d, res34 and res50, so any other network we want we can add ourselves and see how it does.
    A remaining issue is that training takes a lot of GPU memory: with the default batch size of 1, my single 1070 already uses around 6000 MB; tuning that is left for later.
    Either way, the first step went well; this work will continue and keep being updated.
    Because the dataset is large, after discussing with my senior labmate we took 5 classes for a test training run, picking 5 videos per class, plus one Ambiguous class, so 6 classes in total.
    From Figure 11 I learned that the final loss in train_net.py is the sum of four losses, since there are two sub-networks, the proposal and the classification subnet, and each subnet has two losses constraining the final result. Figure 11 below shows the loss formulation from the paper, and a sketch of how the four terms are combined follows it.


    Figure 11: the paper's explanation of the loss
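
    A hedged sketch of how the four loss terms returned by the model's forward pass (see tdcnn.py below) are typically summed in the training loop; the helper name is mine and the exact weighting in train_net.py may differ:

    def total_loss(rpn_loss_cls, rpn_loss_twin, RCNN_loss_cls, RCNN_loss_twin):
        # rpn_* come from the proposal subnet, RCNN_* from the classification subnet;
        # .mean() collapses the extra dimension added by torch.unsqueeze in tdcnn.py.
        return (rpn_loss_cls.mean() + rpn_loss_twin.mean()
                + RCNN_loss_cls.mean() + RCNN_loss_twin.mean())
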

    A CUDA error (59) appeared; it turned out to be a label problem, fixed by renumbering the labels from 1 up to the last class.
    Evaluation is not finished yet. It needs information pulled out of the log, so I have to work out how test.log should be saved and in what format (a .txt file, at any rate).
    So we just dump the log with the script and generate the json from it; this PyTorch R-C3D is still a bit unfinished. Skimming the log_analysis file, it picks keywords out of the log to produce the json we want, so it should not be a big problem; once the log finishes printing in about half an hour we can evaluate it. For evaluating the json we can use the Python version of the evaluation directly (the original uses the MATLAB version); this should be doable by this afternoon.
    The problems above are solved and training on THUMOS 2014 has now officially started. Let's first see whether the results can be reproduced; if so, there is no major problem. The training results are expected in about two and a half days.

    The data preprocessing pipeline

    I went through the data preprocessing once more. Since training reads data in the roidb format, I studied it and, taking hints from the corresponding faster-rcnn format, worked out what each key means; a sketch of the keys is given below.
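
    A hedged summary of those keys, read off generate_roi() in generate_roidb_training.py above (the wording is mine, not the repo's):

    roidb_keys = {
        'wins':         'per-GT [start, end] inside this window, relative to the window start and divided by stride',
        'durations':    'length of each of those ground-truth segments',
        'gt_classes':   'ground-truth class id of each segment',
        'max_classes':  'same as gt_classes here (each GT overlaps itself best)',
        'max_overlaps': 'all ones here, since these entries are the ground truth',
        'flipped':      'whether this entry is the temporally flipped copy',
        'frames':       '[[0, window_start, window_end, stride]] in frame indices',
        'bg_name':      'frame directory of the source video',
        'fg_name':      'frame directory of the source video',
    }
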
    The K for the anchors here is set to 4, though it can be changed freely; the paper mentions ten scale options, 2, 4, 5, 6, 8, 9, 10, 12, 14 and 16, although not all of them are actually used.
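
    To make the scales concrete, here is a small sketch mirroring generate_anchors() quoted later in this post; base_size=8 means the reference window spans 8 frames, and each scale stretches that window around the same centre (the helper name here is mine):

    import numpy as np

    def anchors_for_scales(base_size=8, scales=(2, 4, 5, 6, 8, 9, 10, 12, 14, 16)):
        base = np.array([1, base_size]) - 1       # reference window [0, base_size-1]
        l = base[1] - base[0] + 1                 # its length
        ctr = base[0] + 0.5 * (l - 1)             # its centre
        ls = l * np.asarray(scales, dtype=float)  # scaled lengths
        return np.stack([ctr - 0.5 * (ls - 1), ctr + 0.5 * (ls - 1)], axis=1)

    print(anchors_for_scales())  # one [start, end] anchor per scale, all centred at 3.5
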

    Current difficulties and possible optimizations

    Going through everything from the start, the main question is whether different datasets can share the same processing. In this version, generate_frames handles the three datasets identically apart from the sampling rate, essentially following the ActivityNet processing. By the reasoning that the broader case can constrain the narrower one, this should be fine; we can wait for the evaluation results and revise the processing if it turns out to be inadequate.
    For resnet50 I used the Kinetics-pretrained model from my earlier 3D-ResNet50-Pytorch work, and to my surprise it slotted right in. Presumably only the per-layer resnet50 weights are taken, which keeps the loss from coming out as NaN (as it would if no weights were loaded when computing the loss).
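
    A hedged sketch of that kind of partial weight loading, in the same spirit as the load_state_dict filtering used in c3d.py below (the function name and path handling here are mine):

    import torch

    def load_matching_weights(model, weight_path):
        """Copy only the pretrained tensors whose names and shapes match the model."""
        state_dict = torch.load(weight_path, map_location='cpu')
        model_dict = model.state_dict()
        matched = {k: v for k, v in state_dict.items()
                   if k in model_dict and v.shape == model_dict[k].shape}
        model_dict.update(matched)
        model.load_state_dict(model_dict)
        return len(matched)  # how many tensors were actually loaded
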

    2019.4.22.

    The task was actually restarted a week ago. The base project is by now very complete; it is only that the numbers are not pretty, but the engineering richness is worth learning from.
    I still have not read the faster-rcnn paper, even though the fundamentals here follow it; I should go through it carefully and understand it.

    Understanding the details from the code

    Now we dig into the details of each part of the network, starting with the C3D part.
    First, the code of c3d.py:

    import torch.nn as nn
    import torch
    from model.tdcnn.tdcnn import _TDCNN
    import math
    
    def make_layers(cfg, batch_norm=False):
        layers = []
        in_channels = 3
        maxpool_count = 0
        for v in cfg:
            if v == 'M':
                maxpool_count += 1
                if maxpool_count==1:
                    layers += [nn.MaxPool3d(kernel_size=(1,2,2), stride=(1,2,2))]
                elif maxpool_count==5:
                    layers += [nn.MaxPool3d(kernel_size=(2,2,2), stride=(2,2,2), padding=(0,1,1))]
                else:
                    layers += [nn.MaxPool3d(kernel_size=(2,2,2), stride=(2,2,2))]
            else:
                conv3d = nn.Conv3d(in_channels, v, kernel_size=(3,3,3), padding=(1,1,1))
                if batch_norm:
                    layers += [conv3d, nn.BatchNorm3d(v), nn.ReLU(inplace=True)]
                else:
                    layers += [conv3d, nn.ReLU(inplace=True)]
                in_channels = v
        return nn.Sequential(*layers)
    
    
    cfg = {
        'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    }  # The C3D config: iterate over the entries; each max-pool marker 'M' adds a max-pooling layer (the first and fifth pools use special kernel/stride settings), and every other entry is a 3x3x3 conv.
    
    class C3D(nn.Module):  # a standard C3D network, built with make_layers above; presumably adapted from somewhere else.
        """
        The C3D network as described in [1].
            References
            ----------
           [1] Tran, Du, et al. "Learning spatiotemporal features with 3d convolutional networks."
           Proceedings of the IEEE international conference on computer vision. 2015.
        """
    
        def _initialize_weights(self):
            for m in self.modules():
                if isinstance(m, nn.Conv3d):
                    n = m.kernel_size[0] * m.kernel_size[1] * m.kernel_size[2] * m.out_channels
                    m.weight.data.normal_(0, math.sqrt(2. / n))
                    if m.bias is not None:
                        m.bias.data.zero_()
                elif isinstance(m, nn.BatchNorm3d):
                    m.weight.data.fill_(1)
                    m.bias.data.zero_()
                elif isinstance(m, nn.Linear):
                    m.weight.data.normal_(0, 0.01)
                    m.bias.data.zero_()
    
        def __init__(self):
            super(C3D, self).__init__()
            self.features = make_layers(cfg['A'], batch_norm=False)
            self.classifier = nn.Sequential(
                nn.Linear(512*1*4*4, 4096),
                nn.ReLU(True),
                nn.Dropout(inplace=False),
                nn.Linear(4096, 4096),
                nn.ReLU(True),
                nn.Dropout(inplace=False),
            nn.Linear(4096, 487),  # 487 can be changed; it should really be driven by a num_classes argument.
            )
            self._initialize_weights()
    
        def forward(self, x):
            x = self.features(x)  # the feature extractor built by make_layers
            x = x.view(x.size(0), -1)
            x = self.classifier(x)  # the classifier head; this is only the base network, and the number of classes is driven by the tdcnn part below via the cfg file.
            return x
    class c3d_tdcnn(_TDCNN):
        def __init__(self, pretrained=False):
            self.model_path = 'data/pretrained_model/activitynet_iter_30000_3fps-caffe.pth' #ucf101-caffe.pth' #c3d_sports1M.pth' #activitynet_iter_30000_3fps-caffe.pth
            self.dout_base_model = 512  # 512 because the C3D backbone outputs a 512 x L/8 x H/16 x W/16 feature map, so the depth fed to the RPN is 512.
            self.pretrained = pretrained
            _TDCNN.__init__(self)
    
        def _init_modules(self):
            c3d = C3D()  # build the C3D backbone used for feature extraction
            if self.pretrained:  # reading the code alongside the paper: load the pretrained C3D model directly
                print("Loading pretrained weights from %s" %(self.model_path))
                state_dict = torch.load(self.model_path)
                c3d.load_state_dict({k:v for k,v in state_dict.items() if k in c3d.state_dict()})
    
            # Using conv1 -> conv5b, not using the last maxpool (as the comment says; _modules.values() simply collects the corresponding layers)
            self.RCNN_base = nn.Sequential(*list(c3d.features._modules.values())[:-1])  # stack the backbone layers, excluding the final maxpool
            # Using fc6
            self.RCNN_top = nn.Sequential(*list(c3d.classifier._modules.values())[:-4])  # keep only fc6, i.e. nn.Linear(512*1*4*4, 4096) with its ReLU and Dropout
            # Fix the layers before pool2:
            for layer in range(6):
                for p in self.RCNN_base[layer].parameters(): p.requires_grad = False  # no gradients needed: these early layers stay frozen
    
            # not using the last maxpool layer
            self.RCNN_cls_score = nn.Linear(4096, self.n_classes)  # the two heads used by tdcnn: class scores here, twin (start/end) regression below
            self.RCNN_twin_pred = nn.Linear(4096, 2 * self.n_classes)      
    
        def _head_to_tail(self, pool5):  # run the pooled features through to the end of fc6
            pool5_flat = pool5.view(pool5.size(0), -1)  # these helpers exist to serve tdcnn
            fc6 = self.RCNN_top(pool5_flat)
    
            return fc6
    

    Below is the code of tdcnn.py, i.e. the part carried over from faster-rcnn; the tdcnn should be understood as the part that extracts the proposals.

    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.autograd import Variable
    import torchvision.models as models
    from torch.autograd import Variable
    import numpy as np
    from model.utils.config import cfg
    from model.rpn.rpn import _RPN
    from model.roi_temporal_pooling.modules.roi_temporal_pool import _RoITemporalPooling
    from model.rpn.proposal_target_layer_cascade import _ProposalTargetLayer
    import time
    import pdb
    from model.utils.net_utils import _smooth_l1_loss
    from model.utils.non_local_dot_product import NONLocalBlock3D
    
    DEBUG = False
    
    class _TDCNN(nn.Module):
        """ faster RCNN """
        def __init__(self):
            super(_TDCNN, self).__init__()
            #self.classes = classes
            self.n_classes = cfg.NUM_CLASSES
            # loss
            self.RCNN_loss_cls = 0
            self.RCNN_loss_twin = 0
    
            # define rpn
            self.RCNN_rpn = _RPN(self.dout_base_model)  # the base features are fed into the RPN
            self.RCNN_proposal_target = _ProposalTargetLayer(self.n_classes)
            self.RCNN_roi_temporal_pool = _RoITemporalPooling(cfg.POOLING_LENGTH, cfg.POOLING_HEIGHT, cfg.POOLING_WIDTH, cfg.DEDUP_TWINS)
            if cfg.USE_ATTENTION:  # bring in the non-local block
                self.RCNN_attention = NONLocalBlock3D(self.dout_base_model, inter_channels=self.dout_base_model)
            
        def prepare_data(self, video_data):
            return video_data
    
        def forward(self, video_data, gt_twins):
            batch_size = video_data.size(0)
    
            gt_twins = gt_twins.data
            # prepare data
            video_data = self.prepare_data(video_data)
            # feed image data to base model to obtain base feature map
            base_feat = self.RCNN_base(video_data)
            # feed base feature map tp RPN to obtain rois
            # rois, [rois_score], rpn_cls_prob, rpn_twin_pred, self.rpn_loss_cls, self.rpn_loss_twin, self.rpn_label, self.rpn_loss_mask
            rois, _, _, rpn_loss_cls, rpn_loss_twin, _, _ = self.RCNN_rpn(base_feat, gt_twins)
    
            # if it is training phase, then use ground truth twins for refining
            if self.training:
                roi_data = self.RCNN_proposal_target(rois, gt_twins)
                rois, rois_label, rois_target, rois_inside_ws, rois_outside_ws = roi_data
    
                rois_label = Variable(rois_label.view(-1).long())
                rois_target = Variable(rois_target.view(-1, rois_target.size(2)))
                rois_inside_ws = Variable(rois_inside_ws.view(-1, rois_inside_ws.size(2)))
                rois_outside_ws = Variable(rois_outside_ws.view(-1, rois_outside_ws.size(2)))
            else:
                rois_label = None
                rois_target = None
                rois_inside_ws = None
                rois_outside_ws = None
                rpn_loss_cls = 0
                rpn_loss_twin = 0
    
            rois = Variable(rois)
            # do roi pooling based on predicted rois
            if cfg.POOLING_MODE == 'pool':
                pooled_feat = self.RCNN_roi_temporal_pool(base_feat, rois.view(-1,3))               
           
            if cfg.USE_ATTENTION:
                pooled_feat = self.RCNN_attention(pooled_feat) 
            # feed pooled features to top model
            pooled_feat = self._head_to_tail(pooled_feat)        
            # compute twin offset, twin_pred will be (128, 402)
            twin_pred = self.RCNN_twin_pred(pooled_feat)
    
            if self.training:
                # select the corresponding columns according to roi labels, twin_pred will be (128, 2)
                twin_pred_view = twin_pred.view(twin_pred.size(0), int(twin_pred.size(1) / 2), 2)
                twin_pred_select = torch.gather(twin_pred_view, 1, rois_label.view(rois_label.size(0), 1, 1).expand(rois_label.size(0), 1, 2))
                twin_pred = twin_pred_select.squeeze(1)
    
            # compute object classification probability
            cls_score = self.RCNN_cls_score(pooled_feat)
            cls_prob = F.softmax(cls_score, dim=1)
    
            if DEBUG:
                print("tdcnn.py--base_feat.shape {}".format(base_feat.shape))
                print("tdcnn.py--rois.shape {}".format(rois.shape))
                print("tdcnn.py--tdcnn_tail.shape {}".format(pooled_feat.shape))
                print("tdcnn.py--cls_score.shape {}".format(cls_score.shape))
                print("tdcnn.py--twin_pred.shape {}".format(twin_pred.shape))
                
            RCNN_loss_cls = 0
            RCNN_loss_twin = 0
    
            if self.training:
                # classification loss
                RCNN_loss_cls = F.cross_entropy(cls_score, rois_label)
    
                # bounding box regression L1 loss
                RCNN_loss_twin = _smooth_l1_loss(twin_pred, rois_target, rois_inside_ws, rois_outside_ws)
    
                # RuntimeError caused by mGPUs and higher pytorch version: https://github.com/jwyang/faster-rcnn.pytorch/issues/226
                rpn_loss_cls = torch.unsqueeze(rpn_loss_cls, 0)
                rpn_loss_twin = torch.unsqueeze(rpn_loss_twin, 0)
                RCNN_loss_cls = torch.unsqueeze(RCNN_loss_cls, 0)
                RCNN_loss_twin = torch.unsqueeze(RCNN_loss_twin, 0)
                
            cls_prob = cls_prob.view(batch_size, rois.size(1), -1)
            twin_pred = twin_pred.view(batch_size, rois.size(1), -1)
    
            if self.training:        
                return rois, cls_prob, twin_pred, rpn_loss_cls, rpn_loss_twin, RCNN_loss_cls, RCNN_loss_twin, rois_label
            else:
                return rois, cls_prob, twin_pred            
    
        def _init_weights(self):
            def normal_init(m, mean, stddev, truncated=False):
                """
                weight initalizer: truncated normal and random normal.
                """
                # x is a parameter
                if truncated:
                    m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean) # not a perfect approximation
                else:
                    m.weight.data.normal_(mean, stddev)
                    m.bias.data.zero_()
            self.RCNN_rpn.init_weights()
            normal_init(self.RCNN_cls_score, 0, 0.01, cfg.TRAIN.TRUNCATED)
            normal_init(self.RCNN_twin_pred, 0, 0.001, cfg.TRAIN.TRUNCATED)
    
        def create_architecture(self):
            self._init_modules()
            self._init_weights()
    

    The code above is really the combination of the two parts: the C3D base network extracts the features, and the TDCNN part borrowed from faster-rcnn takes them to extract proposals. Looking down the forward pass, the RPN is run first.
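
    A condensed sketch of the data flow in _TDCNN.forward() above (the right-hand notes are mine):

    # base_feat = RCNN_base(video_data)                    # backbone (C3D/i3d/resnet) features
    # rois, ... = RCNN_rpn(base_feat, gt_twins)            # temporal proposal subnet
    # pooled    = RCNN_roi_temporal_pool(base_feat, rois)  # RoI temporal pooling per proposal
    # fc6       = _head_to_tail(pooled)                    # backbone's fc head
    # cls_prob  = softmax(RCNN_cls_score(fc6))             # activity classification
    # twin_pred = RCNN_twin_pred(fc6)                      # start/end boundary regression
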
    The code of rpn.py is as follows.

    from __future__ import absolute_import
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.autograd import Variable
    
    from model.utils.config import cfg
    from .proposal_layer import _ProposalLayer
    from .anchor_target_layer import _AnchorTargetLayer
    from model.utils.net_utils import _smooth_l1_loss, mask_rpn_losses
    
    import numpy as np
    import math
    import pdb
    import time
    
    DEBUG=False
    
    class _RPN(nn.Module):
        """ region proposal network """
        def __init__(self, din, out_scores=False):  # the RPN is a standalone sub-network, which makes it a natural entry point for later modifications
            super(_RPN, self).__init__()
            
            self.din = din  # get depth of input feature map, e.g., 512
            self.anchor_scales = cfg.ANCHOR_SCALES  # anchor scales from the config
            self.feat_stride = cfg.FEAT_STRIDE[0]
            self.out_scores = out_scores  # whether to also return proposal scores
            self.mask_upsample_rate = 1
    
            # define the convrelu layers processing input feature map
            self.RPN_Conv1 = nn.Conv3d(self.din, 512, kernel_size=(3, 3, 3), stride=(1, 2, 2), padding=(1, 1, 1), bias=True)  # for C3D this 3x3x3 conv (plus the pooling below) is enough; the second conv is there for deeper backbones such as resnet
            self.RPN_Conv2 = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 2, 2), padding=(1, 1, 1), bias=True)
            self.RPN_output_pool = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))
    
            # define bg/fg classifcation score layer
            self.nc_score_out = len(self.anchor_scales) * 2 # 2(bg/fg) * 10 (anchors), i.e. background/foreground scores
            self.RPN_cls_score = nn.Conv3d(512, self.nc_score_out, 1, 1, 0)  # a 1x1 conv that produces the fg/bg scores
    
            # define anchor twin offset prediction layer
            self.nc_twin_out = len(self.anchor_scales) * 2 # 2(coords) * 10 (anchors)
            self.RPN_twin_pred = nn.Conv3d(512, self.nc_twin_out, 1, 1, 0)
    
            # define proposal layer
            self.RPN_proposal = _ProposalLayer(self.feat_stride, self.anchor_scales, self.out_scores)  # the proposal step, roughly as described in the paper: proposals get a fg/bg score and boundary regression
    
            # define anchor target layer
            self.RPN_anchor_target = _AnchorTargetLayer(self.feat_stride, self.anchor_scales)  # assigns labels to the anchors; see the forward pass below
    
            self.rpn_loss_cls = 0
            self.rpn_loss_twin = 0
            self.rpn_loss_mask = 0
    
        @staticmethod
        def reshape(x, d):##reshape
            input_shape = x.size()
            x = x.view(
                input_shape[0],
                int(d),
                int(float(input_shape[1] * input_shape[2]) / float(d)),
                input_shape[3],
                input_shape[4]
            )
            return x
    
        def forward(self, base_feat, gt_twins):
    
            batch_size = base_feat.size(0)
    
            # return feature map after convrelu layer
            rpn_conv1 = F.relu(self.RPN_Conv1(base_feat), inplace=True)
            rpn_conv2 = F.relu(self.RPN_Conv2(rpn_conv1), inplace=True)
            rpn_output_pool = self.RPN_output_pool(rpn_conv2) # (1,512,96,1,1)
    
            # get rpn classification score
            rpn_cls_score = self.RPN_cls_score(rpn_output_pool)
    
            rpn_cls_score_reshape = self.reshape(rpn_cls_score, 2)  # reshape via the static helper above
            #print("rpn_cls_score_reshape: {}".format(rpn_cls_score_reshape.shape))
            rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape, dim=1)
            rpn_cls_prob = self.reshape(rpn_cls_prob_reshape, self.nc_score_out)
            #print("rpn_cls_prob: {}".format(rpn_cls_prob.shape))
    
            # get rpn offsets to the anchor twins
            rpn_twin_pred = self.RPN_twin_pred(rpn_output_pool)
            #print("rpn_twin_pred: {}".format(rpn_twin_pred.shape))
    
            # proposal layer
            cfg_key = 'TRAIN' if self.training else 'TEST'
    
            #rois = self.RPN_proposal((rpn_cls_prob.data, rpn_twin_pred.data, cfg_key))
            if self.out_scores:
                rois, rois_score = self.RPN_proposal((rpn_cls_prob.data, rpn_twin_pred.data, cfg_key))
            else:
                rois = self.RPN_proposal((rpn_cls_prob.data, rpn_twin_pred.data, cfg_key))
    
            self.rpn_loss_cls = 0
            self.rpn_loss_twin = 0
            self.rpn_loss_mask = 0
            self.rpn_label = None
    
            # generating training labels and build the rpn loss
            if self.training:
                assert gt_twins is not None
                # rpn_data = [label_targets, twin_targets, twin_inside_weights, twin_outside_weights]
                # label_targets: (batch_size, 1, A * length, height, width)
                # twin_targets: (batch_size, A*2, length, height, width), the same as twin_inside_weights and twin_outside_weights
                rpn_data = self.RPN_anchor_target((rpn_cls_score.data, gt_twins))
    
                # compute classification loss
                rpn_cls_score = rpn_cls_score_reshape.permute(0, 2, 3, 4, 1).contiguous().view(batch_size, -1, 2)
                self.rpn_label = rpn_data[0].view(batch_size, -1)
    
                rpn_keep = Variable(self.rpn_label.view(-1).ne(-1).nonzero().view(-1))
                rpn_cls_score = torch.index_select(rpn_cls_score.view(-1,2), 0, rpn_keep)
                self.rpn_label = torch.index_select(self.rpn_label.view(-1), 0, rpn_keep.data)
                self.rpn_label = Variable(self.rpn_label.long())
                self.rpn_loss_cls = F.cross_entropy(rpn_cls_score, self.rpn_label)
                fg_cnt = torch.sum(self.rpn_label.data.ne(0))
    
                rpn_twin_targets, rpn_twin_inside_weights, rpn_twin_outside_weights = rpn_data[1:]
    
                # compute twin regression loss
                rpn_twin_inside_weights = Variable(rpn_twin_inside_weights)
                rpn_twin_outside_weights = Variable(rpn_twin_outside_weights)
                rpn_twin_targets = Variable(rpn_twin_targets)
    
                self.rpn_loss_twin = _smooth_l1_loss(rpn_twin_pred, rpn_twin_targets, rpn_twin_inside_weights,
                                                                rpn_twin_outside_weights, sigma=3, dim=[1,2,3,4])
    
            if self.out_scores:
                return rois, rois_score, rpn_cls_prob, rpn_twin_pred, self.rpn_loss_cls, self.rpn_loss_twin, self.rpn_label, self.rpn_loss_mask
            else:
                return rois, rpn_cls_prob, rpn_twin_pred, self.rpn_loss_cls, self.rpn_loss_twin, self.rpn_label, self.rpn_loss_mask
    
        def init_weights(self):
            def normal_init(m, mean, stddev, truncated=False):
                """
                weight initalizer: truncated normal and random normal.
                """
                # x is a parameter
                if truncated:
                    m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean) # not a perfect approximation
                else:
                    m.weight.data.normal_(mean, stddev)
                    m.bias.data.zero_()
    
            normal_init(self.RPN_Conv1, 0, 0.01, cfg.TRAIN.TRUNCATED)
            normal_init(self.RPN_Conv2, 0, 0.01, cfg.TRAIN.TRUNCATED)
            normal_init(self.RPN_cls_score, 0, 0.01, cfg.TRAIN.TRUNCATED)
            normal_init(self.RPN_twin_pred, 0, 0.01, cfg.TRAIN.TRUNCATED)
    
        def create_architecture(self):
            self._init_modules()
            self.init_weights()
            
        def generate_mask_label(self, gt_twins, feat_len):
            """ 
            gt_twins will be (batch_size, n, 3), where each gt will be (x1, x2, class_id)
            # feat_len is the length of mask-task features, self.feat_stride * feat_len = video_len
            # according: self.feat_stride, and upsample_rate
            # mask will be (batch_size, feat_len), -1 -- ignore, 1 -- fg, 0 -- bg
            """
            batch_size = gt_twins.size(0)
            mask_label = torch.zeros(batch_size, feat_len).type_as(gt_twins)
            for b in range(batch_size):
               single_gt_twins = gt_twins[b]
               single_gt_twins[:, :2] = (single_gt_twins[:, :2] / self.feat_stride).int()
               twins_start = single_gt_twins[:, 0]
               _, indices = torch.sort(twins_start)
               single_gt_twins = torch.index_select(single_gt_twins, 0, indices).long().cpu().numpy()
    
               starts = np.minimum(np.maximum(0, single_gt_twins[:,0]), feat_len-1)
               ends = np.minimum(np.maximum(0, single_gt_twins[:,1]), feat_len)
               for x in zip(starts, ends):
                  mask_label[b, x[0]:x[1]+1] = 1
    
            return mask_label
    
    

    The proposal layer involved here also needs a look; its overall job is to generate proposals. As for what a proposal actually is, for now think of it as a temporal segment with a certain length.
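
    Concretely, judging from _ProposalLayer.forward() below, each RoI it outputs is a 3-tuple; a hedged example (the numbers are made up):

    # output has shape (batch_size, post_nms_topN, 3);
    # each row is (batch_index, start, end) along the temporal axis of the input clip.
    roi = [0.0, 96.0, 288.0]  # batch item 0, a temporal window from frame 96 to frame 288
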

    from __future__ import absolute_import
    # --------------------------------------------------------
    # R-C3D
    # Copyright (c) 2017 Boston University
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Huijuan Xu
    # --------------------------------------------------------
    # --------------------------------------------------------
    # Reorganized and modified by Shiguang Wang
    # --------------------------------------------------------
    
    import torch
    import torch.nn as nn
    import numpy as np
    import math
    import yaml
    from model.utils.config import cfg
    from .generate_anchors import generate_anchors
    from .twin_transform import twin_transform_inv, clip_twins
    from model.nms.nms_wrapper import nms
    
    import pdb
    
    DEBUG = False
    
    class _ProposalLayer(nn.Module):
        """
        Outputs object detection proposals by applying estimated bounding-box
        transformations to a set of regular twins (called "anchors").
        """
    
        def __init__(self, feat_stride, scales, out_scores=False):
            super(_ProposalLayer, self).__init__()
    
            self._feat_stride = feat_stride
            self._anchors = torch.from_numpy(generate_anchors(base_size=feat_stride, scales=np.array(scales))).float()  # generate the temporal anchors
            self._num_anchors = self._anchors.size(0)
            self._out_scores = out_scores
            # TODO: add scale_ratio for video_len ??
            # rois blob: holds R regions of interest, each is a 3-tuple
            # (n, x1, x2) specifying an video batch index n and a
            # rectangle (x1, x2)
            # top[0].reshape(1, 3)
            #
            # # scores blob: holds scores for R regions of interest
            # if len(top) > 1:
            #     top[1].reshape(1, 1, 1, 1)
    
        def forward(self, input):
    
            # Algorithm:
            #
            # for each (H, W) location i
            #   generate A anchor twins centered on cell i
            #   apply predicted twin deltas at cell i to each of the A anchors
            # clip predicted twins to video
            # remove predicted twins with either height or width < threshold
            # sort all (proposal, score) pairs by score from highest to lowest
            # take top pre_nms_topN proposals before NMS
            # apply NMS with threshold 0.7 to remaining proposals
            # take after_nms_topN proposals after NMS
            # return the top proposals (-> RoIs top, scores top)
    
    
            # the first set of _num_anchors channels are bg probs
            # the second set are the fg probs
            scores = input[0][:, self._num_anchors:, :, :, :]
            twin_deltas = input[1]
            cfg_key = input[2]
            pre_nms_topN  = cfg[cfg_key].RPN_PRE_NMS_TOP_N
            post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N
            nms_thresh    = cfg[cfg_key].RPN_NMS_THRESH
            min_size      = cfg[cfg_key].RPN_MIN_SIZE
    
            # 1. Generate proposals from twin deltas and shifted anchors
            length, height, width = scores.shape[-3:]
    
            if DEBUG:
                print( 'score map size: {}'.format(scores.shape))
    
            batch_size = twin_deltas.size(0)
    
            # Enumerate all shifts
            shifts = np.arange(0, length) * self._feat_stride
            shifts = torch.from_numpy(shifts.astype(float))
            shifts = shifts.contiguous().type_as(scores)
    
            # Enumerate all shifted anchors:
            #
            # add A anchors (1, A, 2) to
            # cell K shifts (K, 1, 1) to get
            # shift anchors (K, A, 2)
            # reshape to (1, K*A, 2) shifted anchors
            # expand to (batch_size, K*A, 2)
            A = self._num_anchors
            K = shifts.shape[0]
            self._anchors = self._anchors.type_as(scores)
            anchors = self._anchors.view(1, A, 2) + shifts.view(K, 1, 1)
            anchors = anchors.view(1, K * A, 2).expand(batch_size, K * A, 2)
            # Transpose and reshape predicted twin transformations to get them
            # into the same order as the anchors:
            #
            # twin deltas will be (batch_size, 2 * A, L, H, W) format
            # transpose to (batch_size, L, H, W, 2 * A)
            # reshape to (batch_size, L * H * W * A, 2) where rows are ordered by (l, h, w, a)
            # in slowest to fastest order
            twin_deltas = twin_deltas.permute(0, 2, 3, 4, 1).contiguous()
            twin_deltas = twin_deltas.view(batch_size, -1, 2)
    
            # Same story for the scores:
            #
            # scores are (batch_size, A, L, H, W) format
            # transpose to (batch_size, L, H, W, A)
            # reshape to (batch_size, L * H * W * A) where rows are ordered by (l, h, w, a)
            scores = scores.permute(0, 2, 3, 4, 1).contiguous()
            scores = scores.view(batch_size, -1)
    
            # Convert anchors into proposals via twin transformations
            proposals = twin_transform_inv(anchors, twin_deltas, batch_size)
    
            # 2. clip predicted wins to video
            proposals = clip_twins(proposals, length * self._feat_stride, batch_size)
    
            # 3. remove predicted twins with either length < threshold
            # assign the score to 0 if it's non keep.
            no_keep = self._filter_twins_reverse(proposals, min_size)
            scores[no_keep] = 0
            
            scores_keep = scores
            proposals_keep = proposals
            # sorted in descending order
            _, order = torch.sort(scores_keep, 1, True)
     
            #print ("scores_keep {}".format(scores_keep.shape))
            #print ("proposals_keep {}".format(proposals_keep.shape))
            #print ("order {}".format(order.shape))
    
            output = scores.new(batch_size, post_nms_topN, 3).zero_()
    
            if self._out_scores:
                output_score = scores.new(batch_size, post_nms_topN, 2).zero_()
    
            for i in range(batch_size):
    
                proposals_single = proposals_keep[i]
                scores_single = scores_keep[i]
    
                # 4. sort all (proposal, score) pairs by score from highest to lowest
                # 5. take top pre_nms_topN (e.g. 6000)
                order_single = order[i]
    
                if pre_nms_topN > 0 and pre_nms_topN < scores_keep.numel():
                    order_single = order_single[:pre_nms_topN]
    
                proposals_single = proposals_single[order_single, :]
                scores_single = scores_single[order_single].view(-1,1)
    
                # 6. apply nms (e.g. threshold = 0.7)
                # 7. take after_nms_topN (e.g. 300)
                # 8. return the top proposals (-> RoIs top)
    
                keep_idx_i = nms(torch.cat((proposals_single, scores_single), 1), nms_thresh, force_cpu=not cfg.USE_GPU_NMS)
                keep_idx_i = keep_idx_i.long().view(-1)
    
                if post_nms_topN > 0:
                    keep_idx_i = keep_idx_i[:post_nms_topN]
                proposals_single = proposals_single[keep_idx_i, :]
                scores_single = scores_single[keep_idx_i, :]
    
                # padding 0 at the end.
                num_proposal = proposals_single.size(0)
                #print ("num_proposal: ", num_proposal)
                output[i,:,0] = i
                output[i,:num_proposal,1:] = proposals_single
    
                if self._out_scores:
                    output_score[i, :, 0] = i
                    output_score[i, :num_proposal, 1] = scores_single
    
            if self._out_scores:
                return output, output_score
            else:
                return output
    
        def backward(self, top, propagate_down, bottom):
            """This layer does not propagate gradients."""
            pass
    
        def reshape(self, bottom, top):
            """Reshaping happens during the call to forward."""
            pass
    
        def _filter_twins_reverse(self, twins, min_size):
            """get the keep index of all twins with length smaller than min_size. 
            twins will be (batch_size, C, 2), keep will be (batch_size, C)"""
            ls = twins[:, :, 1] - twins[:, :, 0] + 1
            no_keep = (ls < min_size)
            return no_keep
    

    The generate_anchors used inside the proposal layer is also worth a closer look.

    # --------------------------------------------------------
    # R-C3D
    # Copyright (c) 2017 Boston University
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Huijuan Xu
    # --------------------------------------------------------
    
    import numpy as np
    import pdb
    
    def generate_anchors(base_size=8, scales=2**np.arange(3, 6)):
        """
        Generate anchor (reference) windows by enumerating
        scales wrt a reference (0, 7) window.
        """
        # Build the base sliding window and scale it to get the candidate anchors.
        # np.arange(start, stop) returns evenly spaced values in [start, stop) with a default
        # step of 1, so np.arange(3, 6) -> [3, 4, 5] and the default scales are 2**[3, 4, 5].
    
        base_anchor = np.array([1, base_size]) - 1  # base_size is 8, so the base window is [0, 7], i.e. 8 frames long
        #print('base_anchor = ',base_anchor)
        anchors = _scale_enum(base_anchor, scales)  # one anchor is produced per scale (the __main__ example below uses 10 scales)
        return anchors
    
    def _whctrs(anchor):
        """
        Return the length and temporal center of an anchor (window).
        """
    
        l = anchor[1] - anchor[0] + 1
        x_ctr = anchor[0] + 0.5 * (l - 1)
        return l, x_ctr 
    
    def _mkanchors(ls, x_ctr):
        """
        Given a vector of lengths (ls) around a center
        (x_ctr), output a set of anchors (windows).
        """
        # np.newaxis inserts an axis at that position, turning the (N,) length vector into an
        # (N, 1) column; the lengths have already been multiplied by the scales at this point.
        ls = ls[:, np.newaxis]
        anchors = np.hstack((x_ctr - 0.5 * (ls - 1),
                             x_ctr + 0.5 * (ls - 1)))  # stack the start/end columns into an (N, 2) array of [start, end] anchors
        return anchors
    
    def _scale_enum(anchor, scales):
        """
        Enumerate a set of anchors for each scale wrt an anchor.
        """
    
        l, x_ctr = _whctrs(anchor)  # the base anchor spans 0-7, so its length is 8 and its center is 3.5 ((0 + 7) / 2)
        ls = l * scales
        anchors = _mkanchors(ls, x_ctr)
        return anchors
    
    if __name__ == '__main__':
        import time
        t = time.time()
        a = generate_anchors(scales=np.array([2, 4, 5, 6, 8, 9, 10, 12, 14, 16]))
        print (time.time() - t)
        print (a)
        from IPython import embed; embed()
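
    With base_size=8, the geometry is easy to check by hand: the base window is [0, 7] with center 3.5, and each scale s yields a window of length 8*s around that center. For the ten scales in the __main__ block above, the expected output is roughly the following (values computed by hand, so the exact numpy formatting may differ):

```python
>>> generate_anchors(scales=np.array([2, 4, 5, 6, 8, 9, 10, 12, 14, 16]))
array([[ -4.,  11.],
       [-12.,  19.],
       [-16.,  23.],
       [-20.,  27.],
       [-28.,  35.],
       [-32.,  39.],
       [-36.,  43.],
       [-44.,  51.],
       [-52.,  59.],
       [-60.,  67.]])
```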
    

    For the classification network, the anchor labels and regression targets are generated by the anchor_target_layer:

    from __future__ import absolute_import
    # --------------------------------------------------------
    # R-C3D
    # Copyright (c) 2017 Boston University
    # Licensed under The MIT License [see LICENSE for details]
    # Written by Huijuan Xu
    # --------------------------------------------------------
    # --------------------------------------------------------
    # Reorganized and modified by Shiguang Wang
    # --------------------------------------------------------
    
    import torch
    import torch.nn as nn
    import numpy as np
    import numpy.random as npr
    
    from model.utils.config import cfg
    from .generate_anchors import generate_anchors
    from .twin_transform import clip_twins, twins_overlaps_batch, twin_transform_batch
    
    import pdb
    
    DEBUG = False
    
    try:
        long        # Python 2
    except NameError:
        long = int  # Python 3: keeps the code compatible with both Python versions
    
    
    class _AnchorTargetLayer(nn.Module):
        """
            Assign anchors to ground-truth targets. Produces anchor classification
            labels and bounding-box regression targets.
        """
        def __init__(self, feat_stride, scales):
            super(_AnchorTargetLayer, self).__init__()
    
            self._feat_stride = feat_stride  # the feature stride (base_size) is 8 here
            # Judging from the code, the anchors really do come from generate_anchors,
            # so the way the paper describes this step is not entirely accurate.
            self._anchors = torch.from_numpy(generate_anchors(base_size=feat_stride, scales=np.array(scales))).float()
            self._num_anchors = self._anchors.size(0)
            # allow boxes to sit over the edge by a small amount
            self._allowed_border = 0  # default is 0; the border margin by which anchors may overhang the clip
    
        def forward(self, input):
            # Algorithm:
            #
            # for each (H, W) location i
            #   generate A anchor segments centered on cell i (A = number of scales)
            #   apply predicted twin deltas at cell i to each of the A anchors
            # filter out-of-video anchors
            # measure GT overlap
            rpn_cls_score = input[0]  # the classification scores predicted by the RPN
            # GT boxes (batch_size, n, 3), each row of gt box contains (x1, x2, label)
            gt_twins = input[1]
            #im_info = input[2]
            #num_boxes = input[2]
    
            batch_size = gt_twins.size(0)
    
            # map of shape (..., L, H, W)
            length, height, width = rpn_cls_score.shape[-3:]
            # Enumerate all shifts
            shifts = np.arange(0, length) * self._feat_stride
            shifts = torch.from_numpy(shifts.astype(float))
            shifts = shifts.contiguous().type_as(rpn_cls_score)
            # Enumerate all shifted anchors:
            #
            # add A anchors (1, A, 2) to
            # cell K shifts (K, 1, 1) to get
            # shift anchors (K, A, 2)
            # reshape to (K*A, 2) shifted anchors
            A = self._num_anchors
            K = shifts.shape[0]
    
            self._anchors = self._anchors.type_as(rpn_cls_score) # move to specific context
            all_anchors = self._anchors.view((1, A, 2)) + shifts.view(K, 1, 1)
            all_anchors = all_anchors.view(K * A, 2)
            total_anchors = int(K * A)
    
            keep = ((all_anchors[:, 0] >= -self._allowed_border) &
                    (all_anchors[:, 1] < long(length * self._feat_stride) + self._allowed_border))
    
            inds_inside = torch.nonzero(keep).view(-1)
    
            # keep only inside anchors
            anchors = all_anchors[inds_inside, :]
    
            # label: 1 is positive, 0 is negative, -1 is dont care
            labels = gt_twins.new(batch_size, inds_inside.size(0)).fill_(-1)
            twin_inside_weights = gt_twins.new(batch_size, inds_inside.size(0)).zero_()
            twin_outside_weights = gt_twins.new(batch_size, inds_inside.size(0)).zero_()
            #print("anchors {}".format(anchors.shape)) #(876, 2)
            #print("gt_twins {}".format(gt_twins.shape)) #(1, 6, 3)
            # assume anchors(batch_size, N, 2) and gt_wins(batch_size, K, 2), respectively, overlaps will be (batch_size, N, K)
            overlaps = twins_overlaps_batch(anchors, gt_twins)
            # find max_overlaps for each dt: (batch_size, N)
            max_overlaps, argmax_overlaps = torch.max(overlaps, 2)
            # find max_overlaps for each gt: (batch_size, K)
            gt_max_overlaps, _ = torch.max(overlaps, 1)
    
            if not cfg.TRAIN.RPN_CLOBBER_POSITIVES:
                labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
    
            gt_max_overlaps[gt_max_overlaps==0] = 1e-5
            keep = torch.sum(overlaps.eq(gt_max_overlaps.view(batch_size,1,-1).expand_as(overlaps)), 2)
    
            if torch.sum(keep) > 0:
                labels[keep>0] = 1
    
            # fg label: above threshold IOU
            labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1
    
            if cfg.TRAIN.RPN_CLOBBER_POSITIVES:
                labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
    
            num_fg = int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCHSIZE)
    
            sum_fg = torch.sum((labels == 1).int(), 1)
            sum_bg = torch.sum((labels == 0).int(), 1)
    
            for i in range(batch_size):
                # subsample positive labels if we have too many
                if sum_fg[i] > num_fg:
                    fg_inds = torch.nonzero(labels[i] == 1).view(-1)
                    # torch.randperm seems has a bug on multi-gpu setting that cause the segfault.
                    # See https://github.com/pytorch/pytorch/issues/1868 for more details.
                    # use numpy instead.
                    #rand_num = torch.randperm(fg_inds.size(0)).type_as(gt_twins).long()
                    rand_num = torch.from_numpy(np.random.permutation(fg_inds.size(0))).type_as(gt_twins).long()
                    disable_inds = fg_inds[rand_num[:fg_inds.size(0)-num_fg]]
                    labels[i][disable_inds] = -1
    
    #           num_bg = cfg.TRAIN.RPN_BATCHSIZE - sum_fg[i]
                num_bg = cfg.TRAIN.RPN_BATCHSIZE - torch.sum((labels == 1).int(), 1)[i]
    
                # subsample negative labels if we have too many
                if sum_bg[i] > num_bg:
                    bg_inds = torch.nonzero(labels[i] == 0).view(-1)
                    #rand_num = torch.randperm(bg_inds.size(0)).type_as(gt_twins).long()
                    rand_num = torch.from_numpy(np.random.permutation(bg_inds.size(0))).type_as(gt_twins).long()
                    disable_inds = bg_inds[rand_num[:bg_inds.size(0)-num_bg]]
                    labels[i][disable_inds] = -1
    
            offset = torch.arange(0, batch_size)*gt_twins.size(1)
    
            argmax_overlaps = argmax_overlaps + offset.view(batch_size, 1).type_as(argmax_overlaps)
            twin_targets = _compute_targets_batch(anchors, gt_twins.view(-1,3)[argmax_overlaps.view(-1), :].view(batch_size, -1, 3))
    
            # use a single value instead of 2 values for easy index.
            twin_inside_weights[labels==1] = cfg.TRAIN.RPN_TWIN_INSIDE_WEIGHTS[0]
    
            if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:
                num_examples = torch.sum(labels[i] >= 0)  # note: 'i' here is the last index left over from the loop above
                positive_weights = 1.0 / num_examples.float()
                negative_weights = 1.0 / num_examples.float()
            else:
                assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) &
                        (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))
                positive_weights = cfg.TRAIN.RPN_POSITIVE_WEIGHT
                negative_weights = 1 - positive_weights                    
    
            twin_outside_weights[labels == 1] = positive_weights
            twin_outside_weights[labels == 0] = negative_weights
    
            labels = _unmap(labels, total_anchors, inds_inside, batch_size, fill=-1)
            twin_targets = _unmap(twin_targets, total_anchors, inds_inside, batch_size, fill=0)
            twin_inside_weights = _unmap(twin_inside_weights, total_anchors, inds_inside, batch_size, fill=0)
            twin_outside_weights = _unmap(twin_outside_weights, total_anchors, inds_inside, batch_size, fill=0)
    
            outputs = []
    
            labels = labels.view(batch_size, length, height, width, A).permute(0,4,1,2,3).contiguous()
            labels = labels.view(batch_size, 1, A * length, height, width)
            outputs.append(labels)
    
            twin_targets = twin_targets.view(batch_size, length, height, width, A*2).permute(0,4,1,2,3).contiguous()
            outputs.append(twin_targets)
    
            anchors_count = twin_inside_weights.size(1)
            twin_inside_weights = twin_inside_weights.view(batch_size,anchors_count,1).expand(batch_size, anchors_count, 2)
    
            twin_inside_weights = twin_inside_weights.contiguous().view(batch_size, length, height, width, 2*A)\
                                .permute(0,4,1,2,3).contiguous()
    
            outputs.append(twin_inside_weights)
    
            twin_outside_weights = twin_outside_weights.view(batch_size,anchors_count,1).expand(batch_size, anchors_count, 2)
            twin_outside_weights = twin_outside_weights.contiguous().view(batch_size, length, height, width, 2*A)\
                                .permute(0,4,1,2,3).contiguous()
            outputs.append(twin_outside_weights)
    
            return outputs
    
        def backward(self, top, propagate_down, bottom):
            """This layer does not propagate gradients."""
            pass
    
        def reshape(self, bottom, top):
            """Reshaping happens during the call to forward."""
            pass
    
    def _unmap(data, count, inds, batch_size, fill=0):
        """ Unmap a subset of item (data) back to the original set of items (of
        size count) """
        # for labels, twin_inside_weights and twin_outside_weights
        if data.dim() == 2:
            ret = data.new(batch_size, count).fill_(fill)
            ret[:, inds] = data
        # for twin_targets
        else:
            ret = data.new(batch_size, count, data.size(2)).fill_(fill)
            ret[:, inds,:] = data
        return ret
    def _compute_targets_batch(ex_rois, gt_rois):
        """Compute bounding-box regression targets for an video."""
        return twin_transform_batch(ex_rois, gt_rois[:, :, :2])##twin_transform
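
    To see what the labeling rule in forward() boils down to, here is a tiny self-contained sketch. The temporal_iou helper and the 0.7 / 0.3 thresholds are my own stand-ins for twins_overlaps_batch and the cfg.TRAIN.RPN_POSITIVE_OVERLAP / RPN_NEGATIVE_OVERLAP values, so treat the numbers as illustrative rather than the repo's exact configuration:

```python
import numpy as np

def temporal_iou(anchor, gt):
    """1-D IoU between two [start, end] segments (a stand-in for twins_overlaps_batch)."""
    inter = max(0.0, min(anchor[1], gt[1]) - max(anchor[0], gt[0]) + 1)
    union = (anchor[1] - anchor[0] + 1) + (gt[1] - gt[0] + 1) - inter
    return inter / union

# toy anchors (shifted copies of the generated windows) and a single GT segment
anchors = np.array([[0, 15], [8, 23], [16, 31], [24, 39]])
gt = np.array([10, 30])

POS_THRESH, NEG_THRESH = 0.7, 0.3        # assumed RPN_POSITIVE_OVERLAP / RPN_NEGATIVE_OVERLAP
labels = np.full(len(anchors), -1)       # -1 = don't care, as in the layer above
ious = np.array([temporal_iou(a, gt) for a in anchors])
labels[ious < NEG_THRESH] = 0            # background
labels[ious >= POS_THRESH] = 1           # foreground
labels[np.argmax(ious)] = 1              # the best-matching anchor is always kept as positive
print(ious.round(2), labels)             # for this toy case: labels -> [0, -1, 1, 0]
```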
    

    Overall, all of these components really are the Faster R-CNN ideas transplanted directly to the temporal domain.

    Migrating our I3D network

    The problem showed up in conv1, probably because the convolution was not configured quite right. After setting conv1 to the same configuration as the 3D ResNet-50 conv1 (and keeping the other parameters identical to 3D ResNet-50), it runs normally. I will let it keep running and look into the details tomorrow after class.
    The first epoch has already produced a result and validation is running; I will check the test numbers at the end. Since it is only the first epoch I do not expect them to be high, I just want a rough idea of the result.
    For this issue I also changed a normalization parameter, setting BN_TRACK_STATS = False (the default is True). The explanation of this flag (via Google Translate) is:

    a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
    

    Either way, the loss is now much smaller than before and looks normal, so this switch really does matter. I will see how the full run turns out tomorrow.
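
    For reference, this switch corresponds to the track_running_stats argument of PyTorch's BatchNorm layers. Below is a minimal sketch of how such a flag could be wired into a 3-D backbone; the make_bn3d helper is my own illustration rather than the repo's exact code, and only the BN_TRACK_STATS name comes from the config discussed above.

```python
import torch
import torch.nn as nn

BN_TRACK_STATS = False  # the setting discussed above; PyTorch's default is True

def make_bn3d(num_features):
    # with track_running_stats=False the layer always normalizes with the statistics
    # of the current batch, in both train() and eval() mode
    return nn.BatchNorm3d(num_features, track_running_stats=BN_TRACK_STATS)

bn = make_bn3d(64)
x = torch.randn(2, 64, 8, 28, 28)        # (N, C, T, H, W) clip features
print(bn(x).shape, bn.running_mean)      # running_mean is None when stats are not tracked
```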
