
The object_detection framework

Author: 富有的心 | Published 2018-07-17 15:01

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

Tensorflow detection model zoo

We provide a collection of detection models pre-trained on the COCO dataset, the Kitti dataset, the Open Images dataset and the AVA v2.1 dataset. These models can be useful for out-of-the-box inference if you are interested in categories already in COCO (e.g., humans, cars, etc) or in Open Images (e.g., surfboard, jacuzzi, etc). They are also useful for initializing your models when training on novel datasets.

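These checkpoints are normally reused for fine-tuning by pointing the fine_tune_checkpoint field of a pipeline config at the extracted model.ckpt. The fragment below is a minimal sketch, not taken from the zoo; the folder name, paths, batch size and step count are placeholder assumptions.

    # Hypothetical train_config fragment of a pipeline.config file;
    # the extracted folder name, batch size and step count are placeholders.
    train_config {
      batch_size: 24
      fine_tune_checkpoint: "/path/to/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
      from_detection_checkpoint: true
      num_steps: 200000
    }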

In the table below, we list each such pre-trained model including:
1、a model name that corresponds to a config file that was used to train this model in the samples/configs directory

2、a download link to a tar.gz file containing the pre-trained model,

3、a frozen graph proto with weights baked into the graph as constants (frozen_inference_graph.pb) to be used for out of the box inference (try this out in the Jupyter notebook, or see the loading sketch after this list!)

4、a config file (pipeline.config) which was used to generate the graph. These directly correspond to a config file in the samples/configs directory but often with a modified score threshold. In the case of the heavier Faster R-CNN models, we also provide a version of the model that uses a highly reduced number of proposals for speed.
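For the frozen graph in item 3, the snippet below is a minimal Python sketch of loading frozen_inference_graph.pb and running one inference. It assumes TensorFlow 1.x, a placeholder folder name, and the exporter's default tensor names (image_tensor, detection_boxes, detection_scores, detection_classes, num_detections).

    import numpy as np
    import tensorflow as tf  # TF 1.x assumed

    # Placeholder path to an extracted model archive from the zoo.
    PATH_TO_FROZEN_GRAPH = "ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb"

    # Load the frozen graph; the weights are baked in as constants.
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    with tf.Session(graph=detection_graph) as sess:
        image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
        output_tensors = [detection_graph.get_tensor_by_name(name + ":0")
                          for name in ("detection_boxes", "detection_scores",
                                       "detection_classes", "num_detections")]
        # A dummy uint8 image; replace with a real HxWx3 array.
        image_np = np.zeros((300, 300, 3), dtype=np.uint8)
        # The graph expects a batch dimension: [1, height, width, 3].
        boxes, scores, classes, num = sess.run(
            output_tensors, feed_dict={image_tensor: image_np[None, ...]})

Boxes come back as normalized [ymin, xmin, ymax, xmax] coordinates and detections are sorted by score, which is also why the score-threshold remark below matters when evaluating the frozen graph.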

Some remarks on frozen inference graphs:
1、If you try to evaluate the frozen graph, you may find performance numbers for some of the models to be slightly lower than what we report in the below tables. This is because we discard detections with scores below a threshold (typically 0.3) when creating the frozen graph. This corresponds effectively to picking a point on the precision recall curve of a detector (and discarding the part past that point), which negatively impacts standard mAP metrics.
2、Our frozen inference graphs are generated using the v1.8.0 release version of Tensorflow and we do not guarantee that these will work with other versions; this being said, each frozen inference graph can be regenerated using your current version of Tensorflow by re-running the exporter, pointing it at the model directory as well as the corresponding config file in samples/configs (an example exporter command follows these remarks).
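As a rough illustration of the second remark, re-exporting with the repository's exporter looks like the following, run from the models/research directory; every path below is a placeholder, and the config should be the one that matches your model.

    # Hedged example: regenerate the frozen inference graph with your
    # installed TensorFlow version. All paths below are placeholders.
    python object_detection/export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path object_detection/samples/configs/ssd_mobilenet_v1_coco.config \
        --trained_checkpoint_prefix /path/to/unpacked_model/model.ckpt \
        --output_directory /path/to/regenerated_model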

(The model zoo tables were attached to the original post as image screenshots; see the linked detection_model_zoo.md for the full tables.)
