Masters are masters; you have to keep up with them.
After nnU-Net, Michael Baumgartner released nnDetection. A closer look shows that the group had already published medicaldetectiontoolkit three years earlier, which integrates Mask R-CNN, Faster R-CNN+, and more.
To keep up with the latest code, https://paperswithcode.com/ is a good place to watch.
Paper: https://arxiv.org/pdf/2106.00817v1.pdf
Code: https://github.com/MIC-DKFZ/nnDetection
Download the source code and unpack it.
Make a few small changes to the Dockerfile:
#Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
# Contains pytorch, torchvision, cuda, cudnn
FROM nvcr.io/nvidia/pytorch:20.12-py3
ARG env_det_num_threads=6
ARG env_det_verbose=1
# Setup environment variables
ENV det_data=/opt/data det_models=/opt/models det_num_threads=$env_det_num_threads det_verbose=$env_det_verbose OMP_NUM_THREADS=1
# Install some tools
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && apt-get install -y \
git \
cmake \
make \
wget \
gnupg \
build-essential \
software-properties-common \
gdb \
ninja-build
RUN pip install pip -U && pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip install numpy
# Install own code
COPY ./requirements.txt .
RUN mkdir ${det_data} \
&& mkdir ${det_models} \
&& mkdir -p /opt/code/nndet \
&& pip install -r requirements.txt \
&& pip install hydra-core --upgrade --pre \
&& pip install pytorch-model-summary
# && pip install git+https://github.com/mibaumgartner/pytorch_model_summary.git
WORKDIR /opt/code/nndet
COPY . .
RUN FORCE_CUDA=1 pip install -v -e .
Build the image:
docker build -t nndetection:0.1 --build-arg env_det_num_threads=6 --build-arg env_det_verbose=1 .
Then start a container, mounting your own data and model directories into /opt/data and /opt/models (the host paths below are examples):
docker run --gpus all -v /home/yakeworld/work/nnDetection/data:/opt/data -v /home/yakeworld/work/nnDetection/models:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash
To keep a named container running in the background, start it detached instead:
docker run -d --name nndetection --gpus all -v /home/yakeworld/work/data/:/opt/data -v /home/yakeworld/work/models:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash
nndet_example generates a toy dataset; inspect the directory and file structure it produces.
Follow the same pattern to create the file structure for your own data.
data.json
{
"task": "Task04_Hippocampus",
"name": "Hippocampus",
"target_class": null,
"test_labels": true,
"labels": {
"0": "background",
"1": "Anterior",
"2": "Posterior"
},
"modalities": {
"0": "MRI"
},
"dim": 3
}
tag.json
{
"instances": {
"1": 1,
"2": 2
}
}
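The data.json / tag.json pair above can also be written programmatically when setting up a new task. A minimal Python sketch (the helper name and the reuse of the Hippocampus values are illustrative, not part of nnDetection itself):

```python
import json
from pathlib import Path

def write_task_metadata(task_dir, task, name, labels, modalities, instances, dim=3):
    """Write the data.json / tag.json pair expected in a task directory."""
    task_dir = Path(task_dir)
    task_dir.mkdir(parents=True, exist_ok=True)
    data = {
        "task": task,
        "name": name,
        "target_class": None,
        "test_labels": True,
        "labels": labels,          # semantic class id -> class name
        "modalities": modalities,  # channel id -> modality name
        "dim": dim,
    }
    (task_dir / "data.json").write_text(json.dumps(data, indent=2))
    # tag.json maps each instance id to its semantic class id
    (task_dir / "tag.json").write_text(json.dumps({"instances": instances}, indent=2))

write_task_metadata(
    "Task04_Hippocampus",
    task="Task04_Hippocampus",
    name="Hippocampus",
    labels={"0": "background", "1": "Anterior", "2": "Posterior"},
    modalities={"0": "MRI"},
    instances={"1": 1, "2": 2},
)
```

Swap in your own task name, labels, and instance mapping when creating the structure for your own data.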
nndet_prep 04
nndet_unpack preprocessed/D3V001_3d/imagesTr 6
nndet_train 04
nndet_eval 091 RetinaUNetV001_D3V001_3d 0 --boxes --analyze_boxes
nndet_consolidate 091 RetinaUNetV001_D3V001_3d --sweep_boxes
nndet_predict 04 RetinaUNetV001_D3V001_3d --fold -1
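The six commands above (shown against a mix of task 04 and the author's own task 091) form one pipeline: preprocess, unpack, train, evaluate, consolidate, predict. A small sketch that assembles the sequence for a single task id without executing anything (the function name and defaults are assumptions for illustration):

```python
def pipeline_commands(task_id, model="RetinaUNetV001_D3V001_3d", num_processes=6):
    """Assemble the nnDetection command sequence for one task (as strings, not executed)."""
    return [
        f"nndet_prep {task_id}",
        f"nndet_unpack preprocessed/D3V001_3d/imagesTr {num_processes}",
        f"nndet_train {task_id}",
        f"nndet_eval {task_id} {model} 0 --boxes --analyze_boxes",
        f"nndet_consolidate {task_id} {model} --sweep_boxes",
        f"nndet_predict {task_id} {model} --fold -1",
    ]

# Print the sequence for the example task so it can be reviewed before running
for cmd in pipeline_commands("04"):
    print(cmd)
```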
Semicircular canal dataset
Copy the directory structure generated by nnU-Net.
1. Preprocess the data
nndet_prep 091
2. Unpack the data
cd /opt/data/Task091_innerear
nndet_unpack preprocessed/D3V001_3d/imagesTr 6
3. Train the model
nndet_train 091
https://github.com/GJiananChen/MICCAI2021-OpenReviewAnalysis#opensource
Reading papers carefully and analyzing their ideas is a very effective way to learn, ideally paired with some ability to reproduce the code.
nnDetection builds on Retina U-Net, a fairly innovative model that deserves careful study.