CUDA VISIBLE DEVICE

Author: 阿o醒 | Published 2017-07-06 12:31, read 3,630 times

TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it.
If you want to run different sessions on different GPUs, you should do the following.

  1. Run each session in a different Python process.
  2. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable.
    For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:
$ CUDA_VISIBLE_DEVICES=0 python my_script.py # Uses GPU 0
$ CUDA_VISIBLE_DEVICES=1 python my_script.py # Uses GPU 1
$ CUDA_VISIBLE_DEVICES=2,3 python my_script.py # Uses GPUs 2 and 3.
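Equivalently, you can launch the per-GPU processes from Python instead of the shell. The sketch below is not from the original article; it assumes the same my_script.py and four GPUs, and simply gives each child process its own CUDA_VISIBLE_DEVICES value in its environment.

import os
import subprocess

SCRIPT = "my_script.py"       # script name taken from the example above
GPU_SETS = ["0", "1", "2,3"]  # one entry per process; "2,3" exposes two GPUs

processes = []
for gpus in GPU_SETS:
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = gpus  # restrict this child to the listed GPUs
    processes.append(subprocess.Popen(["python", SCRIPT], env=env))

for p in processes:  # wait for all children to finish
    p.wait()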

Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. "/gpu:0", etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.
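For example, in a process started with CUDA_VISIBLE_DEVICES=2,3, the two visible GPUs are addressed as "/gpu:0" and "/gpu:1". A minimal sketch, assuming the TensorFlow 1.x session API that was current when this article was written:

import tensorflow as tf

with tf.device("/gpu:0"):  # physical GPU 2 under CUDA_VISIBLE_DEVICES=2,3
    a = tf.constant([1.0, 2.0])
with tf.device("/gpu:1"):  # physical GPU 3 under CUDA_VISIBLE_DEVICES=2,3
    b = tf.constant([3.0, 4.0])

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(a + b))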

You can set environment variables in the notebook using os.environ.
Do the following before initializing TensorFlow to limit TensorFlow to the first GPU.

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # make device IDs follow PCI bus order, matching nvidia-smi; see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "0"        # expose only the first GPU to TensorFlow
