Playing with GPUs in a Docker Environment (Part 2)


Author: 9c46ece5b7bd | Published 2017-10-20 17:57 | Read 149 times

    This article covers running GPU compute tasks with Docker. NVIDIA provides an official plugin, NVIDIA-Docker, which wraps the relevant docker parameters to bind GPU device information into a container, giving it a working GPU environment. The main part below shows how to use the nvidia-docker plugin to run containers that share the host's GPU resources; the final section briefly shows how to run a GPU container with native docker alone.

    1. Quick installation

    # Install nvidia-docker and nvidia-docker-plugin
    $ wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
    $ rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
    $ systemctl start nvidia-docker
    
    # Verify that a container can see the host's GPU devices
    # nvidia-docker run --rm idockerhub.jd.com/nvidia-docker/cuda8.0-runtime:centos6-17-10-19 nvidia-smi
    Thu Oct 19 08:07:09 2017       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla M40 24GB      On   | 0000:04:00.0     Off |                    0 |
    | N/A   28C    P8    18W / 250W |      0MiB / 22939MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla M40           On   | 0000:05:00.0     Off |                    0 |
    | N/A   31C    P8    17W / 250W |      0MiB / 11443MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla M40 24GB      On   | 0000:06:00.0     Off |                    0 |
    | N/A   26C    P8    18W / 250W |      0MiB / 22939MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla M40           On   | 0000:07:00.0     Off |                    0 |
    | N/A   27C    P8    16W / 250W |      0MiB / 11443MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    
    

    Note: as the table above shows, a container created with the nvidia-docker tool can indeed see the host's GPU devices.
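To expose only a subset of the host's GPUs to a container, nvidia-docker 1.x reads the NV_GPU environment variable. A minimal sketch, reusing the image name from the example above (run it on a GPU host for the restriction to take effect):

```shell
# NV_GPU limits which host GPUs nvidia-docker maps into the container
# (an nvidia-docker 1.x feature; indices match the nvidia-smi output above)
export NV_GPU=0,1

# On the GPU host, the following would then show only GPU 0 and GPU 1:
# nvidia-docker run --rm idockerhub.jd.com/nvidia-docker/cuda8.0-runtime:centos6-17-10-19 nvidia-smi
echo "$NV_GPU"
```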

    2. Verifying the GPU environment inside the container

    Test with the official GPU build of the TensorFlow framework:

    # nvidia-docker run -it --rm -v /usr/lib64/libcuda.so.1:/usr/local/nvidia/lib64/libcuda.so.1 idockerhub.xxb.com/jdjr/tensorflow-gpu:17-10-17 bash
    
    root@6b4ad215279e:/notebooks# python
    Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
    [GCC 5.4.0 20160609] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    >>> a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    >>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    >>> c = tf.matmul(a, b)
    >>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
    2017-10-19 08:01:39.862500: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    2017-10-19 08:01:39.862600: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-10-19 08:01:39.862646: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2017-10-19 08:01:39.862676: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-10-19 08:01:39.862711: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    2017-10-19 08:01:40.388656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
    name: Tesla M40 24GB
    major: 5 minor: 2 memoryClockRate (GHz) 1.112
    pciBusID 0000:04:00.0
    Total memory: 22.40GiB
    Free memory: 22.29GiB
    2017-10-19 08:01:40.682810: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x1e2dac0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
    2017-10-19 08:01:40.684222: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 1 with properties: 
    name: Tesla M40
    major: 5 minor: 2 memoryClockRate (GHz) 1.112
    pciBusID 0000:05:00.0
    Total memory: 11.17GiB
    Free memory: 11.07GiB
    2017-10-19 08:01:40.995170: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x329a280 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
    2017-10-19 08:01:40.998560: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 2 with properties: 
    name: Tesla M40 24GB
    major: 5 minor: 2 memoryClockRate (GHz) 1.112
    pciBusID 0000:06:00.0
    Total memory: 22.40GiB
    Free memory: 22.29GiB
    2017-10-19 08:01:41.289133: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x329dc00 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
    2017-10-19 08:01:41.290444: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 3 with properties: 
    name: Tesla M40
    major: 5 minor: 2 memoryClockRate (GHz) 1.112
    pciBusID 0000:07:00.0
    Total memory: 11.17GiB
    Free memory: 11.07GiB
    2017-10-19 08:01:41.294062: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1 2 3 
    2017-10-19 08:01:41.294083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y Y Y Y 
    2017-10-19 08:01:41.294093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1:   Y Y Y Y 
    2017-10-19 08:01:41.294156: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 2:   Y Y Y Y 
    2017-10-19 08:01:41.294178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 3:   Y Y Y Y 
    2017-10-19 08:01:41.294215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla M40 24GB, pci bus id: 0000:04:00.0)
    2017-10-19 08:01:41.294229: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Tesla M40, pci bus id: 0000:05:00.0)
    2017-10-19 08:01:41.294239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:2) -> (device: 2, name: Tesla M40 24GB, pci bus id: 0000:06:00.0)
    2017-10-19 08:01:41.294248: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:3) -> (device: 3, name: Tesla M40, pci bus id: 0000:07:00.0)
    Device mapping:
    /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla M40 24GB, pci bus id: 0000:04:00.0
    /job:localhost/replica:0/task:0/gpu:1 -> device: 1, name: Tesla M40, pci bus id: 0000:05:00.0
    /job:localhost/replica:0/task:0/gpu:2 -> device: 2, name: Tesla M40 24GB, pci bus id: 0000:06:00.0
    /job:localhost/replica:0/task:0/gpu:3 -> device: 3, name: Tesla M40, pci bus id: 0000:07:00.0
    2017-10-19 08:01:41.875931: I tensorflow/core/common_runtime/direct_session.cc:300] Device mapping:
    /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla M40 24GB, pci bus id: 0000:04:00.0
    /job:localhost/replica:0/task:0/gpu:1 -> device: 1, name: Tesla M40, pci bus id: 0000:05:00.0
    /job:localhost/replica:0/task:0/gpu:2 -> device: 2, name: Tesla M40 24GB, pci bus id: 0000:06:00.0
    /job:localhost/replica:0/task:0/gpu:3 -> device: 3, name: Tesla M40, pci bus id: 0000:07:00.0
    
    >>> print(sess.run(c))
    MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
    2017-10-19 08:01:51.333248: I tensorflow/core/common_runtime/simple_placer.cc:872] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
    b: (Const): /job:localhost/replica:0/task:0/gpu:0
    2017-10-19 08:01:51.333346: I tensorflow/core/common_runtime/simple_placer.cc:872] b: (Const)/job:localhost/replica:0/task:0/gpu:0
    a: (Const): /job:localhost/replica:0/task:0/gpu:0
    2017-10-19 08:01:51.333408: I tensorflow/core/common_runtime/simple_placer.cc:872] a: (Const)/job:localhost/replica:0/task:0/gpu:0
    [[ 22.  28.]
     [ 49.  64.]]
    >>> 
    
    

    Note: the full output above shows that the container correctly detects all four GPU devices and can run computations on them.
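The `[[ 22. 28.] [ 49. 64.]]` result printed by the session can be verified without TensorFlow. A plain-Python check of the same 2×3 by 3×2 matrix product:

```python
# Same matrices as in the TensorFlow session above, multiplied by hand
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]          # shape [2, 3]
b = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]               # shape [3, 2]

# c[i][j] = sum over k of a[i][k] * b[k][j]
c = [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(c)  # [[22.0, 28.0], [49.0, 64.0]]
```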

    3. Running a GPU container with native docker

    # Option 1: expose the GPU device nodes explicitly ($DEVICES holds the --device flags for the /dev/nvidia* nodes)
    # docker run  $DEVICES -it --rm -v /usr/lib64/libcuda.so.1:/usr/local/nvidia/lib64/libcuda.so.1 -v /usr/lib64/libnvidia-fatbinaryloader.so.375.39:/usr/local/nvidia/lib64/libnvidia-fatbinaryloader.so.375.39  -v /root/gpu-example/:/tmp idockerhub.xxb.com/jdjr/tensorflow-gpu:17-10-17 bash
    
    
    
    # Option 2: run the container in privileged mode, which grants it access to all host devices
    # docker run  --privileged -it --rm -v /usr/lib64/libcuda.so.1:/usr/local/nvidia/lib64/libcuda.so.1 -v /usr/lib64/libnvidia-fatbinaryloader.so.375.39:/usr/local/nvidia/lib64/libnvidia-fatbinaryloader.so.375.39  -v /root/gpu-example/:/tmp idockerhub.xxb.com/jdjr/tensorflow-gpu:17-10-17 bash
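The $DEVICES variable in the first command is not defined in the listing. One plausible way to build it is to turn every NVIDIA device node on the host into a `--device` flag; the helper function below is ours, not part of docker, and the device paths assume a standard driver install:

```shell
# build_device_flags: turn a list of device paths into repeated --device flags
# (hypothetical helper; not part of docker or the original article)
build_device_flags() {
    local out=""
    for dev in "$@"; do
        out="$out --device=$dev"
    done
    echo "${out# }"    # drop the leading space
}

# Typical NVIDIA device nodes created by the driver:
DEVICES=$(build_device_flags /dev/nvidia0 /dev/nvidia1 /dev/nvidiactl /dev/nvidia-uvm)
echo "$DEVICES"
```

On the GPU host itself, `DEVICES=$(build_device_flags /dev/nvidia*)` would pick up all visible nodes before running the `docker run $DEVICES ...` command above.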
    


        Original link: https://www.haomeiwen.com/subject/nuwvuxtx.html