MXNet Distributed (Part 1)

Author: 迷途的Go | Published 2018-01-12 23:04


    Possible causes of blocking

    When launching the distributed job, the program often hangs right at the start even though everything appears to follow the official procedure. From the outside, the launcher machine starts the launch.py process and then the shell simply freezes, which is the most frustrating case. When this happens, first make sure that:

    1. The environment is identical on every machine, including the code path and the Python environment
    2. The processes to be started do not already exist; if they do, kill them first
    3. The firewall is turned off
    4. Passwordless ssh works between the machines
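A small helper can generate the shell commands for these four checks. This is a hypothetical sketch, not part of the official tooling: the host name, the /path/to placeholder, and the use of pkill/firewalld are all assumptions.

```python
# Hypothetical sketch: produce one shell command per pre-launch check.
def preflight_commands(host, script="train_mnist.py"):
    """Return shell commands to verify a host before launching."""
    return [
        # 4. passwordless ssh must already work (BatchMode fails fast if not)
        f"ssh -o BatchMode=yes {host} true",
        # 1. same code path / python environment on every machine
        f"ssh {host} 'which python && ls /path/to/{script}'",
        # 2. kill any stale processes left over from a previous run
        f"ssh {host} 'pkill -f {script} || true'",
        # 3. firewall should be down (or the DMLC ports opened)
        f"ssh {host} 'systemctl is-active firewalld || true'",
    ]

for cmd in preflight_commands("x.x.x.x"):
    print(cmd)
```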

    Launch methods

    1. Launch via the official launch.py

      See: https://github.com/apache/incubator-mxnet/tree/master/example/image-classification

    2. To see what actually happens underneath, here is an alternative way to launch everything by hand

    Start the scheduler first; the scheduler process blocks and waits. Then start the two servers, each given the scheduler's (PS root) IP address. Finally start the two workers; at that point the whole distributed program starts running and the workers' shells come alive.

    For MNIST:

    sch---export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=scheduler; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2;
    cd /path/to;
    python train_mnist.py --kv-store dist_sync

    ps1---export DMLC_SERVER_ID=0; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2;
    cd /path/to;
    python train_mnist.py --kv-store dist_sync

    ps2---export DMLC_SERVER_ID=1; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2;
    cd /path/to;
    python train_mnist.py --kv-store dist_sync

    wk1---export DMLC_WORKER_ID=0; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2;
    cd /path/to;
    python train_mnist.py --kv-store dist_sync

    wk2---export DMLC_WORKER_ID=1; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2;
    cd /path/to;
    python train_mnist.py --kv-store dist_sync
    
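The five launches above can also be written as one Python script. The sketch below builds the DMLC environment for each role and spawns the same training command; the IP, port, path, and script name are the placeholders from the commands above, and this is an illustrative sketch rather than the official launcher.

```python
# Sketch: reproduce the manual five-process launch from one script.
import os
import subprocess

SCHED_IP, SCHED_PORT = "x.x.x.x", "9001"  # placeholders from the text

def role_env(role, rank=None, n_workers=2, n_servers=2):
    """Environment that turns one train_mnist.py process into the given role."""
    env = {
        "DMLC_PS_ROOT_URI": SCHED_IP,
        "DMLC_PS_ROOT_PORT": SCHED_PORT,
        "DMLC_ROLE": role,
        "DMLC_NUM_WORKER": str(n_workers),
        "DMLC_NUM_SERVER": str(n_servers),
    }
    # only servers/workers carry an explicit id; ids run 0..N-1
    if role == "server" and rank is not None:
        env["DMLC_SERVER_ID"] = str(rank)
    if role == "worker" and rank is not None:
        env["DMLC_WORKER_ID"] = str(rank)
    return env

def launch(role, rank=None):
    """Spawn one training process with the role's environment merged in."""
    cmd = ["python", "train_mnist.py", "--kv-store", "dist_sync"]
    return subprocess.Popen(cmd, cwd="/path/to",
                            env={**os.environ, **role_env(role, rank)})

# Launch order as described above (uncomment on a real setup):
# launch("scheduler")
# launch("server", 0); launch("server", 1)
# launch("worker", 0); launch("worker", 1)
```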

    Launch process analysis

    The experiment is run on two machines, x.x.x.x/x.

    Launch script:

    python ../../tools/launch.py -n 2 --launcher ssh -H hosts `which python` train_mnist.py --kv-store=dist_sync
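The hosts file passed with -H lists one machine per line, optionally with an ssh port ("ip" or "ip:port"). A minimal parsing sketch, for illustration only (the real dmlc_tracker parsing is more thorough):

```python
# Sketch: parse a dmlc-style hosts file into (ip, port) pairs.
def parse_hosts(text, default_port=22):
    """Return (ip, port) pairs from the hosts file contents."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        ip, _, port = line.partition(":")
        pairs.append((ip, int(port) if port else default_port))
    return pairs
```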
    

    Analysis of the processes started on the two machines after launch

    • Launcher machine

    The command /home/xxx/anaconda2/envs/ps_lite/bin/python train_mnist.py --kv-store=dist_sync is executed three times. The first is the parameter server's scheduler process, started by the tracker code pserver = PSTracker(hostIP=hostIP, cmd=pscmd, envs=envs); the scheduler is launched from PSTracker's constructor. The other two are the server and worker processes that the launcher machine starts over ssh. All of these processes are started on asynchronous threads.

    ssh -o StrictHostKeyChecking=no x.x.x.x -p 22 export LD_LIBRARY_PATH=.::/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/xxx/xxx-workspace/cuda-8.0-cudnn-6.0/lib64; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9091; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; cd /path/to/example/image-classification/; `which python` train_mnist.py --kv-store=dist_sync
    

    This ssh command is issued four times, once for each of the two workers and two servers. The IPs are read from the hosts file; both PS processes live on the machine x.x.x.x.

    bash -c export LD_LIBRARY_PATH=.::/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/xxx/xxx-workspace/cuda-8.0-cudnn-6.0/lib64; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9092; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; cd /path/to/example/image-classification/; `which python` train_mnist.py --kv-store=dist_sync
    

    This process appears twice: the machine receives two requests from the launcher and starts one server process and one worker process.
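As a rough sketch, the tracker side composes each of these ssh launch lines roughly as follows. The names and structure here are illustrative, not the exact dmlc_tracker code:

```python
# Sketch: build one ssh launch line for a remote server/worker process.
def ssh_command(host, role, sched_ip, sched_port, n_workers=2, n_servers=2,
                workdir="/path/to/example/image-classification/"):
    """Compose the ssh line that starts one remote DMLC process."""
    envs = {
        "DMLC_ROLE": role,
        "DMLC_PS_ROOT_URI": sched_ip,
        "DMLC_PS_ROOT_PORT": sched_port,
        "DMLC_NUM_SERVER": str(n_servers),
        "DMLC_NUM_WORKER": str(n_workers),
    }
    # exports are joined into one command string, as seen in the logs above
    exports = "; ".join(f"export {k}={v}" for k, v in envs.items())
    train = "`which python` train_mnist.py --kv-store=dist_sync"
    return (f"ssh -o StrictHostKeyChecking=no {host} -p 22 "
            f"'{exports}; cd {workdir}; {train}'")
```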

    • Worker node

    The command /home/xxx/anaconda2/envs/ps_lite/bin/python train_mnist.py --kv-store=dist_sync is executed twice here

    bash -c export LD_LIBRARY_PATH=.::/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/xxx/xxx-workspace/cuda-8.0-cudnn-6.0/lib64; export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9092; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; cd /path/to/example/image-classification/; `which python` train_mnist.py --kv-store=dist_sync
    

    This process also starts twice: the node receives two requests from the launcher and starts one server process and one worker process.

    Parameters after launch

    Namespace(archives=[], auto_file_cache=True, cluster='ssh', command=['`which', 'python`', 'train_mnist.py', '--kv-store=dist_sync'], env=[], files=[], hdfs_tempdir='/tmp', host_file='hosts', host_ip=None, jobname=None, kube_namespace='default', kube_server_image='mxnet/python', kube_server_template=None, kube_worker_image='mxnet/python', kube_worker_template=None, log_file=None, log_level='INFO', mesos_master=None, num_servers=2, num_workers=2, queue='default', server_cores=1, server_memory='1g', server_memory_mb=1024, sge_log_dir=None, ship_libcxx=None, slurm_server_nodes=None, slurm_worker_nodes=None, sync_dst_dir='None', worker_cores=1, worker_memory='1g', worker_memory_mb=1024, yarn_app_classpath=None, yarn_app_dir='/path/to/tools/../dmlc-core/tracker/dmlc_tracker/../yarn')
    

    Of the parameters above, only num_workers, num_servers, cluster, host_file, sync_dst_dir, and command come from outside; the rest are loaded by

    try:
        from dmlc_tracker import opts
    except ImportError:
        print("Can't load dmlc_tracker package.  Perhaps you need to run")
        print("    git submodule update --init --recursive")
        raise
    dmlc_opts = opts.get_opts(args)
    

    the last line of which loads them; opts.py defines parsers for the many remaining options.

    Call chain: ssh.py:submit(args) -> tracker.py:submit() -> fun_submit -> ssh.py:ssh_submit()

    The hosts object wraps the IP addresses and corresponding ports read from the hosts file.

    In the ssh_submit() method in ssh.py, a for loop takes the IPs from the hosts file one by one and starts the servers and workers.
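The loop described above can be sketched as an illustrative round-robin assignment of server and worker roles to hosts; this is not the verbatim ssh.py code:

```python
# Sketch: assign server then worker roles across the host list round-robin.
def assign_roles(hosts, n_servers, n_workers):
    """Return (role, host) pairs in launch order: servers first, then workers."""
    jobs = []
    for i in range(n_servers):
        jobs.append(("server", hosts[i % len(hosts)]))
    for i in range(n_workers):
        jobs.append(("worker", hosts[i % len(hosts)]))
    return jobs
```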

    Code analysis

    I have only tried the ssh launch mode and, so far, have only read the Python-layer code, watching each process get started. There are essentially four main pieces: the launcher calls ssh.py, and ssh.py calls tracker.py, which starts the scheduler, server, and worker processes in turn.




          Original link: https://www.haomeiwen.com/subject/ssyqoxtx.html