Allreduce Operations in mpi4py

Author: 自可乐 | Published on 2018-04-02 15:23

    In the previous article we introduced the allgather operations in mpi4py. In this article we will introduce the allreduce operations.

    For an allreduce operation on an intracommunicator, every process in the group acts as a root and performs the reduction; when the operation completes, the receive buffers of all processes hold identical data. The operation is equivalent to first performing a reduction with some process as the root and then broadcasting the result from that root, after which every process holds the same result.
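
    This equivalence is easy to check directly. The following minimal sketch (our own illustration, not part of the original example) computes the same sum twice, once with reduce followed by bcast and once with a single allreduce:

    # reduce_bcast.py

    """
    Illustrates that allreduce is equivalent to reduce + bcast.

    Run this with any number of processes like:
    $ mpiexec -n 4 python reduce_bcast.py
    """

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # reduce to root 0: only root 0 holds the sum afterwards
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    # broadcast the result from root 0 to all processes
    total = comm.bcast(total, root=0)

    # a single allreduce gives every process the same result directly
    total2 = comm.allreduce(rank, op=MPI.SUM)

    assert total == total2
    print('rank %d: reduce + bcast -> %s, allreduce -> %s' % (rank, total, total2))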

    For an allreduce operation on an intercommunicator, both of the two associated groups, group A and group B, must make the call. The operation stores the reduction result of the data contributed by the processes of group A into the processes of group B, and vice versa.
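
    To make this concrete, here is a minimal sketch of an allreduce on an intercommunicator. The setup via Split and Create_intercomm is our own assumed scaffolding, not from the original example; run it with an even number of processes:

    # inter_allreduce.py

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # assumes an even number of processes
    # group A: even ranks, group B: odd ranks
    color = rank % 2
    local_comm = comm.Split(color, rank)

    # each group's leader is its local rank 0; the remote leader is
    # world rank 0 (leader of group A) or world rank 1 (leader of group B)
    inter_comm = local_comm.Create_intercomm(0, comm, 1 - color, 0)

    send_buf = np.array([rank], dtype='i')
    recv_buf = np.empty(1, dtype='i')
    # each group receives the sum contributed by the *other* group
    inter_comm.Allreduce(send_buf, recv_buf, op=MPI.SUM)
    print('rank %d (group %s) has %s' % (rank, 'A' if color == 0 else 'B', recv_buf))

    With 4 processes, the even ranks (group A) receive the sum 1 + 3 = 4 contributed by group B, while the odd ranks (group B) receive 0 + 2 = 2 contributed by group A.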

    Method interface

    The method interfaces for the allreduce operations in mpi4py (methods of the MPI.Comm class) are:

    allreduce(self, sendobj, op=SUM)
    Allreduce(self, sendbuf, recvbuf, Op op=SUM)
    

    The parameters of these methods are similar to those of the corresponding reduce methods, except that the allreduce operations have no root parameter.

    For the Allreduce method of an intracommunicator object, the sendbuf argument can be set to MPI.IN_PLACE. In that case each process takes the data to be reduced from its own receive buffer, and after the reduction the result replaces the original contents of the receive buffer.

    Example

    The following example demonstrates the usage of the allreduce operations.

    # allreduce.py
    
    """
    Demonstrates the usage of allreduce, Allreduce.
    
    Run this with 4 processes like:
    $ mpiexec -n 4 python allreduce.py
    """
    
    import numpy as np
    from mpi4py import MPI
    
    
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    
    # ------------------------------------------------------------------------------
    # reduce generic object from each process by using allreduce
    if rank == 0:
        send_obj = 0.5
    elif rank == 1:
        send_obj = 2.5
    elif rank == 2:
        send_obj = 3.5
    else:
        send_obj = 1.5
    
    # reduce by SUM: 0.5 + 2.5 + 3.5 + 1.5 = 8.0
    recv_obj = comm.allreduce(send_obj, op=MPI.SUM)
    print('allreduce by SUM: rank %d has %s' % (rank, recv_obj))
    # reduce by MAX: max(0.5, 2.5, 3.5, 1.5) = 3.5
    recv_obj = comm.allreduce(send_obj, op=MPI.MAX)
    print('allreduce by MAX: rank %d has %s' % (rank, recv_obj))
    
    
    # ------------------------------------------------------------------------------
    # reduce numpy arrays from each process by using Allreduce
    send_buf = np.array([0, 1], dtype='i') + 2 * rank
    recv_buf = np.empty(2, dtype='i')
    
    # Reduce by SUM: [0, 1] + [2, 3] + [4, 5] + [6, 7] = [12, 16]
    comm.Allreduce(send_buf, recv_buf, op=MPI.SUM)
    print('Allreduce by SUM: rank %d has %s' % (rank, recv_buf))
    
    
    # ------------------------------------------------------------------------------
    # reduce numpy arrays from each process by using Allreduce with MPI.IN_PLACE
    recv_buf = np.array([0, 1], dtype='i') + 2 * rank
    
    # Reduce by SUM with MPI.IN_PLACE: [0, 1] + [2, 3] + [4, 5] + [6, 7] = [12, 16]
    # recv_buf used as both send buffer and receive buffer
    comm.Allreduce(MPI.IN_PLACE, recv_buf, op=MPI.SUM)
    print('Allreduce by SUM with MPI.IN_PLACE: rank %d has %s' % (rank, recv_buf))
    

    The result of running it is as follows:

    $ mpiexec -n 4 python allreduce.py
    allreduce by SUM: rank 2 has 8.0
    allreduce by SUM: rank 0 has 8.0
    allreduce by SUM: rank 1 has 8.0
    allreduce by SUM: rank 3 has 8.0
    allreduce by MAX: rank 3 has 3.5
    allreduce by MAX: rank 2 has 3.5
    allreduce by MAX: rank 0 has 3.5
    Allreduce by SUM: rank 0 has [12 16]
    allreduce by MAX: rank 1 has 3.5
    Allreduce by SUM: rank 1 has [12 16]
    Allreduce by SUM with MPI.IN_PLACE: rank 0 has [12 16]
    Allreduce by SUM: rank 3 has [12 16]
    Allreduce by SUM with MPI.IN_PLACE: rank 3 has [12 16]
    Allreduce by SUM: rank 2 has [12 16]
    Allreduce by SUM with MPI.IN_PLACE: rank 2 has [12 16]
    Allreduce by SUM with MPI.IN_PLACE: rank 1 has [12 16]
    

    Above we introduced the allreduce operations in mpi4py. In the next article we will introduce the reduce-scatter operations.
