Module RMPI
Rimu.RMPI — Module

Module for providing MPI functionality for Rimu. This module is unexported. To use it, run

using Rimu.RMPI

MPIData
Rimu.RMPI.MPIData — Type

MPIData(data; kwargs...)

Wrapper used for signaling that this data is part of a distributed data structure and communication should happen with MPI. MPIData can generally be used where an AbstractDVec would be used otherwise. Unlike AbstractDVecs, MPIData does not support indexing, or iteration over keys, values, and pairs.
Keyword arguments:
- setup = mpi_point_to_point - controls the communication strategy:
  - mpi_one_sided uses one-sided communication with remote memory access (RMA), sets MPIOneSided strategy.
  - mpi_point_to_point uses MPIPointToPoint strategy.
  - mpi_all_to_all uses MPIAllToAll strategy.
  - mpi_no_exchange sets MPINoWalkerExchange strategy. Experimental. Use with caution!
- comm = mpi_comm()
- root = mpi_root
- The rest of the keyword arguments are passed to setup.
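A minimal usage sketch (the address and coefficient below are purely illustrative, and the script is assumed to be launched under MPI, e.g. with mpirun):

using Rimu
using Rimu.RMPI

dv = DVec(BoseFS((1, 2, 0)) => 1.0)   # an ordinary DVec with a single entry
md = Rimu.RMPI.MPIData(dv)            # declare it MPI-distributed; defaults to point-to-point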
Setup functions
The following distribution strategies are available. The setup functions are unexported.
Rimu.RMPI.mpi_point_to_point — Function

mpi_point_to_point(data, comm = mpi_comm(), root = mpi_root)

Declare data as mpi-distributed and set communication strategy to point-to-point.
Sets up the MPIData structure with MPIPointToPoint strategy.
Rimu.RMPI.mpi_one_sided — Function

mpi_one_sided(data, comm = mpi_comm(), root = mpi_root; capacity)

Declare data as mpi-distributed and set communication strategy to one-sided with remote memory access (RMA). capacity sets the capacity of the RMA windows.
Sets up the MPIData structure with MPIOneSided strategy.
Rimu.RMPI.mpi_all_to_all — Function

mpi_all_to_all(data, comm = mpi_comm(), root = mpi_root)

Declare data as mpi-distributed and set communication strategy to all-to-all.
Sets up the MPIData structure with MPIAllToAll strategy.
Rimu.RMPI.mpi_no_exchange — Function

mpi_no_exchange(data, comm = mpi_comm(), root = mpi_root)

Declare data as mpi-distributed and set communication strategy to MPINoWalkerExchange. Sets up the MPIData structure with MPINoWalkerExchange strategy.
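These setup functions are typically not called directly; instead they are passed via the setup keyword of MPIData, which forwards any remaining keyword arguments to them. A sketch, reusing the dv from the example above (the capacity value is arbitrary):

md_p2p = Rimu.RMPI.MPIData(dv)                                    # default: point-to-point
md_a2a = Rimu.RMPI.MPIData(dv; setup = Rimu.RMPI.mpi_all_to_all)  # all-to-all exchange
md_rma = Rimu.RMPI.MPIData(dv; setup = Rimu.RMPI.mpi_one_sided, capacity = 10_000)  # capacity is forwarded to mpi_one_sided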
Strategies
Rimu.RMPI.MPIPointToPoint — Type

MPIPointToPoint{N,A}

Point-to-point communication strategy. Uses circular communication with MPI.Send and MPI.Recv!.
Constructor
MPIPointToPoint(::Type{P}, np, id, comm): Construct an instance with pair type P on np processes with current rank id.
Rimu.RMPI.MPIOneSided — Type

MPIOneSided(nprocs, myrank, comm, ::Type{T}, capacity)

Communication buffer for use with MPI one-sided communication (remote memory access). Up to capacity elements of type T can be exchanged between MPI ranks via put. It is important that isbitstype(T) == true. Objects of type MPIOneSided have to be freed manually with a (blocking) call to free().
Rimu.RMPI.MPIAllToAll — Type

MPIAllToAll

All-to-all communication strategy. The communication works in two steps: first MPI.Alltoall! is used to communicate the number of walkers each rank wants to send to other ranks, then MPI.Alltoallv! is used to send the walkers around.
Constructor
MPIAllToAll(::Type{P}, np, id, comm): Construct an instance with pair type P on np processes with current rank id.
Rimu.RMPI.MPINoWalkerExchange — Type

MPINoWalkerExchange(nprocs, my_rank, comm)

Strategy for not exchanging walkers between ranks. Consequently there will be no cross-rank annihilations.
MPI convenience functions
Rimu.RMPI.mpi_root — Constant

Default MPI root for RMPI.
Rimu.DictVectors.mpi_comm — Method

Default MPI communicator for RMPI.
Rimu.DictVectors.mpi_rank — Function

mpi_rank(comm = mpi_comm())

Return the current MPI rank.
Rimu.DictVectors.mpi_size — Function

mpi_size(comm = mpi_comm())

Size of MPI communicator.
Rimu.RMPI.is_mpi_root — Function

is_mpi_root(root = mpi_root)

Returns true if called from the root rank.
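A sketch of how these queries are typically combined (assumes the script runs under MPI):

r = Rimu.RMPI.mpi_rank()   # 0-based rank of this process
n = Rimu.RMPI.mpi_size()   # total number of ranks in the communicator
if Rimu.RMPI.is_mpi_root()
    println("running on $n rank(s)")   # executed on the root rank only
end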
Rimu.RMPI.mpi_allprintln — Method

mpi_allprintln(args...)

Print a message to stdout from each rank separately, in order. MPI synchronizing.
Rimu.RMPI.mpi_barrier — Function

mpi_barrier(comm = mpi_comm())

The MPI barrier with optional argument. MPI synchronizing.
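For example, a brief sketch of ordered, rank-labelled output followed by an explicit synchronization point:

Rimu.RMPI.mpi_allprintln("hello from rank", Rimu.RMPI.mpi_rank())   # printed in rank order
Rimu.RMPI.mpi_barrier()                                             # all ranks wait here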
Rimu.RMPI.mpi_combine_walkers! — Method

mpi_combine_walkers!(target, source, [strategy])

Distribute the entries of source to the target data structure such that all entries in the target are on the process with the correct mpi rank as controlled by targetrank(). MPI synchronizing.
Note: the storage of the source is communicated rather than the source itself.
Rimu.RMPI.mpi_seed! — Function

mpi_seed!(seed = rand(Random.RandomDevice(), UInt))

Re-seed the random number generators in an MPI-safe way. If seed is provided, the random numbers from rand will follow a deterministic sequence.
Independence of the random number generators on different MPI ranks is achieved by adding hash(mpi_rank()) to seed.
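A sketch of reproducible yet rank-independent seeding (the literal seed is arbitrary):

Rimu.RMPI.mpi_seed!(17)   # each rank effectively seeds with 17 + hash(mpi_rank())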
Rimu.RMPI.next_mpiID — Function

next_mpiID()

Produce a new ID number for MPI distributed objects. Uses an internal counter.
Rimu.RMPI.targetrank — Method

targetrank(key, np)

Compute the rank where the key belongs.
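A sketch of checking which of four ranks owns a given key (the address is illustrative):

key   = BoseFS((1, 2, 0))
owner = Rimu.RMPI.targetrank(key, 4)   # rank in 0:3 where this key belongs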
Rimu.RMPI.@mpi_root — Macro

@mpi_root expr

Evaluate expression only on the root rank. Extra care needs to be taken as expr must not contain any code that involves synchronizing MPI operations, i.e. actions that would require synchronous action of all MPI ranks.
Example:

wn = walkernumber(dv) # an MPI synchronising function call that gathers
                      # information from all MPI ranks
@mpi_root @info "The current walker number is" wn # print info message on root only