Module DictVectors
Rimu.DictVectors — Module
Module that provides concrete implementations of the AbstractDVec interface.
- DVec: basic AbstractDVec
- PDVec: parallel AbstractDVec with MPI and initiator support
- InitiatorDVec: allows storing information about initiator status
See Interfaces.
Rimu.Interfaces.AbstractDVec — Type
AbstractDVec{K,V}
Abstract data type for vector-like data structures with sparse storage. While conceptually AbstractDVecs represent elements of a vector space over a scalar type V, they are indexed by an arbitrary type K (which may be non-integer), similar to dictionaries. They support the interface from VectorInterface.jl and are designed to work well for quantum Monte Carlo with ProjectorMonteCarloProblem and for matrix-free linear algebra with KrylovKit.
Concrete implementations are available as PDVec, DVec, and InitiatorDVec.
AbstractDVecs have a StochasticStyle which selects the spawning algorithm in FCIQMC. Looking up an element that is not stored in the AbstractDVec should return a zero, and setting a value to zero should remove it from the vector. To iterate over an AbstractDVec, use keys, pairs, or values. When possible, use reduction functions such as sum or mapreduce.
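For illustration, a minimal sketch of this behaviour using a DVec keyed by Symbols (the values in the comments follow from the rules above):
dv = DVec(:a => 3, :b => -2)    # two stored entries
dv[:c]                          # 0: looking up a missing key returns zero
dv[:b] = 0                      # setting a value to zero removes the entry
length(dv)                      # 1
sum(abs, values(dv))            # 3; reductions are the preferred way to iterate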
Interface
The interface is similar to the AbstractDict interface, but with the changed behaviour as noted above.  Implement what would be needed for the AbstractDict interface (pairs, keys, values, setindex!, getindex, delete!, length, empty, empty!) and, in addition:
- StochasticStyle
- storage: returns an AbstractDict storing the raw data, possibly with a different valtype than V.
- deposit!
A default implementation for the VectorInterface.jl interface is provided through the above functions.
See also DictVectors, Interfaces.
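As a hedged sketch, the interface functions can be probed on an existing implementation; the exact return values depend on the chosen style:
dv = DVec(:a => 1)
StochasticStyle(dv)            # e.g. IsStochasticInteger{Int64}()
storage(dv)                    # the underlying AbstractDict holding the raw data
deposit!(dv, :b, 2, :a => 1)   # add 2 at key :b, with :a => 1 as the parent pair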
Concrete implementations
Rimu.DictVectors.DVec — Type
DVec(args...; kwargs...) <: AbstractDVec{K,V}
Dictionary-based vector-like data structure for storing key <: K and value <: V pairs for use with ProjectorMonteCarloProblem and ExactDiagonalizationProblem. DVec supports the interface from VectorInterface.jl but stores only non-zero values.
DVec is fast but does not support Initiators (see InitiatorDVec) or parallel and distributed vector operations (see PDVec).
See also: AbstractDVec, InitiatorDVec, PDVec.
Constructors
- DVec(dict::AbstractDict[; style, capacity]): create a DVec with dict for storage. Note that the data may or may not be copied.
- DVec(args...[; style, capacity]): args... can be key => value Pairs or an iterator over Pairs or Tuples.
- DVec{K,V}([; style, capacity]): create an empty DVec{K,V}.
- DVec(dv::AbstractDVec[; style, capacity]): create a DVec with the same contents as dv. The style is inherited from dv by default.
Keyword arguments
- style::StochasticStyle: determines the mode of stochastic operations. See StochasticStyle. The default style is selected based on the DVec's valtype (see default_style). If a style is given and the valtype does not match the style's eltype, the values are converted to an appropriate type.
- capacity: optional; sets the initial size of the DVec via Base.sizehint!.
Examples
julia> dv = DVec(:a => 1)
DVec{Symbol,Int64} with 1 entry, style = IsStochasticInteger{Int64}()
  :a => 1
julia> dv = DVec(:a => 2, :b => 3; style=IsDeterministic())
DVec{Symbol,Float64} with 2 entries, style = IsDeterministic{Float64}()
  :a => 2.0
  :b => 3.0
Rimu.DictVectors.InitiatorDVec — Type
InitiatorDVec(args...; kwargs...) <: AbstractDVec{K,V}
Dictionary-based vector-like data structure for storing key <: K and value <: V pairs for use with ProjectorMonteCarloProblem and KrylovKit.jl. See AbstractDVec. Functionally identical to DVec, but contains InitiatorValues internally in order to facilitate initiator methods. Initiator methods for controlling the Monte Carlo sign problem were first introduced in J. Chem. Phys. 132, 041103 (2010). How the initiators are handled is controlled by specifying an InitiatorRule with the initiator keyword argument (see below). Note that values need to be comparable to real numbers with isless. A brief construction sketch follows the keyword arguments below.
See also: AbstractDVec, DVec, PDVec.
Constructors
- InitiatorDVec(dict::AbstractDict[; style, initiator, capacity]): create an InitiatorDVec with dict for storage. Note that the data may or may not be copied.
- InitiatorDVec(args...[; style, initiator, capacity]): args... can be key => value Pairs or an iterator over Pairs or Tuples.
- InitiatorDVec{K,V}([; style, initiator, capacity]): create an empty InitiatorDVec{K,V}.
- InitiatorDVec(dv::AbstractDVec[; style, initiator, capacity]): create an InitiatorDVec with the same contents as dv. The style is inherited from dv by default.
Keyword arguments
- style: a valid StochasticStyle. The default is selected based on the InitiatorDVec's valtype (see default_style). If a style is given and the valtype does not match the style's eltype, the values are converted to an appropriate type.
- initiator = Initiator(1): a valid InitiatorRule. See Initiator.
- capacity: indicative size as Int. Optional. Sets the initial size of the InitiatorDVec via Base.sizehint!.
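A brief construction sketch; the Symbol keys and numbers are purely illustrative:
idv = InitiatorDVec(:a => 2.0, :b => 0.5; initiator=Initiator(1.0), style=IsDeterministic())
StochasticStyle(idv)   # expected: IsDeterministic{Float64}()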
Rimu.DictVectors.PDVec — Type
PDVec{K,V}(; kwargs...) <: AbstractDVec{K,V}
PDVec(iter; kwargs...)
PDVec(pairs...; kwargs...)
Dictionary-based vector-like data structure for storing key <: K and value <: V pairs for use with ProjectorMonteCarloProblem and ExactDiagonalizationProblem. Only pairs with non-zero values are stored. PDVec supports various linear algebra operations such as norm and dot, as well as the full interface defined in VectorInterface.
The P in PDVec stands for parallel. PDVecs perform reductions such as mapreduce, foreach, and various linear algebra operations in a threaded manner on the available Julia threads. If MPI is available, the storage is distributed over the MPI ranks and reductions operate in parallel on the full distributed data structure. As such, it is not recommended to iterate over pairs, keys, or values directly unless the iteration is explicitly performed on the localpart of the vector.
See also: AbstractDVec, DVec, InitiatorDVec.
Keyword arguments
- style = default_style(V): a StochasticStyle that is used to select the spawning strategy in the FCIQMC algorithm.
- initiator = NonInitiator(): an InitiatorRule, used in FCIQMC to remove the sign problem.
- communicator: a Communicator that controls how operations are performed when using MPI. The defaults are NotDistributed when not using MPI and AllToAll when using MPI.
Example
julia> address = FermiFS2C((1,1,0,0), (0,0,1,1));
julia> op = HubbardMom1D(address; t=4/π^2, u=4);
julia> pv = PDVec(address => 1.0)
1-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↑↓↓⟩" => 1.0
julia> pv = op * pv
7-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↑↓↓⟩" => 4.0
  fs"|↑↓↑↓⟩" => 1.0
  fs"|⋅⇅⇅⋅⟩" => 1.0
  fs"|↓↑↑↓⟩" => -1.0
  fs"|↑↓↓↑⟩" => -1.0
  fs"|⇅⋅⋅⇅⟩" => 1.0
  fs"|↓↑↓↑⟩" => 1.0
julia> scale!(pv, -1); pv
7-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↑↓↓⟩" => -4.0
  fs"|↑↓↑↓⟩" => -1.0
  fs"|⋅⇅⇅⋅⟩" => -1.0
  fs"|↓↑↑↓⟩" => 1.0
  fs"|↑↓↓↑⟩" => 1.0
  fs"|⇅⋅⋅⇅⟩" => -1.0
  fs"|↓↑↓↑⟩" => -1.0
julia> dest = similar(pv)
0-element PDVec: style = IsDeterministic{Float64}()
julia> map!(x -> x + 2, dest, values(pv))
7-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↑↓↓⟩" => -2.0
  fs"|↑↓↑↓⟩" => 1.0
  fs"|⋅⇅⇅⋅⟩" => 1.0
  fs"|↓↑↑↓⟩" => 3.0
  fs"|↑↓↓↑⟩" => 3.0
  fs"|⇅⋅⋅⇅⟩" => 1.0
  fs"|↓↑↓↑⟩" => 1.0
julia> sum(values(pv))
-6.0
julia> dot(dest, pv)
10.0
julia> dot(dest, op, pv)
44.0
Extended Help
Segmentation
The vector is split into Threads.nthreads() subdictionaries called segments. Which dictionary a key-value pair is mapped to is determined by the hash of the key. The purpose of this segmentation is to allow parallel processing - functions such as mapreduce, add! or dot (full list below) process each subdictionary on a separate thread.
See also PDWorkingMemory.
MPI
When MPI is active, all parallel reductions are automatically reduced across MPI ranks with a call to MPI.Allreduce!.
In a distributed setting, PDVec does not support iteration without first making it explicit that the iteration is to be performed only on the local segments of the vector. This is done with localpart. In general, even when not using MPI, it is best practice to use localpart when explicit iteration is required.
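For example, explicit iteration over a PDVec pv would look like this (a sketch; the loop body is up to the user):
for (k, v) in pairs(localpart(pv))
    # process only the key-value pairs stored on this MPI rank
end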
Use with KrylovKit
PDVec is compatible with eigsolve from KrylovKit.jl. When used, the diagonalisation is performed in a threaded and distributed manner. Using multiple MPI ranks with this method does not distribute the memory load effectively, but does result in significant speedups.
Example
julia> using KrylovKit
julia> address = BoseFS((0,0,5,0,0));
julia> op = HubbardMom1D(address; u=6.0);
julia> pv = PDVec(address => 1.0);
julia> results = eigsolve(op, pv, 4, :SR);
julia> results[1][1:4]
4-element Vector{Float64}:
 -3.431115689232223
  1.182174860261238
  3.737775375308286
  6.996390417443125
Parallel functionality
The following functions are threaded and MPI-compatible (a brief usage sketch follows the list):
- From Base: mapreduce and derivatives (sum, prod, reduce, ...), all, any, map! (on values only), +, -, *
- From LinearAlgebra: rmul!, lmul!, mul!, axpy!, axpby!, dot, norm, normalize, normalize!
- The full interface defined in VectorInterface.jl
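A hedged usage sketch of a few of these operations (Symbol keys are used purely for brevity; any hashable key type works the same way):
using LinearAlgebra

v = PDVec(:a => 1.0, :b => 2.0)
w = PDVec(:a => 0.5)
dot(v, w)           # 0.5; threaded and reduced over MPI ranks
norm(v)             # 2-norm, computed in parallel
axpy!(2.0, w, v)    # v ← v + 2.0 * w, as defined by VectorInterface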
Interface functions
Rimu.Interfaces.deposit! — Function
deposit!(w::InitiatorDVec, add, val, p_add=>p_val)
Add val into w at address add as an AbstractInitiatorValue.
deposit!(w::AbstractDVec, add, val, parent::Pair)
Add val into w at address add, taking into account initiator rules if applicable. parent contains the address => value pair from which the pair add => val was created. InitiatorDVec can intercept this and add its own functionality.
Rimu.Interfaces.storage — Function
storage(dvec) -> AbstractDict
Return the raw storage associated with dvec as an AbstractDict. Used in MPI communication.
OrderedCollections.freeze — Function
freeze(dv)
Create a "frozen" version of dv which can no longer be modified or used in the conventional manner, but supports faster dot products.
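A usage sketch, assuming a dot product between a frozen and a regular vector is supported as suggested above (numbers are illustrative):
using LinearAlgebra

dv = DVec(:a => 1.0, :b => 2.0)
fv = freeze(dv)     # a FrozenDVec; can no longer be modified
dot(fv, dv)         # 5.0; such dot products are faster when fv is reused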
Rimu.Interfaces.localpart — Function
localpart(dv) -> AbstractDVec
Get the part of dv that is located on this MPI rank. Returns dv itself for vectors that can't be MPI distributed (DVecs and InitiatorDVecs).
Rimu.Interfaces.apply_operator! — Function
apply_operator!(working_memory, target, source, operator, boost=1, compress=Val(true)) ->
    stat_names, stats, working_memory, target
Perform a single matrix(/operator)-vector multiplication:
\[v^{(n + 1)} = \hat{T} v^{(n)} ,\]
where $\hat{T}$ is the operator, $v^{(n+1)}$ is the target and $v^{(n)}$ is the source. The working_memory can be used as temporary storage.
The boost argument is passed to apply_column! and increases the number of spawns performed. For the operator to be applied without compressing the vector after, set compress to Val(false).
Whether the operation is performed in a stochastic, semistochastic, or deterministic way is controlled by the trait StochasticStyle(target). See StochasticStyle.
Returns the step stats generated by the StochasticStyle, the working memory and the target vector. target and working_memory may be mutated and/or swapped.
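A rough usage sketch, assuming op is a Hamiltonian compatible with the vector v (names are illustrative; see working_memory below):
wm = working_memory(v)    # temporary storage compatible with v
w  = similar(v)           # target vector
stat_names, stats, wm, w = apply_operator!(wm, w, v, op)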
Rimu.Interfaces.sort_into_targets! — Function
sort_into_targets!(target, source, stats) -> target, source, agg_stats
Aggregate coefficients from source to target and from stats to agg_stats according to thread- or MPI-level parallelism.
Returns the new target and source, as the sorting process may involve swapping them.
Rimu.Interfaces.working_memory — Function
working_memory(dv::AbstractDVec)
Create a working memory instance compatible with dv. The working memory must be compatible with sort_into_targets! and apply_operator!.
Base.mapreduce — Function
mapreduce(f, op, keys(::PDVec); [init])
mapreduce(f, op, values(::PDVec); [init])
mapreduce(f, op, pairs(::PDVec); [init])
Perform a parallel reduction operation on PDVecs. MPI-compatible. Used in the definition of various functions from Base such as reduce, sum, and prod.
init, if provided, must be a neutral element for op.
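For example, on a PDVec pv with real values (a sketch):
mapreduce(abs2, +, values(pv); init=0.0)              # squared 2-norm, computed in parallel
mapreduce(p -> abs(last(p)), +, pairs(pv); init=0.0)  # 1-norm, via pairs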
Rimu.Interfaces.sum_mutating! — Function
sum_mutating!(accumulator, [f! = add!], keys(::PDVec); [init])
sum_mutating!(accumulator, [f! = add!], values(::PDVec); [init])
sum_mutating!(accumulator, [f! = add!], pairs(::PDVec); [init])
Perform a parallel sum on PDVecs for vector-valued results while minimizing allocations. The result of the sum will be added to and stored in accumulator. MPI-compatible. If f! is provided, it must accept two arguments, the first being the accumulator and the second the element of the iterator. Otherwise, add! is used.
If provided, init must be a neutral element for + and of the same type as accumulator.
See also mapreduce.
sum_mutating!(accumulator, [f! = add!], iterator)
Add the sum of elements in iterator to accumulator, storing the result in accumulator. If f! is provided, it must accept two arguments, the first being the accumulator and the second the element of the iterator. Otherwise, add! is used.
See also mapreduce.
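A rough sketch of one possible use, under the assumption that f! adds each element's contribution to the accumulator in place, as described above:
# accumulate two statistics of the values of pv in a single pass
f!(acc, v) = (acc[1] += v; acc[2] += abs2(v); acc)
acc = sum_mutating!(zeros(2), f!, values(pv))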
Supported operations
AbstractDVecs generally support most operations that are defined on Vectors and Dicts. This includes the interface from VectorInterface.jl, and many functions from the LinearAlgebra standard library.
A significant difference between AbstractDVecs, Vectors, and Dicts is that iteration on them is disabled by default. Iteration must be explicitly performed on keys, values, or pairs; however, it is highly recommended to use mapreduce, reduce, or similar functions when performing reductions, as that makes the operations compatible with MPI.
In addition, Rimu defines the following function.
Rimu.DictVectors.walkernumber — Function
walkernumber(v)
Compute the number of walkers in v. It is used for updating the shift. Overload this function to modify population control.
In most cases walkernumber(v) is identical to norm(v, 1). For AbstractDVecs with complex integer coefficients it reports the one norm separately for the real and the imaginary part as a ComplexF64. See Norm1ProjectorPPop.
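For instance (a sketch; the complex case assumes integer coefficients as described above):
v = DVec(:a => 2, :b => -3)
walkernumber(v)            # same as norm(v, 1), i.e. 5

w = DVec(:a => 2 - 3im)    # complex integer coefficients
walkernumber(w)            # 2.0 + 3.0im as ComplexF64 (one-norm per population)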
Rimu.DictVectors.walkernumber_and_length — Function
walkernumber_and_length(v)
Compute walkernumber and length at the same time. When MPI is used, this is more efficient than calling them separately.
Rimu.Interfaces.dot_from_right — Function
dot_from_right(w, op::AbstractObservable, v)
Internal function that evaluates the 3-argument dot() function in order from right to left.
Projectors
Rimu.DictVectors.AbstractProjector — Type
Abstract supertype for projectors to be used in lieu of DVecs or Vectors in dot products. Implemented subtypes are listed below.
See also PostStepStrategy for use of projectors in ProjectorMonteCarloProblem.
Interface
Define a method for LinearAlgebra.dot(projector, v).
Rimu.DictVectors.NormProjector — Type
NormProjector() <: AbstractProjector
Results in computing the one-norm when used in dot(). E.g.
dot(NormProjector(),x)
-> norm(x,1)
NormProjector() thus represents the vector sign.(x).
See also PostStepStrategy, and AbstractProjector for use of projectors in ProjectorMonteCarloProblem.
Rimu.DictVectors.Norm2Projector — Type
Norm2Projector() <: AbstractProjector
Results in computing the two-norm when used in dot(). E.g.
dot(Norm2Projector(),x)
-> norm(x,2) # with type Float64
See also PostStepStrategy, and AbstractProjector for use of projectors in ProjectorMonteCarloProblem.
Rimu.DictVectors.UniformProjector — Type
UniformProjector() <: AbstractProjector
Represents a vector with all elements 1. To be used with dot(). Minimizes memory allocations.
UniformProjector()⋅v == sum(v)
dot(UniformProjector(), LO, v) == sum(LO*v)
See also PostStepStrategy, and AbstractProjector for use of projectors in ProjectorMonteCarloProblem.
Rimu.DictVectors.Norm1ProjectorPPop — Type
Norm1ProjectorPPop() <: AbstractProjector
Results in computing the one-norm per population when used in dot(). E.g.
dot(Norm1ProjectorPPop(),x)
-> norm(real.(x),1) + im*norm(imag.(x),1)
See also PostStepStrategy, and AbstractProjector for use of projectors in ProjectorMonteCarloProblem.
Rimu.DictVectors.FrozenDVec — Type
FrozenDVec
A frozen DVec cannot be modified or used in the conventional manner, but supports faster dot products. See: freeze.
Rimu.DictVectors.FrozenPDVec — Type
FrozenPDVec
Parallel version of FrozenDVec. See: freeze, PDVec.
Initiator rules
Rimu.DictVectors.InitiatorRule — Type
InitiatorRule{V}
Abstract type for defining initiator rules for InitiatorDVec. Concrete implementations are listed below.
Extended Help
InitiatorRules define how to store and retrieve data from associated AbstractInitiatorValues. When defining a new InitiatorRule, also define initiator_valtype, to_initiator_value, and from_initiator_value (documented below).
Rimu.DictVectors.AbstractInitiatorValue — Type
abstract type AbstractInitiatorValue{V}
A value equipped with additional information that enables a variation of the initiator approximation. To be used with PDVec, InitiatorDVec and InitiatorRules.
Must define:
- Base.zero, Base.:+, Base.:-, Base.:*
- from_initiator_value and to_initiator_value
Rimu.DictVectors.InitiatorValue — Type
InitiatorValue{V}(; safe::V, unsafe::V, initiator::V) where V
Composite "walker" with three fields. For use with InitiatorDVecs.
Rimu.DictVectors.initiator_valtype — Function
initiator_valtype(rule::InitiatorRule, T)
Return the AbstractInitiatorValue{T} that is employed by the rule.
Rimu.DictVectors.to_initiator_value — Function
to_initiator_value(::InitiatorRule, k::K, v::V, parent)
Convert v to an AbstractInitiatorValue, taking the initiator rule and the parent that spawned it into account.
Rimu.DictVectors.from_initiator_value — Function
from_initiator_value(i::InitiatorRule, v::AbstractInitiatorValue)
Convert the AbstractInitiatorValue v into a scalar value according to the InitiatorRule i.
Rimu.DictVectors.Initiator — Type
Initiator(threshold = 1.0) <: InitiatorRule
Initiator rule to be passed to PDVec or InitiatorDVec. An initiator is a configuration add with a coefficient with magnitude abs(v[add]) > threshold. The threshold can be passed as a keyword argument. Rules:
- Initiators can spawn anywhere.
- Non-initiators can spawn to initiators.
See InitiatorRule.
Rimu.DictVectors.SimpleInitiator — Type
SimpleInitiator(threshold = 1.0) <: InitiatorRule
Initiator rule to be passed to PDVec or InitiatorDVec. An initiator is a configuration add with a coefficient with magnitude abs(v[add]) > threshold. The threshold can be passed as a keyword argument. Rules:
- Initiators can spawn anywhere.
- Non-initiators cannot spawn.
See InitiatorRule.
Rimu.DictVectors.CoherentInitiator — Type
CoherentInitiator(threshold = 1.0) <: InitiatorRule
Initiator rule to be passed to PDVec or InitiatorDVec. An initiator is a configuration add with a coefficient with magnitude abs(v[add]) > threshold. The threshold can be passed as a keyword argument. Rules:
- Initiators can spawn anywhere.
- Non-initiators can spawn to initiators.
- Multiple non-initiators can spawn to a single non-initiator if their contributions add up to a value greater than the initiator threshold.
See InitiatorRule.
Rimu.DictVectors.NonInitiator — Type
NonInitiator() <: InitiatorRule
Initiator rule that disables the approximation. This is the default setting for PDVec.
See InitiatorRule.
Rimu.DictVectors.NonInitiatorValue — Type
NonInitiatorValue{V}
Value that does not contain any additional information. Used with NonInitiator, the default initiator rule for PDVec.
PDVec internals
Working memory
Rimu.DictVectors.FirstColumnIterator — Type
FirstColumnIterator{W,D} <: AbstractVector{D}
Iterates segments in the first column of a working memory that belong to a specified rank.
See PDWorkingMemory, remote_segments and local_segments.
Rimu.DictVectors.PDWorkingMemory — Type
PDWorkingMemory(t::PDVec)
The working memory that handles threading and MPI distribution for operations that involve operators, such as FCIQMC propagation, operator-vector multiplication and three-way dot products with PDVecs.
The working memory is structured as a two-dimensional array of segments, which themselves are Dicts (see PDVec). The number of rows in this array is equal to the number of segments across all MPI ranks (covering the entire address space), while the number of columns corresponds to the number of segments in the current MPI rank (i.e. each column corresponds to the part of the address space that is local to the current rank).
The purpose of this organisation is to allow spawning in parallel without using locks or atomic operations. The spawning is performed by applying the following sequence of operations:
- perform_spawns!: each segment in the PDVec is multiplied by the operator independently, with the results being stored in a column of the working memory.
- collect_local!: the rows of the working memory are summed to the first column.
- synchronize_remote!: the segments corresponding to other MPI ranks are distributed and transferred to the first column.
- move_and_compress!: the results are stochastically compressed and moved to the result PDVec.
When used with three-argument dot products, a full copy of the left-hand side vector is materialized in the first column of the working memory on all ranks.
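A minimal construction sketch (the address is illustrative):
pv = PDVec(BoseFS((1, 1, 1)) => 1.0)
wm = PDWorkingMemory(pv)
num_rows(wm), num_columns(wm)   # segments across all ranks, segments on this rank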
Rimu.DictVectors.PDWorkingMemoryColumn — Type
PDWorkingMemoryColumn
A column in PDWorkingMemory. Supports getindex, deposit! and StochasticStyle and acts as a target for spawning. Can be used as a target in a three-way dot product.
Rimu.DictVectors.collect_local! — Method
collect_local!(w::PDWorkingMemory)
Sum each row in w and store the result in the first column. This step must be performed before using local_segments or remote_segments to move the values elsewhere.
See PDWorkingMemory.
Rimu.DictVectors.first_column — Method
first_column(::PDWorkingMemory)
Return the first column of the working memory. This is where the vectors are collected with collect_local!, synchronize_remote!, and copy_to_local!.
See PDWorkingMemory.
Rimu.DictVectors.local_segments — Method
local_segments(w::PDWorkingMemory)
Returns an iterator over the segments in the first column of w on the current rank. Iterates Dicts.
See PDWorkingMemory.
Rimu.DictVectors.move_and_compress! — Method
move_and_compress!(dst::PDVec, src::PDWorkingMemory)
move_and_compress!(::CompressionStrategy, dst::PDVec, src::PDWorkingMemory)
Move the values in src to dst, compressing them according to the CompressionStrategy on the way. This step can only be performed after collect_local! and synchronize_remote!.
See PDWorkingMemory.
Rimu.DictVectors.num_columns — Method
num_columns(w::PDWorkingMemory) -> Int
Number of columns in the working memory. The number of columns is equal to the number of segments in the local MPI rank.
See PDWorkingMemory.
Rimu.DictVectors.num_rows — Method
num_rows(w::PDWorkingMemory) -> Int
Number of rows in the working memory. The number of rows is equal to the number of segments across all MPI ranks.
See PDWorkingMemory.
Rimu.DictVectors.perform_spawns! — Method
perform_spawns!(w::PDWorkingMemory, v::PDVec, ham, boost)
Perform spawns from v through ham to w. boost increases the number of spawns without affecting the expectation value of the process.
See PDVec and PDWorkingMemory.
Rimu.DictVectors.remote_segments — Method
remote_segments(w::PDWorkingMemory, rank_id)
Returns an iterator over the segments in the first column of w that belong to rank rank_id. Iterates Dicts.
See PDWorkingMemory.
Rimu.DictVectors.synchronize_remote! — Method
synchronize_remote!([::Communicator,] w::PDWorkingMemory) -> names, values
Synchronize non-local segments across MPI and add the results to the first column. Controlled by the Communicator. This can only be performed after collect_local!.
Should return a Tuple of names and a Tuple of values to report.
See PDWorkingMemory.
Communicators
Rimu.DictVectors.AllToAll — Type
AllToAll{K,V}(; mpi_comm, n_segments, report) <: Communicator
Communicator that uses collective communication using MPI.Alltoall[v]!.
Keyword arguments
- mpi_comm=MPI.COMM_WORLD: the MPI communicator to use.
- n_segments=Threads.nthreads(): the number of segments per rank to use. Should match the PDVec the communicator is used with.
- report=false: if set to true, report MPI communication times during a projector Monte Carlo run.
See also: Communicator.
Rimu.DictVectors.Communicator — Type
abstract type Communicator
Communicators are used to handle MPI communication when using PDVecs. Currently, three implementations are provided: NotDistributed, AllToAll and PointToPoint. The communicator is picked automatically according to the number of MPI ranks available.
When implementing a communicator, use local_segments and remote_segments.
Interface
Optional interface
- is_distributed: defaults to returning true.
- merge_remote_reductions: defaults to using MPI.Allreduce.
- total_num_segments: defaults to n * mpi_size.
- target_segment: defaults to using fastrange to pick the segment.
See also: PDVec, PDWorkingMemory.
Rimu.DictVectors.LocalPart — Type
LocalPart <: Communicator
When localpart is used, the vector's Communicator is replaced with this. This allows iteration and local reductions.
Rimu.DictVectors.NestedSegmentedBuffer — Type
NestedSegmentedBuffer{T}(nrows) <: AbstractMatrix{AbstractVector{T}}
Matrix of vectors stored in a single buffer with collective MPI communication support. The number of rows in the matrix is fixed to nrows.
Used in the AllToAll communication strategy, where each column corresponds to an MPI rank and each row corresponds to a segment in the PDVec.
Supported operations
- append_collections!: add a column to the matrix.
- append_empty_column!: add an empty column to the matrix.
- mpi_exchange_alltoall!: each rank sends the i-th column of the matrix to the (i-1)-st rank.
- mpi_exchange_allgather!: each rank sends the first column of the matrix to all ranks.
See also: SegmentedBuffer.
Rimu.DictVectors.NotDistributed — Type
NotDistributed <: Communicator
This Communicator is used when MPI is not available.
Rimu.DictVectors.PointToPoint — Type
PointToPoint{K,V}(; mpi_comm, report) <: Communicator
MPI Communicator that uses circular communication using MPI.Isend and MPI.Recv!.
Keyword arguments
- mpi_comm=MPI.COMM_WORLD: the MPI communicator to use.
- report=false: if set to true, report MPI communication times during a projector Monte Carlo run.
Rimu.DictVectors.SegmentedBuffer — Type
SegmentedBuffer{T}() <: AbstractVector{AbstractVector{T}}
Behaves like a vector of vectors, but is stored in a single buffer. It can be sent/received over MPI keeping its structure intact. Used in the PointToPoint communication strategy.
Supported operations
- replace_collections!: insert data into the buffers
- mpi_send: send the contents of a buffer to a given rank
- mpi_recv_any!: receive a message sent by mpi_send from any rank, storing the contents in this buffer
See also: NestedSegmentedBuffer.
Rimu.DictVectors.append_collections! — Method
append_collections!(buf::NestedSegmentedBuffer, iters)
Add a column to buf. The length of iters should match buf.nrows.
See also: NestedSegmentedBuffer, append_empty_column!.
Rimu.DictVectors.append_empty_column! — Method
append_empty_column!(buf::NestedSegmentedBuffer)
Like append_collections!, but adds an empty column.
See also: NestedSegmentedBuffer, append_collections!.
Rimu.DictVectors.copy_to_local! — Function
copy_to_local!([::Communicator,] w::PDWorkingMemory, t::PDVec) -> PDVec
Copy pairs in t from all ranks and return them as a (possibly) new PDVec, possibly using the PDWorkingMemory as temporary storage.
See also: PDVec, PDWorkingMemory, Communicator.
Rimu.DictVectors.is_distributed — Method
is_distributed(::Communicator)
Return true if the Communicator operates over MPI.
Rimu.DictVectors.merge_remote_reductions — Method
merge_remote_reductions(c::Communicator, op, x)
Merge the results of reductions over MPI. By default, it uses MPI.Allreduce.
See also: Communicator.
Rimu.DictVectors.mpi_exchange_allgather! — Method
mpi_exchange_allgather!(src::NestedSegmentedBuffer, dst::NestedSegmentedBuffer, comm)
The first and only column in src will be sent to all ranks. The data from all ranks will be gathered in dst. After this operation, dst will contain the same data on all ranks.
See also NestedSegmentedBuffer, mpi_exchange_alltoall!.
Rimu.DictVectors.mpi_exchange_alltoall! — Method
mpi_exchange_alltoall!(src::NestedSegmentedBuffer, dst::NestedSegmentedBuffer, comm)
The n-th column from src will be sent to rank n-1. The data sent from rank r will be stored in the (r+1)-st column of dst.
See also: NestedSegmentedBuffer, mpi_exchange_allgather!.
Rimu.DictVectors.mpi_recv_any! — Method
mpi_recv_any!(buf::SegmentedBuffer, comm::MPI.Comm) -> Int
Find a source that is ready to send a buffer and receive from it. Return the rank ID of the sender.
Rimu.DictVectors.mpi_send — Method
mpi_send(buf::SegmentedBuffer, dest, comm::MPI.Comm)
Send the buffer to the rank with id dest.
Rimu.DictVectors.replace_collections! — Method
replace_collections!(buf::SegmentedBuffer, iters)
Insert collections in iters into a SegmentedBuffer.
julia> using Rimu.DictVectors: SegmentedBuffer
julia> buf = SegmentedBuffer{Int}()
0-element SegmentedBuffer{Int64}
julia> Rimu.DictVectors.replace_collections!(buf, [[1,2,3], [4,5]])
2-element SegmentedBuffer{Int64}:
 [1, 2, 3]
 [4, 5]
julia> Rimu.DictVectors.replace_collections!(buf, [[1], [2,3], [4]])
3-element SegmentedBuffer{Int64}:
 [1]
 [2, 3]
 [4]
Rimu.DictVectors.target_segment — Method
target_segment(c::Communicator, k, num_segments) -> target, is_local
This function is used to determine where in the PDVec a key should be stored. If the key is local (stored on the same MPI rank), return its segment index and true. If the key is non-local, return any value and false.
See also: PDVec, Communicator.
Rimu.DictVectors.total_num_segments — Method
total_num_segments(c::Communicator, n) -> Int
Return the total number of segments, including the remote ones, where n is the number of local segments.
See also: PDVec, Communicator.
Rimu.mpi_comm — Function
mpi_comm(::Communicator) -> MPI.Comm
Return the MPI.Comm that the Communicator operates on.
Rimu.mpi_rank — Function
mpi_rank(::Communicator) -> Int
Return the MPI rank of the Communicator.
Rimu.mpi_size — Function
mpi_size(::Communicator) -> Int
Return the total number of MPI ranks in the Communicator.
Index
- Rimu.DictVectors
- Rimu.DictVectors.AbstractInitiatorValue
- Rimu.DictVectors.AbstractProjector
- Rimu.DictVectors.AllToAll
- Rimu.DictVectors.CoherentInitiator
- Rimu.DictVectors.Communicator
- Rimu.DictVectors.DVec
- Rimu.DictVectors.FirstColumnIterator
- Rimu.DictVectors.FrozenDVec
- Rimu.DictVectors.FrozenPDVec
- Rimu.DictVectors.Initiator
- Rimu.DictVectors.InitiatorDVec
- Rimu.DictVectors.InitiatorRule
- Rimu.DictVectors.InitiatorValue
- Rimu.DictVectors.LocalPart
- Rimu.DictVectors.NestedSegmentedBuffer
- Rimu.DictVectors.NonInitiator
- Rimu.DictVectors.NonInitiatorValue
- Rimu.DictVectors.Norm1ProjectorPPop
- Rimu.DictVectors.Norm2Projector
- Rimu.DictVectors.NormProjector
- Rimu.DictVectors.NotDistributed
- Rimu.DictVectors.PDVec
- Rimu.DictVectors.PDWorkingMemory
- Rimu.DictVectors.PDWorkingMemoryColumn
- Rimu.DictVectors.PointToPoint
- Rimu.DictVectors.SegmentedBuffer
- Rimu.DictVectors.SimpleInitiator
- Rimu.DictVectors.UniformProjector
- Rimu.Interfaces.AbstractDVec
- Base.mapreduce
- OrderedCollections.freeze
- Rimu.DictVectors.append_collections!
- Rimu.DictVectors.append_empty_column!
- Rimu.DictVectors.collect_local!
- Rimu.DictVectors.copy_to_local!
- Rimu.DictVectors.first_column
- Rimu.DictVectors.from_initiator_value
- Rimu.DictVectors.initiator_valtype
- Rimu.DictVectors.is_distributed
- Rimu.DictVectors.local_segments
- Rimu.DictVectors.merge_remote_reductions
- Rimu.DictVectors.move_and_compress!
- Rimu.DictVectors.mpi_exchange_allgather!
- Rimu.DictVectors.mpi_exchange_alltoall!
- Rimu.DictVectors.mpi_recv_any!
- Rimu.DictVectors.mpi_send
- Rimu.DictVectors.num_columns
- Rimu.DictVectors.num_rows
- Rimu.DictVectors.perform_spawns!
- Rimu.DictVectors.remote_segments
- Rimu.DictVectors.replace_collections!
- Rimu.DictVectors.synchronize_remote!
- Rimu.DictVectors.target_segment
- Rimu.DictVectors.to_initiator_value
- Rimu.DictVectors.total_num_segments
- Rimu.DictVectors.walkernumber
- Rimu.DictVectors.walkernumber_and_length
- Rimu.Interfaces.apply_operator!
- Rimu.Interfaces.deposit!
- Rimu.Interfaces.dot_from_right
- Rimu.Interfaces.localpart
- Rimu.Interfaces.sort_into_targets!
- Rimu.Interfaces.storage
- Rimu.Interfaces.sum_mutating!
- Rimu.Interfaces.working_memory
- Rimu.mpi_comm
- Rimu.mpi_rank
- Rimu.mpi_size