Module DictVectors
Rimu.DictVectors — Module
Module that provides concrete implementations of the AbstractDVec interface.
- DVec: basic AbstractDVec
- PDVec: parallel AbstractDVec with MPI and initiator support
- InitiatorDVec: allows storing information about initiator status
See Interfaces.
Rimu.Interfaces.AbstractDVec — Type
AbstractDVec{K,V}
Abstract data type for vector-like data structures with sparse storage. While conceptually AbstractDVecs represent elements of a vector space over a scalar type V, they are indexed by an arbitrary type K (could be non-integers), similar to dictionaries. They support the interface from VectorInterface.jl and are designed to work well for quantum Monte Carlo with lomc! and for matrix-free linear algebra with KrylovKit.
Concrete implementations are available as PDVec, DVec, and InitiatorDVec.
AbstractDVecs have a StochasticStyle which selects the spawning algorithm in FCIQMC. Looking up an element that is not stored in the AbstractDVec should return a zero, and setting a value to zero should remove it from the vector. To iterate over an AbstractDVec, use keys, pairs, or values. When possible, use reduction functions such as sum or mapreduce.
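For example, with the DVec type documented below, the zero-handling rules work as follows (an illustrative sketch; the values in the comments follow from the rules above and are not doctest output):
dv = DVec(:a => 1, :b => 2)
dv[:c]           # looking up a key that is not stored returns zero(Int), i.e. 0
dv[:a] = 0       # setting a stored value to zero removes the entry
length(dv)       # 1 entry (:b) remains
sum(values(dv))  # 2; reductions are performed via values, keys, or pairs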
Interface
The interface is similar to the AbstractDict interface, but with the changed behaviour as noted above.  Implement what would be needed for the AbstractDict interface (pairs, keys, values, setindex!, getindex, delete!, length, empty, empty!) and, in addition:
- StochasticStyle
- storage returns an AbstractDict storing the raw data with possibly different valtype than V.
- deposit!
A default implementation for the VectorInterface.jl interface is provided through the above functions.
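As an illustration only, a minimal Dict-backed implementation might look as follows. The type MyDVec and all method bodies are hypothetical and untested; the sketch hard-codes a deterministic style (assuming float-valued coefficients) instead of using default_style, and qualifies the interface functions with the modules they are documented under.
using Rimu

struct MyDVec{K,V} <: AbstractDVec{K,V}
    dict::Dict{K,V}
end

# AbstractDict-like part of the interface
Base.getindex(v::MyDVec{<:Any,V}, key) where {V} = get(v.dict, key, zero(V))
function Base.setindex!(v::MyDVec, val, key)
    if iszero(val)
        delete!(v.dict, key)   # storing a zero removes the entry
    else
        v.dict[key] = val
    end
    return val
end
Base.delete!(v::MyDVec, key) = (delete!(v.dict, key); v)
Base.pairs(v::MyDVec) = pairs(v.dict)
Base.keys(v::MyDVec) = keys(v.dict)
Base.values(v::MyDVec) = values(v.dict)
Base.length(v::MyDVec) = length(v.dict)
Base.empty(::MyDVec{K,V}) where {K,V} = MyDVec(Dict{K,V}())
Base.empty!(v::MyDVec) = (empty!(v.dict); v)

# AbstractDVec-specific part
Rimu.Interfaces.storage(v::MyDVec) = v.dict
Rimu.StochasticStyle(::MyDVec{<:Any,V}) where {V} = IsDeterministic{V}()
Rimu.Interfaces.deposit!(v::MyDVec, add, val, _parent) = (v[add] += val; v)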
See also DictVectors, Interfaces.
Concrete implementations
Rimu.DictVectors.DVec — Type
DVec{K,V,D<:AbstractDict{K,V},S}
Dictionary-based vector-like data structure for use with FCIQMC and KrylovKit. While mostly behaving like a Dict, it supports various linear algebra operations such as norm and dot. It has a StochasticStyle that is used to select an appropriate spawning strategy in the FCIQMC algorithm.
See also: AbstractDVec, InitiatorDVec, PDVec.
Constructors
- DVec(dict::AbstractDict[; style, capacity]): create a DVec with dict for storage. Note that the data may or may not be copied.
- DVec(args...[; style, capacity]): args... are passed to the Dict constructor. The Dict is used for storage.
- DVec{K,V}([; style, capacity]): create an empty DVec{K,V}.
- DVec(dv::AbstractDVec[; style, capacity]): create a DVec with the same contents as dv. The style is inherited from dv by default.
The default style is selected based on the DVec's valtype (see default_style). If a style is given and the valtype does not match the style's eltype, the values are converted to an appropriate type.
The capacity argument is optional and sets the initial size of the DVec via Base.sizehint!.
Examples
julia> dv = DVec(:a => 1)
DVec{Symbol,Int64} with 1 entry, style = IsStochasticInteger{Int64}()
  :a => 1
julia> dv = DVec(:a => 2, :b => 3; style=IsDeterministic())
DVec{Symbol,Float64} with 2 entries, style = IsDeterministic{Float64}()
  :a => 2.0
  :b => 3.0
Rimu.DictVectors.InitiatorDVec — Type
InitiatorDVec{K,V} <: AbstractDVec{K,V}
Dictionary-based vector-like data structure for use with lomc! and KrylovKit.jl. See AbstractDVec. Functionally identical to DVec, but contains InitiatorValues internally in order to facilitate initiator methods. Initiator methods for controlling the Monte Carlo sign problem were first introduced in J. Chem. Phys. 132, 041103 (2010). How the initiators are handled is controlled by specifying an InitiatorRule with the initiator keyword argument (see below).
See also: AbstractDVec, DVec, PDVec.
Constructors
- InitiatorDVec(dict::AbstractDict[; style, initiator, capacity]): create an InitiatorDVec with dict for storage. Note that the data may or may not be copied.
- InitiatorDVec(args...[; style, initiator, capacity]): args... are passed to the Dict constructor. The Dict is used for storage.
- InitiatorDVec{K,V}([; style, initiator, capacity]): create an empty InitiatorDVec{K,V}.
- InitiatorDVec(dv::AbstractDVec[; style, initiator, capacity]): create an InitiatorDVec with the same contents as dv. The style is inherited from dv by default.
Keyword arguments
- style: A valid StochasticStyle. The default is selected based on the InitiatorDVec's valtype (see default_style). If a style is given and the valtype does not match the style's eltype, the values are converted to an appropriate type.
- initiator = Initiator(1): A valid InitiatorRule. See Initiator.
- capacity: Indicative size as Int. Optional. Sets the initial size of the InitiatorDVec via Base.sizehint!.
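For illustration (the values and the threshold are arbitrary), an InitiatorDVec with an explicit initiator threshold and a deterministic style could be constructed as:
idv = InitiatorDVec(:a => 2.0, :b => 0.5; initiator=Initiator(1.0), style=IsDeterministic())
idv[:a]  # 2.0; lookups and linear algebra behave as for a DVec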
Rimu.DictVectors.PDVec — Type
PDVec{K,V}(; kwargs...)
PDVec(iter; kwargs...)
PDVec(pairs...; kwargs...)
Dictionary-based vector-like data structure for use with FCIQMC and KrylovKit.jl. While mostly behaving like a Dict, it supports various linear algebra operations such as norm and dot, and the interface defined in VectorInterface.
The P in PDVec stands for parallel. PDVecs perform mapreduce, foreach, and various linear algebra operations in a threaded manner. If MPI is available, these operations are automatically distributed as well. As such, it is not recommended to iterate over pairs, keys, or values directly unless the iteration is explicitly performed on the localpart of the vector.
See also: AbstractDVec, DVec, InitiatorDVec.
Keyword arguments
- style = default_style(V): A StochasticStyle that is used to select the spawning strategy in the FCIQMC algorithm.
- initiator = NonInitiator(): An InitiatorRule, used in FCIQMC to remove the sign problem.
- communicator: A Communicator that controls how operations are performed when using MPI. The defaults are NotDistributed when not using MPI and PointToPoint when using MPI.
Extended Help
Segmentation
The vector is split into Threads.nthreads() subdictionaries called segments. Which dictionary a key-value pair is mapped to is determined by the hash of the key. The purpose of this segmentation is to allow parallel processing - functions such as mapreduce, add! or dot (full list below) process each subdictionary on a separate thread.
Example
julia> add = FermiFS2C((1,1,0,0), (0,0,1,1));
julia> op = HubbardMom1D(add; t=4/π^2, u=4);
julia> pv = PDVec(add => 1.0)
1-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↑↓↓⟩" => 1.0
julia> pv = op * pv
7-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↓↑↓⟩" => 1.0
  fs"|↑↑↓↓⟩" => 4.0
  fs"|↓↑↓↑⟩" => 1.0
  fs"|↓↑↑↓⟩" => -1.0
  fs"|⇅⋅⋅⇅⟩" => 1.0
  fs"|↑↓↓↑⟩" => -1.0
  fs"|⋅⇅⇅⋅⟩" => 1.0
julia> map!(x -> -x, values(pv)); pv
7-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↓↑↓⟩" => -1.0
  fs"|↑↑↓↓⟩" => -4.0
  fs"|↓↑↓↑⟩" => -1.0
  fs"|↓↑↑↓⟩" => 1.0
  fs"|⇅⋅⋅⇅⟩" => -1.0
  fs"|↑↓↓↑⟩" => 1.0
  fs"|⋅⇅⇅⋅⟩" => -1.0
julia> dest = similar(pv)
0-element PDVec: style = IsDeterministic{Float64}()
julia> map!(x -> x + 2, dest, values(pv))
7-element PDVec: style = IsDeterministic{Float64}()
  fs"|↑↓↑↓⟩" => 1.0
  fs"|↑↑↓↓⟩" => -2.0
  fs"|↓↑↓↑⟩" => 1.0
  fs"|↓↑↑↓⟩" => 3.0
  fs"|⇅⋅⋅⇅⟩" => 1.0
  fs"|↑↓↓↑⟩" => 3.0
  fs"|⋅⇅⇅⋅⟩" => 1.0
julia> sum(values(pv))
-6.0
julia> dot(dest, pv)
10.0
julia> dot(dest, op, pv)
44.0
MPI
When MPI is active, all parallel reductions are automatically reduced across MPI ranks with a call to MPI.Allreduce.
In a distributed setting, PDVec does not support iteration without first making it explicit that the iteration is only to be performed on the local segments of the vector. This is done with localpart. In general, even when not using MPI, it is best practice to use localpart when explicit iteration is required.
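For example, an explicit loop over the locally stored pairs would be written as the following sketch (assuming pv is a PDVec as in the example above):
for (key, val) in pairs(localpart(pv))
    # process only the pairs stored on this MPI rank
end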
Use with KrylovKit
PDVec is compatible with eigsolve from KrylovKit.jl. When used, the diagonalisation is performed in a threaded and distributed manner. Using multiple MPI ranks with this method does not distribute the memory load effectively, but does result in significant speedups.
Example
julia> using KrylovKit
julia> add = BoseFS((0,0,5,0,0));
julia> op = HubbardMom1D(add; u=6.0);
julia> pv = PDVec(add => 1.0);
julia> results = eigsolve(op, pv, 4, :SR);
julia> results[1][1:4]
4-element Vector{Float64}:
 -3.4311156892322234
  1.1821748602612363
  3.7377753753082823
  6.996390417443125
Parallel functionality
The following functions are threaded MPI-compatible:
- From Base: mapreduce and derivatives (sum, prod, reduce...), all, any, map! (on values only), +, -, *
- From LinearAlgebra: rmul!, lmul!, mul!, axpy!, axpby!, dot, norm, normalize, normalize!
- The full interface defined in VectorInterface
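As an illustration (reusing pv and dest from the example above, and assuming LinearAlgebra is loaded), the following calls are all threaded and, under MPI, distributed:
using LinearAlgebra
nrm = norm(pv)           # 2-norm, reduced across all segments and ranks
overlap = dot(dest, pv)  # threaded/distributed dot product
axpy!(2.0, pv, dest)     # adds 2.0 * pv to dest, processed segment by segment
total = sum(values(pv))  # reduction over the stored values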
Interface functions
Rimu.Interfaces.deposit! — Function
deposit!(w::InitiatorDVec, add, val, p_add=>p_val)
Add val into w at address add as an AbstractInitiatorValue.
deposit!(w::AbstractDVec, add, val, parent::Pair)
Add val into w at address add, taking into account initiator rules if applicable. parent contains the address => value pair from which the pair add => val was created. InitiatorDVec can intercept this and add its own functionality.
Rimu.Interfaces.storage — Function
storage(dvec) -> AbstractDict
Return the raw storage associated with dvec as an AbstractDict. Used in MPI communication.
OrderedCollections.freeze — Function
freeze(dv)
Create a "frozen" version of dv which can no longer be modified or used in the conventional manner, but supports faster dot products.
If dv is an MPIData, synchronize its contents among the ranks first.
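A typical pattern (sketch; dv can be any AbstractDVec) is to freeze a vector that is repeatedly used on the left-hand side of dot products:
fdv = freeze(dv)
overlap = dot(fdv, dv)  # the frozen copy supports faster dot products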
Rimu.Interfaces.localpart — Function
localpart(dv) -> AbstractDVec
Get the part of dv that is located on this MPI rank. Returns dv itself for vectors that can't be MPI distributed (DVecs and InitiatorDVecs).
Rimu.Interfaces.apply_operator! — Function
apply_operator!(working_memory, target, source, operator, boost=1, compress=Val(true)) ->
    stat_names, stats, working_memory, target
Perform a single matrix(/operator)-vector multiplication:
\[v^{(n + 1)} = \hat{T} v^{(n)} ,\]
where $\hat{T}$ is the operator, $v^{(n+1)}$ is the target and $v^{(n)}$ is the source. The working_memory can be used as temporary storage.
The boost argument is passed to apply_column! and increases the number of spawns performed. For the operator to be applied without compressing the vector after, set compress to Val(false).
Whether the operation is performed in a stochastic, semistochastic, or deterministic way is controlled by the trait StochasticStyle(target). See StochasticStyle.
Returns the step stats generated by the StochasticStyle, the working memory and the target vector. target and working_memory may be mutated and/or swapped.
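A sketch of a single application, assuming a Hamiltonian ham and a vector v set up as in the examples above (interface functions are qualified here for clarity):
wm = Rimu.Interfaces.working_memory(v)
w = similar(v)
stat_names, stats, wm, w = Rimu.Interfaces.apply_operator!(wm, w, v, ham)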
Rimu.Interfaces.sort_into_targets! — Function
sort_into_targets!(target, source, stats) -> target, source, agg_stats
Aggregate coefficients from source to target and from stats to agg_stats according to thread- or MPI-level parallelism.
Returns the new target and source, as the sorting process may involve swapping them.
Rimu.Interfaces.working_memory — Function
working_memory(dv::AbstractDVec)
Create a working memory instance compatible with dv. The working memory must be compatible with sort_into_targets! and apply_operator!.
Supported operations
AbstractDVecs generally support most operations that are defined on Vectors and Dicts. This includes the interface from VectorInterface.jl, and many functions from the LinearAlgebra standard library.
A significant difference between AbstractDVecs, Vectors, and Dicts is that iteration on them is disabled by default. Iteration must be explicitly performed on keys, values, or pairs. However, it is highly recommended to use mapreduce, reduce, or similar functions when performing reductions, as that makes the operations compatible with MPI.
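For example, a reduction that remains MPI-compatible can be written as:
two_norm_squared = mapreduce(abs2, +, values(dv))  # sum of squared coefficients over stored entries
num_entries = length(dv)                           # number of stored (nonzero) entries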
In addition, Rimu defines the following function.
Rimu.DictVectors.walkernumber — Function
walkernumber(v)
Compute the number of walkers in v. It is used for updating the shift. Overload this function for modifying population control.
In most cases walkernumber(v) is identical to norm(v, 1). For AbstractDVecs with complex coefficients it reports the one norm separately for the real and the imaginary part as a ComplexF64. See Norm1ProjectorPPop.
walkernumber(md::MPIData)
Compute the walkernumber of the distributed data on every MPI rank with MPI.Allreduce. MPI synchronizing.
Rimu.DictVectors.dot_from_right — Function
dot_from_right(w, op::AbstractHamiltonian, v)
Internal function that evaluates the 3-argument dot() function in order from right to left.
Projectors
Rimu.DictVectors.AbstractProjector — Type
Abstract supertype for projectors to be used in lieu of DVecs or Vectors in dot products. Implemented subtypes include NormProjector, Norm2Projector, UniformProjector, and Norm1ProjectorPPop (documented below).
See also PostStepStrategy for use of projectors in lomc!.
Interface
Define a method for LinearAlgebra.dot(projector, v).
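As an illustration, a hypothetical projector (EntryCountProjector is not part of Rimu) that reports the number of stored entries when used on the left of dot() could be defined as:
using LinearAlgebra
using Rimu

struct EntryCountProjector <: Rimu.DictVectors.AbstractProjector end
LinearAlgebra.dot(::EntryCountProjector, v) = length(v)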
Rimu.DictVectors.NormProjector — Type
NormProjector() <: AbstractProjector
Results in computing the one-norm when used in dot(). E.g.
dot(NormProjector(),x)
-> norm(x,1)
NormProjector() thus represents the vector sign.(x).
See also PostStepStrategy, and AbstractProjector for use of projectors in lomc!.
Rimu.DictVectors.Norm2Projector — Type
Norm2Projector() <: AbstractProjector
Results in computing the two-norm when used in dot(). E.g.
dot(Norm2Projector(),x)
-> norm(x,2) # with type Float64
See also PostStepStrategy, and AbstractProjector for use of projectors in lomc!.
Rimu.DictVectors.UniformProjector — Type
UniformProjector() <: AbstractProjector
Represents a vector with all elements 1. To be used with dot(). Minimizes memory allocations.
UniformProjector()⋅v == sum(v)
dot(UniformProjector(), LO, v) == sum(LO*v)
See also PostStepStrategy, and AbstractProjector for use of projectors in lomc!.
Rimu.DictVectors.Norm1ProjectorPPop — Type
Norm1ProjectorPPop() <: AbstractProjector
Results in computing the one-norm per population when used in dot(). E.g.
dot(Norm1ProjectorPPop(),x)
-> norm(real.(x),1) + im*norm(imag.(x),1)
See also PostStepStrategy, and AbstractProjector for use of projectors in lomc!.
Rimu.DictVectors.FrozenDVec — Type
FrozenDVec
A frozen DVec can no longer be modified or used in the conventional manner, but supports faster dot products. See: freeze.
Rimu.DictVectors.FrozenPDVec — Type
FrozenPDVec
Parallel version of FrozenDVec. See: freeze, PDVec.
Initiator rules
Rimu.DictVectors.InitiatorRule — Type
InitiatorRule{V}
Abstract type for defining initiator rules for InitiatorDVec. Concrete implementations: Initiator, SimpleInitiator, CoherentInitiator, and NonInitiator (documented below).
Extended Help
InitiatorRules define how to store and retrieve data from associated AbstractInitiatorValues. When defining a new InitiatorRule, also define the following: initiator_valtype, from_initiator_value, and to_initiator_value (see below).
Rimu.DictVectors.AbstractInitiatorValue — Type
abstract type AbstractInitiatorValue{V}
A value equipped with additional information that enables a variation of the initiator approximation. To be used with PDVec, InitiatorDVec and InitiatorRules.
Must define:
- Base.zero, Base.:+, Base.:-, Base.:*
- from_initiator_value and to_initiator_value
Rimu.DictVectors.InitiatorValue — Type
InitiatorValue{V}(; safe::V, unsafe::V, initiator::V) where V
Composite "walker" with three fields. For use with InitiatorDVecs.
Rimu.DictVectors.initiator_valtype — Function
initiator_valtype(rule::InitiatorRule, T)
Return the AbstractInitiatorValue{T} that is employed by the rule.
Rimu.DictVectors.to_initiator_value — Function
to_initiator_value(::InitiatorRule, k::K, v::V, parent)
Convert v to an AbstractInitiatorValue, taking the initiator rule and the parent that spawned it into account.
Rimu.DictVectors.from_initiator_value — Function
from_initiator_value(i::InitiatorRule, v::AbstractInitiatorValue)
Convert the AbstractInitiatorValue v into a scalar value according to the InitiatorRule i.
Rimu.DictVectors.Initiator — Type
Initiator(threshold = 1.0) <: InitiatorRule
Initiator rule to be passed to PDVec or InitiatorDVec. An initiator is a configuration add with a coefficient of magnitude abs(v[add]) > threshold. The threshold can be passed as a keyword argument. Rules:
- Initiators can spawn anywhere.
- Non-initiators can spawn to initiators.
See InitiatorRule.
Rimu.DictVectors.SimpleInitiator — Type
SimpleInitiator(threshold = 1.0) <: InitiatorRule
Initiator rule to be passed to PDVec or InitiatorDVec. An initiator is a configuration add with a coefficient of magnitude abs(v[add]) > threshold. The threshold can be passed as a keyword argument. Rules:
- Initiators can spawn anywhere.
- Non-initiators cannot spawn.
See InitiatorRule.
Rimu.DictVectors.CoherentInitiator — Type
CoherentInitiator(threshold = 1.0) <: InitiatorRule
Initiator rule to be passed to PDVec or InitiatorDVec. An initiator is a configuration add with a coefficient of magnitude abs(v[add]) > threshold. The threshold can be passed as a keyword argument. Rules:
- Initiators can spawn anywhere.
- Non-initiators can spawn to initiators.
- Multiple non-initiators can spawn to a single non-initiator if their contributions add up to a value greater than the initiator threshold.
See InitiatorRule.
Rimu.DictVectors.NonInitiator — Type
NonInitiator() <: InitiatorRule
Initiator rule that disables the approximation. This is the default setting for PDVec.
See InitiatorRule.
Rimu.DictVectors.NonInitiatorValue — Type
NonInitiatorValue{V}
Value that does not contain any additional information; used with NonInitiator, the default initiator rule for PDVec.
PDVec internals
Working memory
Rimu.DictVectors.MainSegmentIterator — Type
MainSegmentIterator{W,D} <: AbstractVector{D}
Iterates the main segments of a specified rank. See remote_segments and local_segments.
Rimu.DictVectors.PDWorkingMemory — Type
PDWorkingMemory(t::PDVec)
The working memory that handles threading and MPI distribution for operations that involve operators, such as FCIQMC propagation, operator-vector multiplication and three-way dot products with PDVecs.
The working memory is structured in a series of columns, where each has a number of segments (see PDVec) equal to the number of segments across all MPI ranks. The purpose of this organisation is to allow spawning in parallel without using locks or atomic operations.
The steps performed on a PDWorkingMemory during a typical operation are perform_spawns!, collect_local!, synchronize_remote!, and move_and_compress!.
When used with three-argument dot products, a full copy of the left-hand side vector is materialized in the first column of the working memory on all ranks.
Rimu.DictVectors.PDWorkingMemoryColumn — Type
PDWorkingMemoryColumn
A column in PDWorkingMemory. Supports deposit! and StochasticStyle and acts as a target for spawning.
Rimu.DictVectors.collect_local! — Method
collect_local!(w::PDWorkingMemory)
Collect each row in w into its main segment. This step must be performed before using local_segments or remote_segments to move the values elsewhere.
See PDWorkingMemory.
Rimu.DictVectors.local_segments — Method
local_segments(w::PDWorkingMemory)
Returns an iterator over the main segments on the current rank. Iterates Dicts.
See PDWorkingMemory.
Rimu.DictVectors.main_column — Method
main_column(::PDWorkingMemory) -> PDVec
Return the "main" column of the working memory wrapped in a PDVec.
See PDWorkingMemory.
Rimu.DictVectors.move_and_compress! — Method
move_and_compress!(dst::PDVec, src::PDWorkingMemory)
move_and_compress!(::CompressionStrategy, dst::PDVec, src::PDWorkingMemory)
Move the values in src to dst, compressing them according to the CompressionStrategy on the way. This step can only be performed after collect_local! and synchronize_remote!.
See PDWorkingMemory.
Rimu.DictVectors.num_columns — Method
num_columns(w::PDWorkingMemory) -> Int
Number of columns in the working memory. The number of columns is equal to the number of segments in the local rank.
Rimu.DictVectors.num_rows — Method
num_rows(w::PDWorkingMemory) -> Int
Number of rows in the working memory. The number of rows is equal to the number of segments across all ranks.
Rimu.DictVectors.perform_spawns! — Method
perform_spawns!(w::PDWorkingMemory, t::PDVec, ham, boost)
Perform spawns from t through ham to w.
See PDVec and PDWorkingMemory.
Rimu.DictVectors.remote_segments — Method
remote_segments(w::PDWorkingMemory, rank_id)
Returns an iterator over the main segments that belong to rank rank_id. Iterates Dicts.
See PDWorkingMemory.
Rimu.DictVectors.synchronize_remote! — Method
synchronize_remote!(w::PDWorkingMemory)
Synchronize non-local segments across MPI. Controlled by the Communicator. This can only be performed after collect_local!.
See PDWorkingMemory.
Communicators
Rimu.DictVectors.Communicator — Type
abstract type Communicator
Communicators are used to handle MPI communication when using PDVecs. Currently, two implementations are provided: NotDistributed and PointToPoint. The communicator is picked automatically according to the number of MPI ranks available.
When implementing a communicator, use local_segments and remote_segments.
Interface
Optional interface
- is_distributed: defaults to returning true.
- merge_remote_reductions: defaults to using MPI.Allreduce.
- total_num_segments: defaults to n * mpi_size.
- target_segment: defaults to using fastrange to pick the segment.
Rimu.DictVectors.LocalPart — Type
LocalPart <: Communicator
When localpart is used, the vector's Communicator is replaced with this. This allows iteration and local reductions.
Rimu.DictVectors.NotDistributed — Type
NotDistributed <: Communicator
This Communicator is used when MPI is not available.
Rimu.DictVectors.PointToPoint — Type
PointToPoint <: Communicator
Communicator that uses circular communication using MPI.Isend and MPI.Recv!.
Rimu.DictVectors.SegmentedBuffer — Type
SegmentedBuffer
Multiple vectors stored in a simple buffer with MPI communication.
Rimu.DictVectors.copy_to_local! — Function
copy_to_local!([::Communicator,] w::PDWorkingMemory, t::PDVec) -> PDVec
Copy pairs in t from all ranks and return them as a (possibly) new PDVec, possibly using the PDWorkingMemory as temporary storage.
Rimu.DictVectors.is_distributed — Method
is_distributed(::Communicator)
Return true if the communicator operates over MPI.
Rimu.DictVectors.merge_remote_reductions — Method
merge_remote_reductions(c::Communicator, op, x)
Merge the results of reductions over MPI. By default, it uses MPI.Allreduce.
Rimu.DictVectors.mpi_comm — Function
mpi_comm(::Communicator) -> MPI.Comm
Return the MPI.Comm that the Communicator operates on.
Rimu.DictVectors.mpi_rank — Function
mpi_rank(::Communicator) -> Int
Return the MPI rank of the Communicator.
Rimu.DictVectors.mpi_recv_any! — Method
mpi_recv_any!(buf::SegmentedBuffer, comm) -> Int
Find a source that is ready to send a buffer and receive from it. Return the rank ID of the sender.
Rimu.DictVectors.mpi_send — Method
mpi_send(buf::SegmentedBuffer, dest, comm)
Send the buffers to dest.
Rimu.DictVectors.mpi_size — Function
mpi_size(::Communicator) -> Int
Return the total number of MPI ranks in the Communicator.
Rimu.DictVectors.replace_collections! — Method
replace_collections!(buf::SegmentedBuffer, iters)
Insert collections in iters into buffers.
Rimu.DictVectors.synchronize_remote! — Function
synchronize_remote!([::Communicator,] ::PDWorkingMemory)
Copy pairs from remote ranks to the local part of the PDWorkingMemory.
Rimu.DictVectors.target_segment — Method
target_segment(c::Communicator, k, num_segments) -> target, is_local
This function is used to determine where in the PDVec a key should be stored.
If the key is local (stored on the same MPI rank), return its segment index and true. If the key is non-local, return any value and false.
Rimu.DictVectors.total_num_segments — Method
total_num_segments(c::Communicator, n) -> Int
Return the total number of segments, including the remote ones, where n is the number of local segments.
Index
- Rimu.DictVectors
- Rimu.DictVectors.AbstractInitiatorValue
- Rimu.DictVectors.AbstractProjector
- Rimu.DictVectors.CoherentInitiator
- Rimu.DictVectors.Communicator
- Rimu.DictVectors.DVec
- Rimu.DictVectors.FrozenDVec
- Rimu.DictVectors.FrozenPDVec
- Rimu.DictVectors.Initiator
- Rimu.DictVectors.InitiatorDVec
- Rimu.DictVectors.InitiatorRule
- Rimu.DictVectors.InitiatorValue
- Rimu.DictVectors.LocalPart
- Rimu.DictVectors.MainSegmentIterator
- Rimu.DictVectors.NonInitiator
- Rimu.DictVectors.NonInitiatorValue
- Rimu.DictVectors.Norm1ProjectorPPop
- Rimu.DictVectors.Norm2Projector
- Rimu.DictVectors.NormProjector
- Rimu.DictVectors.NotDistributed
- Rimu.DictVectors.PDVec
- Rimu.DictVectors.PDWorkingMemory
- Rimu.DictVectors.PDWorkingMemoryColumn
- Rimu.DictVectors.PointToPoint
- Rimu.DictVectors.SegmentedBuffer
- Rimu.DictVectors.SimpleInitiator
- Rimu.DictVectors.UniformProjector
- Rimu.Interfaces.AbstractDVec
- OrderedCollections.freeze
- Rimu.DictVectors.collect_local!
- Rimu.DictVectors.copy_to_local!
- Rimu.DictVectors.dot_from_right
- Rimu.DictVectors.from_initiator_value
- Rimu.DictVectors.initiator_valtype
- Rimu.DictVectors.is_distributed
- Rimu.DictVectors.local_segments
- Rimu.DictVectors.main_column
- Rimu.DictVectors.merge_remote_reductions
- Rimu.DictVectors.move_and_compress!
- Rimu.DictVectors.mpi_comm
- Rimu.DictVectors.mpi_rank
- Rimu.DictVectors.mpi_recv_any!
- Rimu.DictVectors.mpi_send
- Rimu.DictVectors.mpi_size
- Rimu.DictVectors.num_columns
- Rimu.DictVectors.num_rows
- Rimu.DictVectors.perform_spawns!
- Rimu.DictVectors.remote_segments
- Rimu.DictVectors.replace_collections!
- Rimu.DictVectors.synchronize_remote!
- Rimu.DictVectors.synchronize_remote!
- Rimu.DictVectors.target_segment
- Rimu.DictVectors.to_initiator_value
- Rimu.DictVectors.total_num_segments
- Rimu.DictVectors.walkernumber
- Rimu.Interfaces.apply_operator!
- Rimu.Interfaces.deposit!
- Rimu.Interfaces.localpart
- Rimu.Interfaces.sort_into_targets!
- Rimu.Interfaces.storage
- Rimu.Interfaces.working_memory