Example 4: Exact diagonalisation
When working with smaller systems, or when multiple eigenvalues of a system are required, one can use an exact diagonalisation method. There are a few ways to go about this, each with its pros and cons. The purpose of this tutorial is to showcase these methods and provide a few tips for using them.
A runnable script for this example is located here. Run it with julia exact-example.jl.
We start by loading Rimu.
using Rimu
Introduction
We will look at a bosonic system of 4 particles in 5 sites, formulated in momentum space. Let's start by building the Hamiltonian. To create a Fock state where all particles have zero momentum, we put all the particles in the mode at the centre of the address.
M = 5
N = 4
add = BoseFS(M, cld(M, 2) => N)
ham = HubbardMom1D(add)
HubbardMom1D(BoseFS{4,5}(0, 0, 4, 0, 0); u=1.0, t=1.0)
Before performing exact diagonalisation, it is a good idea to check the dimension of the Hamiltonian.
dimension(ham)
70
Keep in mind that this is an estimate of the number of Fock states the Hamiltonian can act on, not the actual matrix size; the matrix can sometimes be smaller. The estimate is still a useful guide for deciding whether a Hamiltonian is amenable to exact diagonalisation and which algorithm would be best suited to diagonalising it.
The BasisSetRep
As we'll see later, there are two ways to construct the matrices from Hamiltonians directly, but they both use BasisSetRep under the hood. The BasisSetRep, when called with a Hamiltonian and optionally a starting address, constructs the sparse matrix of the system and its basis. The starting address defaults to the one that was used to initialize the Hamiltonian. BasisSetRep only returns the part of the matrix that is accessible from this starting address through non-zero offdiagonal elements.
bsr = BasisSetRep(ham);
To access the matrix or the basis, use the sm and basis fields, respectively.
bsr.sm
14×14 SparseArrays.SparseMatrixCSC{Float64, Int64} with 104 stored entries:
-6.8 0.69282 0.69282 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.69282 -3.03607 0.4 0.8 0.4 0.8 0.4 0.565685 0.282843 ⋅ ⋅ ⋅ ⋅ ⋅
0.69282 0.4 1.43607 0.4 0.8 0.4 0.8 ⋅ 0.282843 0.565685 ⋅ ⋅ ⋅ ⋅
⋅ 0.8 0.4 0.581966 0.4 ⋅ 0.4 0.282843 0.565685 ⋅ 0.69282 0.69282 ⋅ ⋅
⋅ 0.4 0.8 0.4 2.81803 0.4 ⋅ ⋅ 0.565685 0.282843 0.69282 ⋅ 0.69282 ⋅
⋅ 0.8 0.4 ⋅ 0.4 0.581966 0.4 0.282843 0.565685 ⋅ ⋅ ⋅ 0.69282 0.69282
⋅ 0.4 0.8 0.4 ⋅ 0.4 2.81803 ⋅ 0.565685 0.282843 ⋅ 0.69282 ⋅ 0.69282
⋅ 0.565685 ⋅ 0.282843 ⋅ 0.282843 ⋅ -0.472136 0.8 ⋅ 0.489898 ⋅ ⋅ 0.489898
⋅ 0.282843 0.282843 0.565685 0.565685 0.565685 0.565685 0.8 4.4 0.8 0.489898 0.489898 0.489898 0.489898
⋅ ⋅ 0.565685 ⋅ 0.282843 ⋅ 0.282843 ⋅ 0.8 8.47214 ⋅ 0.489898 0.489898 ⋅
⋅ ⋅ ⋅ 0.69282 0.69282 ⋅ ⋅ 0.489898 0.489898 ⋅ 1.56393 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 0.69282 ⋅ ⋅ 0.69282 ⋅ 0.489898 0.489898 ⋅ 6.03607 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 0.69282 0.69282 ⋅ ⋅ 0.489898 0.489898 ⋅ ⋅ 6.03607 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.69282 0.69282 0.489898 0.489898 ⋅ ⋅ ⋅ ⋅ 1.56393
bsr.basis
14-element Vector{BoseFS{4, 5, BitString{8, 1, UInt8}}}:
fs"|0 0 4 0 0⟩"
fs"|0 1 2 1 0⟩"
fs"|1 0 2 0 1⟩"
fs"|1 0 1 2 0⟩"
fs"|0 0 1 1 2⟩"
fs"|0 2 1 0 1⟩"
fs"|2 1 1 0 0⟩"
fs"|0 2 0 2 0⟩"
fs"|1 1 0 1 1⟩"
fs"|2 0 0 0 2⟩"
fs"|0 0 0 3 1⟩"
fs"|3 0 0 1 0⟩"
fs"|0 1 0 0 3⟩"
fs"|1 3 0 0 0⟩"
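As a sanity check, we can compare the estimate from dimension with the size of the basis that was actually built. A minimal sketch (the numbers quoted are the ones obtained above):

```julia
using Rimu

M, N = 5, 4
ham = HubbardMom1D(BoseFS(M, cld(M, 2) => N))
bsr = BasisSetRep(ham)

# `dimension` is an upper-bound estimate over all Fock states;
# the block reachable from the starting address can be smaller.
dimension(ham)     # 70 (estimate)
length(bsr.basis)  # 14 (states actually reachable)
```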
When the basis is not needed, we can use Matrix or sparse directly.
Matrix(ham)
14×14 Matrix{Float64}:
-6.8 0.69282 0.69282 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.69282 -3.03607 0.4 0.8 0.4 0.8 0.4 0.565685 0.282843 0.0 0.0 0.0 0.0 0.0
0.69282 0.4 1.43607 0.4 0.8 0.4 0.8 0.0 0.282843 0.565685 0.0 0.0 0.0 0.0
0.0 0.8 0.4 0.581966 0.4 0.0 0.4 0.282843 0.565685 0.0 0.69282 0.69282 0.0 0.0
0.0 0.4 0.8 0.4 2.81803 0.4 0.0 0.0 0.565685 0.282843 0.69282 0.0 0.69282 0.0
0.0 0.8 0.4 0.0 0.4 0.581966 0.4 0.282843 0.565685 0.0 0.0 0.0 0.69282 0.69282
0.0 0.4 0.8 0.4 0.0 0.4 2.81803 0.0 0.565685 0.282843 0.0 0.69282 0.0 0.69282
0.0 0.565685 0.0 0.282843 0.0 0.282843 0.0 -0.472136 0.8 0.0 0.489898 0.0 0.0 0.489898
0.0 0.282843 0.282843 0.565685 0.565685 0.565685 0.565685 0.8 4.4 0.8 0.489898 0.489898 0.489898 0.489898
0.0 0.0 0.565685 0.0 0.282843 0.0 0.282843 0.0 0.8 8.47214 0.0 0.489898 0.489898 0.0
0.0 0.0 0.0 0.69282 0.69282 0.0 0.0 0.489898 0.489898 0.0 1.56393 0.0 0.0 0.0
0.0 0.0 0.0 0.69282 0.0 0.0 0.69282 0.0 0.489898 0.489898 0.0 6.03607 0.0 0.0
0.0 0.0 0.0 0.0 0.69282 0.69282 0.0 0.0 0.489898 0.489898 0.0 0.0 6.03607 0.0
0.0 0.0 0.0 0.0 0.0 0.69282 0.69282 0.489898 0.489898 0.0 0.0 0.0 0.0 1.56393
sparse(ham)
14×14 SparseArrays.SparseMatrixCSC{Float64, Int64} with 104 stored entries:
-6.8 0.69282 0.69282 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.69282 -3.03607 0.4 0.8 0.4 0.8 0.4 0.565685 0.282843 ⋅ ⋅ ⋅ ⋅ ⋅
0.69282 0.4 1.43607 0.4 0.8 0.4 0.8 ⋅ 0.282843 0.565685 ⋅ ⋅ ⋅ ⋅
⋅ 0.8 0.4 0.581966 0.4 ⋅ 0.4 0.282843 0.565685 ⋅ 0.69282 0.69282 ⋅ ⋅
⋅ 0.4 0.8 0.4 2.81803 0.4 ⋅ ⋅ 0.565685 0.282843 0.69282 ⋅ 0.69282 ⋅
⋅ 0.8 0.4 ⋅ 0.4 0.581966 0.4 0.282843 0.565685 ⋅ ⋅ ⋅ 0.69282 0.69282
⋅ 0.4 0.8 0.4 ⋅ 0.4 2.81803 ⋅ 0.565685 0.282843 ⋅ 0.69282 ⋅ 0.69282
⋅ 0.565685 ⋅ 0.282843 ⋅ 0.282843 ⋅ -0.472136 0.8 ⋅ 0.489898 ⋅ ⋅ 0.489898
⋅ 0.282843 0.282843 0.565685 0.565685 0.565685 0.565685 0.8 4.4 0.8 0.489898 0.489898 0.489898 0.489898
⋅ ⋅ 0.565685 ⋅ 0.282843 ⋅ 0.282843 ⋅ 0.8 8.47214 ⋅ 0.489898 0.489898 ⋅
⋅ ⋅ ⋅ 0.69282 0.69282 ⋅ ⋅ 0.489898 0.489898 ⋅ 1.56393 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 0.69282 ⋅ ⋅ 0.69282 ⋅ 0.489898 0.489898 ⋅ 6.03607 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 0.69282 0.69282 ⋅ ⋅ 0.489898 0.489898 ⋅ ⋅ 6.03607 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.69282 0.69282 0.489898 0.489898 ⋅ ⋅ ⋅ ⋅ 1.56393
Computing eigenvalues
Now that we have a way of constructing matrices from Hamiltonians, we can use standard Julia functionality to diagonalise them.
The built-in method
Let's begin by looking at the eigen, eigvecs, and eigvals functions from the LinearAlgebra standard library. They operate on dense matrices and return the full spectrum, hence they are only useful for small systems or when all eigenvalues are required.
using LinearAlgebra
mat = Matrix(ham)
eig = eigen(mat);
The values can be accessed like so:
eig.values
14-element Vector{Float64}:
-6.9798639983216155
-3.363124291613371
-0.7590191922770728
0.1358418221962303
0.15789998694609153
0.8767114411781431
1.5305929970973349
1.583573261186749
3.072870330325868
3.1256726539518525
4.862107221562181
6.260694850380592
6.402671211183115
9.093371706203957
The vectors are stored as columns in eig.vectors:
eig.vectors
14×14 Matrix{Float64}:
-0.980348 0.175378 0.0135766 -5.46153e-15 -0.0221221 -0.0697193 3.30753e-15 -0.0314466 -3.25591e-16 -0.0360987 -0.0161557 -1.08119e-16 -0.00625248 -0.0058099
0.177701 0.932229 0.105473 5.97476e-14 0.225254 -0.132826 -3.44233e-17 0.00292026 -2.22083e-15 -0.0861158 -0.0907789 -2.63831e-16 -0.0591715 -0.0264275
0.0768085 -0.0622307 0.0129069 -1.1734e-13 -0.447424 -0.63969 4.40954e-14 -0.383444 -7.21042e-15 -0.431051 -0.181167 -5.40614e-16 -0.0599783 -0.106852
-0.0214153 -0.175119 -0.20169 0.616673 0.522017 -0.296818 -0.31234 0.122119 -0.123629 -0.106455 -0.126661 0.0829132 -0.148347 -0.0574235
-0.0119687 -0.0373038 0.0678797 0.0693699 0.0495446 0.416847 0.33773 -0.0584554 -0.601232 -0.467179 -0.215841 -0.140166 -0.190543 -0.114342
-0.0214153 -0.175119 -0.20169 -0.616673 0.522017 -0.296818 0.31234 0.122119 0.123629 -0.106455 -0.126661 -0.0829132 -0.148347 -0.0574235
-0.0119687 -0.0373038 0.0678797 -0.0693699 0.0495446 0.416847 -0.33773 -0.0584554 0.601232 -0.467179 -0.215841 0.140166 -0.190543 -0.114342
-0.0138439 -0.165902 0.922758 2.5354e-14 0.1111 -0.172508 -2.60229e-14 0.207974 2.57978e-15 0.0848902 -0.149153 -5.57509e-16 -0.0712921 -0.0301968
-0.00234782 0.00840544 -0.098969 -2.14419e-14 -0.0847116 0.082999 3.13063e-14 -0.274424 5.10554e-15 0.575786 -0.61385 -1.11262e-15 -0.353775 -0.259338
-0.00237613 0.00294196 0.00143189 1.10167e-14 0.0427486 0.0110863 -7.64953e-15 0.0663108 1.04296e-15 0.00167015 0.0832916 1.47632e-15 0.448519 -0.8863
0.00363555 0.0455298 -0.133824 -0.332825 -0.290831 -0.057199 -0.527627 0.588225 -0.332816 -0.0472302 -0.18528 -0.00844536 -0.0915588 -0.0346434
0.00195478 0.0150664 0.0206752 -0.0642658 -0.0638689 -0.0250518 0.0999631 0.012992 -0.111668 0.0393523 0.423534 0.688046 -0.513839 -0.222499
0.00195478 0.0150664 0.0206752 0.0642658 -0.0638689 -0.0250518 -0.0999631 0.012992 0.111668 0.0393523 0.423534 -0.688046 -0.513839 -0.222499
0.00363555 0.0455298 -0.133824 0.332825 -0.290831 -0.057199 0.527627 0.588225 0.332816 -0.0472302 -0.18528 0.00844536 -0.0915588 -0.0346434
If you need the full spectrum but would like to use less memory, consider using the in-place eigen!.
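For instance, a sketch of the in-place variant (note that eigen! overwrites the input matrix with intermediate results, so the matrix cannot be reused afterwards):

```julia
using LinearAlgebra, Rimu

ham = HubbardMom1D(BoseFS(5, 3 => 4))
mat = Matrix(ham)

# eigen! factorises in place, destroying the contents of `mat`
# but avoiding an internal copy of the matrix.
eig = eigen!(mat)
eig.values[1]  # ground-state energy (≈ -6.98 for this system)
```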
Iterative sparse solvers
For larger Hamiltonians, it is better to use an iterative solver. There are several options; we will look at eigs from Arpack.jl and eigsolve from KrylovKit.jl.
Let's start with Arpack's eigs. It is important to set the nev and which keyword arguments: nev sets the number of eigenpairs to find, while which should in most cases be set to :SR, which will find the eigenvalues with the smallest real part.
using Arpack
num_eigvals = 3
sm = sparse(ham)
vals_ar, vecs_ar = eigs(sm; which=:SR, nev=num_eigvals)
vals_ar
3-element Vector{Float64}:
-6.979863998321619
-3.363124291613358
-0.759019192277075
Using KrylovKit's eigsolve is similar, but nev and which are given as positional arguments. Note that KrylovKit may sometimes return more than nev eigenpairs if it happens to find them.
using KrylovKit
vals_kk, vecs_kk = eigsolve(sm, num_eigvals, :SR)
vals_kk
14-element Vector{Float64}:
-6.979863998321598
-3.3631242916133637
-0.7590191922770888
0.1358418221962161
0.1578999869460862
0.8767114411781414
1.530592997097334
1.583573261186733
3.0728703303258573
3.1256726539518374
4.862107221562171
6.260694850380592
6.402671211183108
9.093371706203955
Both solvers use variants of the Lanczos algorithm for Hermitian matrices and the Arnoldi algorithm for non-Hermitian ones. These may in some cases miss degenerate eigenpairs.
If diagonalisation takes too long, you can loosen the tolerance by setting the tol keyword argument of eigs or eigsolve. Using drastically looser tolerances than the default can still produce good results in practice; this should, however, be checked on a case-by-case basis.
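A sketch of passing a looser tolerance to either solver (the value 1e-6 is an arbitrary illustration; the appropriate tolerance depends on the accuracy you need):

```julia
using Arpack, KrylovKit, Rimu

sm = sparse(HubbardMom1D(BoseFS(5, 3 => 4)))

# Arpack: loosen the convergence tolerance via `tol`
vals_ar, vecs_ar = eigs(sm; which=:SR, nev=3, tol=1e-6)

# KrylovKit: same idea, also via the `tol` keyword
vals_kk, vecs_kk, info = eigsolve(sm, 3, :SR; tol=1e-6)
```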
The matrix-free method
KrylovKit's eigsolve function is implemented in a way that does not require the linear operator and vector to be Julia arrays. Rimu leverages this functionality, which allows diagonalising Hamiltonians without ever constructing the matrix: all matrix elements are generated on the fly. While this method is by far the slowest of the ones discussed, it also uses drastically less memory, which allows us to diagonalise much larger Hamiltonians.
To use this method, you first need a starting DVec:
dvec = DVec(add => 1.0)
DVec{BoseFS{4, 5, BitString{8, 1, UInt8}},Float64} with 1 entry, style = IsDeterministic{Float64}()
fs"|0 0 4 0 0⟩" => 1.0
Then, pass that vector and the Hamiltonian to eigsolve. Since the function has no way of knowing that the Hamiltonian is Hermitian, we have to provide that information through the issymmetric or ishermitian keyword arguments. Make sure to only pass these arguments when the Hamiltonian actually has the corresponding structure. To check, inspect the LOStructure trait:
LOStructure(ham)
IsHermitian()
vals_mf, vecs_mf = eigsolve(ham, dvec, num_eigvals, :SR; issymmetric=true)
vals_mf
10-element Vector{Float64}:
-6.9798639983216155
-3.3631242916133406
-0.7590191922770728
0.15789998694608443
0.8767114411781503
1.5835732611867428
3.125672653951841
4.862107221562172
6.402671211183111
9.093371706203953
Keep in mind that if an eigenvector is orthogonal to dvec, KrylovKit will miss it. Consider the following example:
eigsolve(ham, vecs_mf[2], num_eigvals, :SR, issymmetric=true)[1]
1-element Vector{Float64}:
-3.363124291613361
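One way to reduce the risk of missing eigenpairs (a sketch, reusing the DVec-from-basis construction shown later in this tutorial) is to start from a vector with nonzero weight on every basis state, for example with random coefficients:

```julia
using Rimu, KrylovKit

ham = HubbardMom1D(BoseFS(5, 3 => 4))
bsr = BasisSetRep(ham)

# A random starting vector is almost surely not orthogonal to any
# eigenvector, so no part of the spectrum is systematically missed.
dvec_rand = DVec(zip(bsr.basis, rand(length(bsr.basis))))
vals, vecs = eigsolve(ham, dvec_rand, 3, :SR; issymmetric=true)
```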
Reducing matrix size with symmetries
As these matrices tend to get large quickly, memory is usually the bottleneck. There are currently two methods implemented to reduce the matrix size, ParitySymmetry and TimeReversalSymmetry. These symmetries work by performing a unitary transformation on the Hamiltonian which makes it block-diagonal. When building a matrix from a block-diagonal Hamiltonian, only the block that contains the starting address is constructed.
You should only use these where the relevant symmetries actually apply; no checks are performed to make sure they do. There is also currently no way of using both at the same time. Please consult the documentation for a more in-depth description of these options.
The Hamiltonian presented in this example is compatible with ParitySymmetry. Let's see how the matrix size is reduced when applying it.
size(sparse(ham))
(14, 14)
size(sparse(ParitySymmetry(ham)))
(10, 10)
In this small example, the size reduction is modest, but for larger systems, you can expect to reduce the dimension of the matrix by about half.
all_eigs = eigvals(Matrix(ham))
even_eigs = eigvals(Matrix(ParitySymmetry(ham)))
10-element Vector{Float64}:
-6.97986399832162
-3.363124291613361
-0.7590191922770769
0.15789998694608018
0.8767114411781443
1.5835732611867421
3.125672653951844
4.862107221562177
6.402671211183111
9.093371706203955
The eigenvalues of the transformed Hamiltonian are a subset of the full spectrum. To get the other half, we can pass the even=false keyword argument to ParitySymmetry. When doing that, we need to make sure the starting address of the Hamiltonian is not symmetric under reversal:
add_odd = BoseFS(M, cld(M, 2) => N - 3, cld(M, 2) - 1 => 2, cld(M, 2) + 2 => 1)
BoseFS{4,5}(0, 2, 1, 0, 1)
odd_eigs = eigvals(Matrix(ParitySymmetry(HubbardMom1D(add_odd); even=false)))
4-element Vector{Float64}:
0.13584182219621782
1.5305929970973278
3.0728703303258604
6.260694850380591
Now, let's check that combining the two sets of eigenvalues indeed recovers the whole spectrum.
sort([even_eigs; odd_eigs]) ≈ all_eigs
true
Computing observables
Since building a matrix from an operator only builds the part that is reachable from the starting address, we need to use a different approach when computing observables.
To demonstrate this, we will use the DensityMatrixDiagonal operator, which in this case gives the momentum density. The idea is to construct a DVec from the computed eigenvector and use it directly with the operator.
dvec = DVec(zip(bsr.basis, eigvecs(Matrix(ham))[:, 1]))
DVec{BoseFS{4, 5, BitString{8, 1, UInt8}},Float64} with 14 entries, style = IsDeterministic{Float64}()
fs"|1 3 0 0 0⟩" => 0.00363555
fs"|1 0 1 2 0⟩" => -0.0214153
fs"|2 0 0 0 2⟩" => -0.00237613
fs"|0 0 4 0 0⟩" => -0.980348
fs"|0 1 2 1 0⟩" => 0.177701
fs"|0 0 1 1 2⟩" => -0.0119687
fs"|3 0 0 1 0⟩" => 0.00195478
fs"|0 0 0 3 1⟩" => 0.00363555
fs"|0 2 0 2 0⟩" => -0.0138439
fs"|2 1 1 0 0⟩" => -0.0119687
fs"|1 1 0 1 1⟩" => -0.00234782
fs"|0 1 0 0 3⟩" => 0.00195478
fs"|1 0 2 0 1⟩" => 0.0768085
fs"|0 2 1 0 1⟩" => -0.0214153
The eigenvectors these methods produce are normalized, hence we can use the three-argument dot to compute expectation values of observables. Here we are computing the single-particle momentum density distribution, which is just the diagonal of the single-particle density matrix in momentum space:
[dot(dvec, DensityMatrixDiagonal(i), dvec) for i in 1:M]
5-element Vector{Float64}:
0.006686138945087796
0.03307039977204168
3.920486922565741
0.03307039977204166
0.006686138945087849
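As a quick consistency check, the diagonal of the one-body density matrix should sum to the total particle number N, up to numerical error (a sketch, repeating the construction of dvec from above):

```julia
using Rimu, LinearAlgebra

M, N = 5, 4
ham = HubbardMom1D(BoseFS(M, cld(M, 2) => N))
bsr = BasisSetRep(ham)
dvec = DVec(zip(bsr.basis, eigvecs(Matrix(ham))[:, 1]))

# The single-particle momentum densities must account for all N particles
density = [dot(dvec, DensityMatrixDiagonal(i), dvec) for i in 1:M]
sum(density) ≈ N  # true
```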
This page was generated using Literate.jl.