Public API
Index
- OMEinsumContractionOrders.GreedyMethod
- OMEinsumContractionOrders.HyperND
- OMEinsumContractionOrders.KaHyParBipartite
- OMEinsumContractionOrders.MergeGreedy
- OMEinsumContractionOrders.MergeVectors
- OMEinsumContractionOrders.SABipartite
- OMEinsumContractionOrders.ScoreFunction
- OMEinsumContractionOrders.TreeSA
- OMEinsumContractionOrders.TreeSASlicer
- TensorInference.ArtifactProblemSpec
- TensorInference.BeliefPropgation
- TensorInference.MMAPModel
- TensorInference.RescaledArray
- TensorInference.TensorNetworkModel
- TensorInference.UAIModel
- OMEinsumContractionOrders.contraction_complexity
- TensorInference.belief_propagate
- TensorInference.dataset_from_artifact
- TensorInference.get_cards
- TensorInference.get_vars
- TensorInference.load_tensor_network
- TensorInference.log_probability
- TensorInference.marginals
- TensorInference.maximum_logp
- TensorInference.most_probable_config
- TensorInference.probability
- TensorInference.problem_from_artifact
- TensorInference.random_matrix_product_state
- TensorInference.random_matrix_product_uai
- TensorInference.random_tensor_train_uai
- TensorInference.read_evidence
- TensorInference.read_evidence_file
- TensorInference.read_model
- TensorInference.read_model_file
- TensorInference.read_queryvars
- TensorInference.read_solution
- TensorInference.read_td_file
- TensorInference.sample
- TensorInference.save_tensor_network
- TensorInference.update_evidence!
- TensorInference.update_temperature
Modules
TensorInference — Module

Main module for TensorInference.jl – a toolbox for probabilistic inference using contraction of tensor networks.
Exports
ArtifactProblemSpec, BeliefPropgation, GreedyMethod, HyperND, KaHyParBipartite, MMAPModel, MergeGreedy, MergeVectors, RescaledArray, SABipartite, ScoreFunction, TensorNetworkModel, TreeSA, TreeSASlicer, UAIModel, belief_propagate, contraction_complexity, dataset_from_artifact, get_cards, get_vars, load_tensor_network, log_probability, marginals, maximum_logp, most_probable_config, probability, problem_from_artifact, random_matrix_product_state, random_matrix_product_uai, random_tensor_train_uai, read_evidence, read_evidence_file, read_model, read_model_file, read_queryvars, read_solution, read_td_file, sample, save_tensor_network, update_evidence!, update_temperature
Types
OMEinsumContractionOrders.GreedyMethod — Type

GreedyMethod{MT}
GreedyMethod(; α = 0.0, temperature = 0.0)

A greedy method for optimizing the contraction order. It may not be optimal, but it is fast.
Fields
- α is the parameter of the loss function. For a pairwise interaction, L = size(out) - α * (size(in1) + size(in2)).
- temperature is the parameter for sampling: if it is zero, the minimum-loss contraction is selected; for non-zero values, the contraction is sampled from the Boltzmann distribution p ~ exp(-loss/temperature).
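A minimal usage sketch. The asia-network model file is the one used in the marginals example later on this page; passing the optimizer to TensorNetworkModel via an optimizer keyword is an assumption carried over from the package's examples, not part of this docstring.

```julia
using TensorInference  # re-exports GreedyMethod from OMEinsumContractionOrders

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
optimizer = GreedyMethod(; α = 0.0, temperature = 0.0)
tn = TensorNetworkModel(model; optimizer)  # find the contraction order greedily
```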
OMEinsumContractionOrders.KaHyParBipartite — Type

KaHyParBipartite{RT,IT,GM}
KaHyParBipartite(; sc_target, imbalances=collect(0.0:0.005:0.8),
    max_group_size=40, greedy_config=GreedyMethod())

Optimize the einsum code contraction order using the KaHyPar + Greedy approach. This program first recursively cuts the tensors into several groups using KaHyPar, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm.
Fields
- sc_target is the target space complexity, defined as log2(number of elements in the largest tensor).
- imbalances is a KaHyPar parameter that controls the group sizes in hierarchical bipartition.
- max_group_size is the maximum group size allowed for the greedy search.
- sub_optimizer is the sub-optimizer used to find the contraction order when the group size is small enough.
OMEinsumContractionOrders.HyperND — Type

HyperND(;
dis = KaHyParND(),
algs = (MF(), AMF(), MMD()),
level = 6,
width = 120,
imbalances = 130:130,
score = ScoreFunction(),
)

Nested-dissection based optimizer. Recursively partitions a tensor network, then calls a greedy algorithm on the leaves. The optimizer is run a number of times: once for each greedy algorithm in algs and each imbalance value in imbalances. The recursion depth is controlled by the parameters level and width.
The line graph is partitioned using the algorithm dis. OMEinsumContractionOrders currently supports two partitioning algorithms, both of which require importing an external library.
| type | package |
|---|---|
| METISND | Metis.jl |
| KaHyParND | KaHyPar.jl |
The optimizer is implemented using the tree decomposition library CliqueTrees.jl.
Arguments
- dis: graph partitioning algorithm.
- algs: tuple of elimination algorithms.
- level: maximum level.
- width: minimum width.
- imbalances: imbalance parameters.
- score: a function to evaluate the quality of the contraction tree. Default is ScoreFunction().
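A construction sketch, assuming the default partitioner KaHyParND becomes available once KaHyPar.jl is loaded (per the table above), and reusing the model and optimizer keyword from the examples elsewhere on this page:

```julia
using TensorInference
import KaHyPar  # the default partitioner KaHyParND requires KaHyPar.jl

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
optimizer = HyperND(; level = 6, width = 120, imbalances = 130:130)
tn = TensorNetworkModel(model; optimizer)
```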
OMEinsumContractionOrders.TreeSASlicer — Type

TreeSASlicer{IT, LT} <: CodeSlicer

A structure for configuring the Tree Simulated Annealing (TreeSA) slicing algorithm. The goal of slicing is to reach the target space complexity specified by score.sc_target.
Fields
- ntrials, βs and niters are annealing parameters: do ntrials independent annealings, each with inverse temperatures specified by βs; at each temperature, do niters updates of the tree.
- fixed_slices::Vector{LT}: a vector of fixed slices that should not be altered. Default is an empty vector.
- optimization_ratio::Float64: a constant used for determining the number of iterations for slicing. Default is 2.0; i.e. if the current space complexity is 30 and the target space complexity is 20, then the number of iterations for slicing is (30 - 20) × optimization_ratio.
- score::ScoreFunction: a function to evaluate the quality of the contraction tree. Default is ScoreFunction(sc_target=30.0).
- decomposition_type::AbstractDecompositionType: the type of decomposition to use. Default is TreeDecomp().
OMEinsumContractionOrders.ScoreFunction — Type

ScoreFunction

A function to compute the score of a contraction code:

score = tc_weight * 2^tc + rw_weight * 2^rw + sc_weight * max(0, 2^sc - 2^sc_target)

Fields
- tc_weight: the weight of the time complexity, default is 1.0.
- sc_weight: the weight of the space complexity (the size of the largest tensor), default is 1.0.
- rw_weight: the weight of the read-write complexity, default is 0.0.
- sc_target: the target space complexity, below which the sc_weight will be set to 0 automatically, default is 0.0.
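A construction sketch, assuming ScoreFunction accepts its fields as keyword arguments (the values below are illustrative):

```julia
using TensorInference

# weight time and space equally; only penalize tensors above 2^28 elements
score = ScoreFunction(; tc_weight = 1.0, sc_weight = 1.0, rw_weight = 0.0, sc_target = 28.0)
optimizer = TreeSA(; score)
```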
OMEinsumContractionOrders.MergeGreedy — Type

MergeGreedy <: CodeSimplifier
MergeGreedy(; threshhold=-1e-12)

Contraction code simplifier (in order to reduce the time of calling optimizers) that merges tensors greedily if the space complexity of the merged tensor is reduced (i.e. the difference is smaller than threshhold).
OMEinsumContractionOrders.MergeVectors — Type

MergeVectors <: CodeSimplifier
MergeVectors()

Contraction code simplifier (in order to reduce the time of calling optimizers) that merges vectors into their closest tensors.
OMEinsumContractionOrders.SABipartite — Type

SABipartite{RT,BT}
SABipartite(; sc_target=25, ntrials=50, βs=0.1:0.2:15.0, niters=1000,
    max_group_size=40, greedy_config=GreedyMethod(), initializer=:random)

Optimize the einsum code contraction order using the simulated annealing bipartition + Greedy approach. This program first recursively cuts the tensors into several groups using simulated annealing, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm.
Fields
- sc_target is the target space complexity, defined as log2(number of elements in the largest tensor).
- ntrials is the number of repetitions (with different random seeds).
- βs is a list of inverse temperatures 1/T.
- niters is the number of iterations at each temperature.
- max_group_size is the maximum group size allowed for the greedy search.
- sub_optimizer is the optimizer for the bipartitioned subgraphs; one can choose GreedyMethod() or TreeSA().
- initializer is the partition configuration initializer; one can choose :random or :greedy (slow but better).
OMEinsumContractionOrders.TreeSA — Type

TreeSA{IT, DT} <: CodeOptimizer
TreeSA(; βs=collect(0.01:0.05:15), ntrials=10, niters=50, initializer=:greedy, score=ScoreFunction(), decomposition_type=TreeDecomp())

Optimize the einsum contraction pattern using simulated annealing on the tensor expression tree.
Fields
- ntrials, βs and niters are annealing parameters: do ntrials independent annealings, each with inverse temperatures specified by βs; at each temperature, do niters updates of the tree.
- initializer specifies how to determine the initial configuration; it can be :greedy, :random or :specified. If the initializer is :specified, the input code should be a NestedEinsum object.
- score specifies the score function to evaluate the quality of the contraction tree; it is a function of the time complexity, space complexity and read-write complexity.
- decomposition_type specifies the type of decomposition to use; it can be TreeDecomp or PathDecomp.
Breaking changes:

- nslices is removed, since the slicing part is now separated from the optimization part; see the slice_code function and TreeSASlicer.
- greedy_method is removed. If you want detailed control of the initializer, pre-optimize the code with another method and then use :specified to initialize the tree.
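A usage sketch with the annealing parameters from the signature above; using TreeSA as the optimizer keyword of TensorNetworkModel is an assumption carried over from the examples on this page:

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
optimizer = TreeSA(; ntrials = 10, niters = 50, βs = collect(0.01:0.05:15))
tn = TensorNetworkModel(model; optimizer)
contraction_complexity(tn)  # inspect the time/space complexity of the optimized order
```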
TensorInference.MMAPModel — Type

struct MMAPModel{LT, AT<:AbstractArray}

Computing the most likely assignment to the query variables, Xₘ ⊆ X, after marginalizing out the remaining variables Xₛ = X \ Xₘ:
\[{\rm MMAP}(X_i|E=e) = \arg \max_{X_M} \sum_{X_S} \prod_{F} f(x_M, x_S, e)\]
Fields
- vars is the query variables in the tensor network.
- code is the tropical tensor network contraction pattern.
- tensors is the tensors fed into the tensor network.
- clusters is the clusters; each element is a TensorNetworkModel instance for marginalizing certain variables.
- evidence is a dictionary to specify degrees of freedom fixed to certain values, which should not overlap with the query variables.
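A construction sketch; the queryvars keyword and calling most_probable_config on an MMAPModel are assumptions based on the package's examples, not part of this docstring:

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
mmap = MMAPModel(model; queryvars = [4, 7], evidence = Dict(1 => 0))
logp, config = most_probable_config(mmap)  # MAP over the query variables
```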
TensorInference.RescaledArray — Type

struct RescaledArray{T, N, AT<:AbstractArray{T, N}} <: AbstractArray{T, N}

RescaledArray(α, T) -> RescaledArray

An array data type with a log-prefactor and an l∞-normalized storage, i.e. the maximum element in the tensor is 1. This tensor type avoids the potential underflow/overflow of numbers in a tensor network. The constructor RescaledArray(α, T) creates a rescaled array equal to exp(α) * T.
TensorInference.TensorNetworkModel — Type

struct TensorNetworkModel{ET, MT<:AbstractArray}

Probabilistic modeling with a tensor network.
Fields
- nvars is the number of variables in the tensor network.
- code is the tensor network contraction pattern.
- tensors are the tensors fed into the tensor network; the leading tensors are unity tensors associated with unity_tensors_labels.
- evidence is a dictionary used to specify degrees of freedom that are fixed to certain values.
- unity_tensors_idx is a vector of indices pointing to the unity tensors in the tensors array. Unity tensors are dummy tensors with all entries equal to one, which are used to obtain the marginal probabilities.
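A minimal construction sketch, reusing the asia-network example file from the marginals example later on this page:

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
tn = TensorNetworkModel(model; evidence = Dict(1 => 0))
probability(tn)  # the partition function given the evidence
```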
TensorInference.ArtifactProblemSpec — Type

struct ArtifactProblemSpec

Specifies a UAI model from the artifacts. It can be used as the input of read_model.
Fields
- artifact_path::String
- task::String
- problem_set::String
- problem_id::Int64
TensorInference.UAIModel — Type

struct UAIModel{ET, FT<:(TensorInference.Factor{ET})}

Fields
- nvars is the number of variables.
- cards is a vector of cardinalities of the variables.
- factors is a vector of factors.
TensorInference.BeliefPropgation — Type

struct BeliefPropgation{T}

BeliefPropgation(nvars::Int, t2v::AbstractVector{Vector{Int}}, tensors::AbstractVector{AbstractArray{T}}) where T

A belief propagation object.
Fields
- t2v::Vector{Vector{Int}}: a mapping from tensors to variables.
- v2t::Vector{Vector{Int}}: a mapping from variables to tensors.
- tensors::Vector{AbstractArray{T}}: the tensors.
Functions
OMEinsumContractionOrders.contraction_complexity — Function

contraction_complexity(tensor_network)

Returns the contraction complexity of a tensor network model.
contraction_complexity(eincode, size_dict) -> ContractionComplexity

Returns the time, space and read-write complexity of the einsum contraction. The returned ContractionComplexity object contains 3 fields:
- tc: time complexity, defined as log2(number of element-wise multiplications).
- sc: space complexity, defined as log2(size of the largest intermediate tensor).
- rwc: read-write complexity, defined as log2(number of read-write operations).
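A usage sketch on a TensorNetworkModel (constructed as in the TensorNetworkModel docs above):

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
tn = TensorNetworkModel(model)
cc = contraction_complexity(tn)
cc.tc, cc.sc, cc.rwc  # log2 time, space and read-write complexities
```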
TensorInference.get_cards — Function

get_cards(tn::TensorNetworkModel; fixedisone) -> Vector
Get the cardinalities of the variables in this tensor network.
get_cards(mmap::MMAPModel; fixedisone) -> Vector
TensorInference.get_vars — Function

get_vars(tn::TensorNetworkModel) -> Vector{Int64}
Get the variables in this tensor network; they are also known as legs, labels, or degrees of freedom.
get_vars(mmap::MMAPModel) -> Vector
TensorInference.log_probability — Function

log_probability(
tn::TensorNetworkModel,
config::Union{Dict, AbstractVector}
) -> Real
Evaluate the log probability (or partition function) of config.
log_probability(
tn::TensorNetworkModel;
usecuda
) -> AbstractArray
Evaluate the log probability (or partition function). It is the log version of probability, which is less likely to overflow.
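A sketch of evaluating a specific assignment; the all-zero configuration below is illustrative (one value per variable in get_vars(tn)):

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
tn = TensorNetworkModel(model)
config = zeros(Int, length(get_vars(tn)))  # the all-zero assignment
log_probability(tn, config)
```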
TensorInference.marginals — Function

marginals(
tn::TensorNetworkModel;
usecuda,
rescale
) -> Dict{Vector{Int64}}
Queries the marginals of the variables in a TensorNetworkModel. The function returns a dictionary, where the keys are the variables and the values are their respective marginals. A marginal is a probability distribution over a subset of variables, obtained by integrating or summing over the remaining variables in the model. By default, the function returns the marginals of all individual variables. To specify which marginal variables to query, set the unity_tensors_labels field when constructing a TensorNetworkModel. Note that the choice of marginal variables will affect the contraction order of the tensor network.
Arguments
tn: TheTensorNetworkModelto query.
Keyword Arguments
usecuda::Bool: Specifies whether to use CUDA for tensor contraction.rescale::Bool: Specifies whether to rescale the tensors during contraction.
Example
The following example is taken from examples/asia-network/main.jl.
julia> model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"));
julia> tn = TensorNetworkModel(model; evidence=Dict(1=>0));
julia> marginals(tn)
Dict{Vector{Int64}, Vector{Float64}} with 8 entries:
[8] => [0.450138, 0.549863]
[3] => [0.5, 0.5]
[1] => [1.0]
[5] => [0.45, 0.55]
[4] => [0.055, 0.945]
[6] => [0.10225, 0.89775]
[7] => [0.145092, 0.854908]
[2] => [0.05, 0.95]
julia> tn2 = TensorNetworkModel(model; evidence=Dict(1=>0), unity_tensors_labels = [[2, 3], [3, 4]]);
julia> marginals(tn2)
Dict{Vector{Int64}, Matrix{Float64}} with 2 entries:
[2, 3] => [0.025 0.025; 0.475 0.475]
 [3, 4] => [0.05 0.45; 0.005 0.495]

In this example, we first set the evidence for variable 1 to 0 and then query the marginals of all individual variables. The returned dictionary has keys that correspond to the queried variables and values that represent their marginals. These marginals are vectors, with each entry corresponding to the probability of the variable taking a specific value. In this example, the possible values are 0 or 1. For the evidence variable 1, the marginal is always [1.0] since its value is fixed at 0.
Next, we specify the marginal variables to query as variables 2 and 3, and variables 3 and 4, respectively. The joint marginals may or may not affect the contraction time and space. In this example, the contraction space complexity increases from 2^{2.0} to 2^{5.0}, and the contraction time complexity increases from 2^{5.977} to 2^{7.781}. The output marginals are the joint probabilities of the queried variables, represented by tensors.
marginals(
state::TensorInference.BPState{T, VT} where VT<:AbstractArray{T, 1}
) -> Dict
TensorInference.maximum_logp — Function

maximum_logp(
tn::TensorNetworkModel;
usecuda
) -> AbstractArray{<:Real}
Returns an output array containing the largest log-probabilities.
TensorInference.most_probable_config — Function

most_probable_config(
tn::TensorNetworkModel;
usecuda
) -> Tuple{Real, Vector}
Returns the largest log-probability and the most probable configuration.
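A usage sketch, reusing the asia-network model from the marginals example:

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
tn = TensorNetworkModel(model; evidence = Dict(1 => 0))
logp, config = most_probable_config(tn)  # the largest log-probability and its configuration
maximum_logp(tn)                         # just the largest log-probability, as an array
```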
TensorInference.probability — Function

probability(
tn::TensorNetworkModel;
usecuda,
rescale
) -> AbstractArray
Contract the tensor network and return the probability of the evidence as an array. Precisely speaking, the return value is the partition function, which may not be l1-normalized.

If the number of open variables (openvars) of the input tensor network is zero, the array rank is zero. Otherwise, the return value corresponds to marginal probabilities.
TensorInference.belief_propagate — Function

belief_propagate(
bp::BeliefPropgation;
kwargs...
) -> Tuple{TensorInference.BPState{T, VT} where {T, VT<:Vector{T}}, TensorInference.BPInfo}
Run the belief propagation algorithm, and return the final state and the information about the convergence.
Arguments
bp::BeliefPropgation: the belief propagation object
Keyword Arguments
- max_iter::Int=100: the maximum number of iterations.
- tol::Float64=1e-6: the tolerance for convergence, which is checked via the infidelity of messages in consecutive iterations. For complex numbers, the converged messages may differ only by a phase factor.
- damping::Float64=0.2: the damping factor for the message update: updated-message = damping * old-message + (1 - damping) * new-message.
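A minimal sketch using the constructor documented above, for a chain of three binary variables coupled by two random pairwise factors:

```julia
using TensorInference

t2v = [[1, 2], [2, 3]]  # tensor i acts on the variables in t2v[i]
tensors = AbstractArray{Float64}[rand(2, 2), rand(2, 2)]
bp = BeliefPropgation(3, t2v, tensors)
state, info = belief_propagate(bp; max_iter = 100, tol = 1e-6)
marginals(state)  # single-variable marginals as a Dict
```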
TensorInference.dataset_from_artifact — Function

dataset_from_artifact(
artifact_name::AbstractString
) -> Dict{String, Dict{String, Dict{Int64, ArtifactProblemSpec}}}
Helper function that captures the problem names that belong to problem_set for the given task.
TensorInference.problem_from_artifact — Function

problem_from_artifact(
artifact_name::String,
task::String,
problem_set::String,
problem_id::Int64
) -> ArtifactProblemSpec
Get artifact from artifact name, task name, problem set name and problem id.
TensorInference.read_model — Function

read_model(problem::ArtifactProblemSpec; eltype) -> UAIModel
Read a UAI model from an artifact.
TensorInference.read_evidence — Function

read_evidence(
problem::ArtifactProblemSpec
) -> Dict{Int64, Int64}
TensorInference.read_solution — Function

read_solution(
problem::ArtifactProblemSpec;
factor_eltype
) -> Union{Nothing, Float64, Vector}
Return the solution in the artifact.
The UAI file formats are defined in: https://uaicompetition.github.io/uci-2022/file-formats/
TensorInference.read_queryvars — Function

read_queryvars(
problem::ArtifactProblemSpec
) -> Vector{Int64}
TensorInference.read_model_file — Function

read_model_file(
model_filepath::AbstractString;
factor_eltype
) -> UAIModel
Parse the problem instance found in model_filepath defined in the UAI model format. If the provided file path is empty, return nothing.
The UAI file formats are defined in: https://uaicompetition.github.io/uci-2022/file-formats/
TensorInference.read_evidence_file — Function

read_evidence_file(
evidence_filepath::AbstractString
) -> Tuple{Vector{Int64}, Vector{Int64}}
Return the observed variables and values in evidence_filepath. If the passed file path is an empty string, return empty vectors.
The UAI file formats are defined in: https://uaicompetition.github.io/uci-2022/file-formats/
TensorInference.read_td_file — Function

read_td_file(
td_filepath::AbstractString
) -> Tuple{Int64, Int64, Int64, Vector{Vector{Int64}}, Vector{Vector{Int64}}}
Parse a tree decomposition instance described in the PACE format.
The PACE file format is defined in: https://pacechallenge.org/2017/treewidth/
TensorInference.sample — Function

sample(
tn::TensorNetworkModel,
n::Int64;
usecuda,
queryvars,
rescale
) -> TensorInference.Samples{Int64}
Generate samples from a tensor-network-based probabilistic model. Returns a vector of vectors, each element being a configuration defined on get_vars(tn).
Arguments
- tn is the tensor network model.
- n is the number of samples to be returned.
Keyword Arguments
- rescale is a boolean flag indicating whether to rescale the tensors during contraction.
- usecuda is a boolean flag indicating whether to use CUDA for tensor computation.
- queryvars is the set of variables to be sampled; default is get_vars(tn).
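A usage sketch:

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
tn = TensorNetworkModel(model)
samples = sample(tn, 100)  # 100 configurations defined on get_vars(tn)
```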
TensorInference.update_evidence! — Function

update_evidence!(
tnet::TensorNetworkModel,
evidence::Dict
) -> TensorNetworkModel
Update the evidence of a tensor network model, without changing the set of observed variables.
Arguments
- tnet is the TensorNetworkModel instance.
- evidence is the new evidence; its keys must be a subset of the existing evidence.
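A sketch that changes the value of an already-observed variable; per the docstring, the keys must be a subset of the existing evidence:

```julia
using TensorInference

model = read_model_file(pkgdir(TensorInference, "examples", "asia-network", "model.uai"))
tn = TensorNetworkModel(model; evidence = Dict(1 => 0))
update_evidence!(tn, Dict(1 => 1))  # variable 1 is already observed; flip its value
probability(tn)
```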
TensorInference.update_temperature — Function

update_temperature(
tnet::TensorNetworkModel,
problem::ProblemReductions.ConstraintSatisfactionProblem,
β::Real
) -> TensorNetworkModel
Update the temperature of a tensor network model. The program will regenerate the tensors from the problem, without re-optimizing the contraction order.
Arguments
- tnet is the TensorNetworkModel instance.
- problem is the target constraint satisfaction problem.
- β is the inverse temperature.
TensorInference.random_matrix_product_state — Function

random_matrix_product_state(
::Type{T},
n::Int64,
chi::Int64
) -> TensorNetworkModel{OMEinsum.DynamicNestedEinsum{Int64}}
random_matrix_product_state(
::Type{T},
n::Int64,
chi::Int64,
d::Int64
) -> TensorNetworkModel{OMEinsum.DynamicNestedEinsum{Int64}}
Matrix product state (MPS) is a tensor network model that is widely used in quantum many-body physics. It is a special case of the tensor network model in which the tensors are rank-3 and the physical indices are connected in a chain. The MPS is defined as:
\[\begin{align*} \left| \psi \right\rangle &= \sum_{x_1, x_2, \ldots, x_n} \text{Tr}(A_1^{x_1} A_2^{x_2} \cdots A_n^{x_n}) \left| x_1, x_2, \ldots, x_n \right\rangle \\ \left\langle \psi \right| &= \sum_{x_1, x_2, \ldots, x_n} \text{Tr}(A_n^{x_n} \cdots A_2^{x_2} A_1^{x_1}) \left\langle x_1, x_2, \ldots, x_n \right| \end{align*}\]
where $A_i^{x_i}$ is a rank-3 tensor with physical index $x_i$ and two virtual indices connecting to the next tensor.
Arguments
- n is the number of physical indices.
- chi is the bond dimension of the virtual indices.
- d is the dimension of the physical indices.
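A usage sketch; interpreting the contraction result as the norm square of the state is an assumption based on the bra-ket definition above:

```julia
using TensorInference

tn = random_matrix_product_state(Float64, 10, 4)  # n = 10 sites, bond dimension chi = 4
probability(tn)  # contracts the network, i.e. ⟨ψ|ψ⟩ for this model
```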
TensorInference.random_matrix_product_uai — Function

random_matrix_product_uai(
::Type{T},
n::Int64,
chi::Int64
) -> UAIModel
random_matrix_product_uai(
::Type{T},
n::Int64,
chi::Int64,
d::Int64
) -> UAIModel
Generate a random UAIModel that represents a matrix product state (MPS). Similar to random_matrix_product_state, but returns the UAIModel directly.
TensorInference.random_tensor_train_uai — Function

random_tensor_train_uai(
::Type{T},
n::Int64,
chi::Int64;
...
) -> UAIModel
random_tensor_train_uai(
::Type{T},
n::Int64,
chi::Int64,
d::Int64;
periodic
) -> UAIModel
Tensor train (TT) is a tensor network model that is widely used in quantum many-body physics. This model is different from the matrix product state (MPS) in that it does not have an extra copy for representing the bra state.
TensorInference.save_tensor_network — Function

save_tensor_network(tn::TensorNetworkModel; folder::String)

Save a tensor network model to a folder with separate files for the code, tensors, and model metadata. The code is saved using OMEinsum.writejson, the tensors as JSON, and model specifics in model.json.
Arguments
- tn::TensorNetworkModel: the tensor network model to save.
- folder::String: the folder path to save the files to.
Files Created
- code.json: contains the einsum code using the OMEinsum format.
- tensors.json: contains the tensor data as JSON.
- model.json: contains nvars, evidence, and unity_tensors_idx.
Example
tn = TensorNetworkModel(...) # create your model
save_tensor_network(tn; folder="my_model")

TensorInference.load_tensor_network — Function

load_tensor_network(folder::String)

Load a tensor network model from a folder containing the code, tensors, and model files.
Arguments
folder::String: The folder path containing the files
Returns
TensorNetworkModel: The loaded tensor network model
Required Files
- code.json: contains the einsum code using the OMEinsum format.
- tensors.json: contains the tensor data as JSON.
- model.json: contains nvars, evidence, and unity_tensors_idx.
Example
tn = load_tensor_network("my_model")