Performance evaluation

The graph below compares the runtime performance of TensorInference.jl against the Merlin [marinescu2022merlin], libDAI [mooij2010libdai], and JunctionTrees.jl [roa2022partial] libraries on the task of computing the marginal probabilities of all variables. Both Merlin and libDAI have previously participated in UAI inference competitions [gal2010summary][gogate2014uai], achieving favorable results. Additionally, we compared against JunctionTrees.jl, the predecessor of TensorInference.jl. The experiments were conducted on an Intel Core i9-9900K CPU @ 3.60 GHz with 64 GB of RAM. Performance comparisons for other tasks will be added in the near future.
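For reference, the benchmarked task can be reproduced along the following lines. This is a minimal sketch based on the function names in the package documentation (`read_model_file`, `TensorNetworkModel`, and `marginals`); the model path is a placeholder for any instance in the UAI file format, such as the competition benchmarks used above.

```julia
using TensorInference

# Load a model in the UAI file format (placeholder path; substitute any
# benchmark instance, e.g. one from the UAI inference competitions).
model = read_model_file("problem.uai")

# Build the tensor network representation of the probabilistic model.
tn = TensorNetworkModel(model)

# Compute the marginal probabilities of all variables -- the task
# benchmarked in the graph above.
mars = marginals(tn)
```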

The benchmark problems are arranged along the x-axis in ascending order of complexity, as measured by their induced tree width. On average, TensorInference.jl achieves a 20-fold speedup across all problems. Notably, for the 10 most complex problems, the average speedup increases to 148-fold, highlighting its superior scalability. The graph includes a linear curve fitted in log-space to underscore the exponential improvement in computation time that TensorInference.jl achieves over the alternatives. This speedup stems primarily from our package's approach: while traditional solvers typically minimize only the space complexity of inference (as quantified by the induced tree width), TensorInference.jl optimizes the tensor network contraction order for both time and space complexity.
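As a sketch of how this optimization is exposed, the `TensorNetworkModel` constructor accepts a contraction-order optimizer; the example below assumes the `optimizer` keyword and the `TreeSA` simulated-annealing optimizer re-exported from OMEinsumContractionOrders, as described in the package documentation, and the exact keyword arguments shown are illustrative.

```julia
using TensorInference

model = read_model_file("problem.uai")  # placeholder path, as above

# Search for a contraction order that trades off time and space complexity.
# TreeSA is a simulated-annealing optimizer; ntrials/niters control the
# breadth and depth of the search (values here are illustrative).
tn = TensorNetworkModel(model; optimizer = TreeSA(ntrials = 1, niters = 5))

# Inspect the time and space complexity of the chosen contraction order.
contraction_complexity(tn)
```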

References