Implement Belief propagation as a new inference backend (#97)
* clean up
* update
* vars -> nvars
* update
* update
* update
* update
* implement marginals
* fix docs
* update
* Change julia version to 1.10 in the compat file
* add uai test
* format document and fix tests
* fix docstring
* clean up
Belief propagation[^Yedidia2003] is a message-passing algorithm that can be used to compute the marginals of a probabilistic graphical model. It has close connections with tensor networks: it can be viewed as a way to gauge a tensor network[^Tindall2023], and it can be combined with tensor network contraction to achieve better performance[^Wang2024].
Belief propagation is an approximate method, and the quality of the approximation can be improved by the loop series expansion[^Evenbly2024].
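The message-passing idea can be sketched in a few lines of Julia. This is a minimal illustration on a three-variable chain with made-up pairwise factors, not the backend implemented in this PR; on a tree-structured model the messages reproduce the exact marginals:

```julia
# Sum-product belief propagation on the chain x1 - x2 - x3.
# Factor values are arbitrary, for illustration only.
ψ12 = [1.0 2.0; 3.0 1.0]   # ψ12[x1, x2]
ψ23 = [2.0 1.0; 1.0 4.0]   # ψ23[x2, x3]

# Messages flowing toward x2 along the chain.
m1to2 = vec(sum(ψ12; dims=1))   # μ_{1→2}(x2) = Σ_{x1} ψ12(x1, x2)
m3to2 = vec(sum(ψ23; dims=2))   # μ_{3→2}(x2) = Σ_{x3} ψ23(x2, x3)

# The belief at x2 is the normalized product of incoming messages.
b2 = m1to2 .* m3to2
b2 ./= sum(b2)

# On a tree, BP is exact: compare with the brute-force marginal.
p2 = [sum(ψ12[x1, x2] * ψ23[x2, x3] for x1 in 1:2, x3 in 1:2) for x2 in 1:2]
p2 ./= sum(p2)
@assert b2 ≈ p2
```

On graphs with loops the same updates are iterated to a fixed point, which is where the approximation (and the loop series correction) enters.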
## References
[^Orus2014]:
    Orús, R., 2014. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics 349, 117–158.
[^Liu2023]:
    Liu, J.G., Gao, X., Cain, M., et al., 2023. Computing solution space properties of combinatorial optimization problems via generic tensor networks. SIAM Journal on Scientific Computing 45(3), A1239–A1270.
[^Yedidia2003]:
Yedidia, J.S., Freeman, W.T., Weiss, Y., 2003. Understanding belief propagation and its generalizations, in: Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pp. 239–269.
Diff in `src/Core.jl` (+19 −82):
```diff
@@ -45,18 +45,18 @@ $(TYPEDEF)
 
 Probabilistic modeling with a tensor network.
 
 ### Fields
-* `vars` are the degrees of freedom in the tensor network.
+* `nvars` is the number of variables in the tensor network.
 * `code` is the tensor network contraction pattern.
-* `tensors` are the tensors fed into the tensor network; the leading tensors are unity tensors associated with `mars`.
+* `tensors` are the tensors fed into the tensor network; the leading tensors are unity tensors associated with `unity_tensors_labels`.
 * `evidence` is a dictionary used to specify degrees of freedom that are fixed to certain values.
-* `mars` is a vector whose elements are vectors of variables for which marginal probabilities are computed.
+* `unity_tensors_idx` is a vector of indices of the unity tensors in the `tensors` array. Unity tensors are dummy tensors used to obtain the marginal probabilities.
 """
-struct TensorNetworkModel{LT, ET, MT <: AbstractArray}
-    vars::Vector{LT}
+struct TensorNetworkModel{ET, MT <: AbstractArray}
+    nvars::Int
     code::ET
     tensors::Vector{MT}
-    evidence::Dict{LT, Int}
-    mars::Vector{Vector{LT}}
+    evidence::Dict{Int, Int}
+    unity_tensors_idx::Vector{Int}
 end
 
 """
```
```diff
@@ -78,7 +78,7 @@ end
 
 function Base.show(io::IO, tn::TensorNetworkModel)
     open = getiyv(tn.code)
-    variables = join([string_var(var, open, tn.evidence) for var in tn.vars], ", ")
+    variables = join([string_var(var, open, tn.evidence) for var in get_vars(tn)], ", ")
     tc, sc, rw = contraction_complexity(tn)
     println(io, "$(typeof(tn))")
     println(io, "variables: $variables")
```
```diff
@@ -110,102 +110,42 @@ $(TYPEDSIGNATURES)
 * `evidence` is a dictionary of evidences; the values are integers counting from 0.
 * `optimizer` is the tensor network contraction order optimizer; please check the package [`OMEinsumContractionOrders.jl`](https://github.com/TensorBFS/OMEinsumContractionOrders.jl) for available algorithms.
 * `simplifier` is a strategy for speeding up the `optimizer`; please refer to the same link above.
-* `mars` is a list of marginal probabilities. It is all single variables by default, i.e. `[[1], [2], ..., [n]]`. One can also specify multi-variables, which may increase the computational complexity.
+* `unity_tensors_labels` is a list of labels for the unity tensors. It is all single variables by default, i.e. `[[1], [2], ..., [n]]`. One can also specify multi-variables, which may increase the computational complexity.
 """
 function TensorNetworkModel(
-    model::UAIModel;
+    model::UAIModel{ET, FT};
     openvars = (),
     evidence = Dict{Int, Int}(),
     optimizer = GreedyMethod(),
     simplifier = nothing,
-    mars = [[i] for i = 1:model.nvars]
-)::TensorNetworkModel
-    return TensorNetworkModel(
-        1:(model.nvars),
-        model.cards,
-        model.factors;
-        openvars,
-        evidence,
-        optimizer,
-        simplifier,
-        mars
-    )
-end
-
-"""
-$(TYPEDSIGNATURES)
-"""
-function TensorNetworkModel(
-    vars::AbstractVector{LT},
-    cards::AbstractVector{Int},
-    factors::Vector{<:Factor{T}};
-    openvars = (),
-    evidence = Dict{LT, Int}(),
-    optimizer = GreedyMethod(),
-    simplifier = nothing,
-    mars = [[v] for v in vars]
-)::TensorNetworkModel where {T, LT}
-    # The 1st argument of `EinCode` is a vector of vectors of labels specifying the input tensors,
-    # the 2nd argument of `EinCode` is a vector of labels specifying the output tensor, e.g.
-    # `EinCode([[1, 2], [2, 3]], [1, 3])` is the EinCode for matrix multiplication.
-    rawcode = EinCode([mars..., [[factor.vars...] for factor in factors]...], collect(LT, openvars)) # labels for vertex tensors (unity tensors) and edge tensors
-    tensors = Array{T}[[ones(T, [cards[i] for i in mar]...) for mar in mars]..., [t.vals for t in factors]...]
 ⋮
     # `optimize_code` optimizes the contraction order of a raw tensor network without a contraction order specified.
     # The 1st argument is the contraction pattern to be optimized (without contraction order).
     # The 2nd argument is the size dictionary, which is a label-integer dictionary.
     # The 3rd and 4th arguments are the optimizer and simplifier that configure the algorithm and simplification to use.
+    rawcode = EinCode([unity_tensors_labels..., [[factor.vars...] for factor in model.factors]...], collect(Int, openvars)) # labels for vertex tensors (unity tensors) and edge tensors
+    tensors = Array{ET}[[ones(ET, [model.cards[i] for i in lb]...) for lb in unity_tensors_labels]..., [t.vals for t in model.factors]...]
```
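The label bookkeeping in the new constructor can be mimicked in plain Julia: the unity-tensor labels go first in the einsum input list, so their positions in `tensors` are simply the leading indices. The factor labels below are made up for illustration; only the ordering convention is taken from the diff above:

```julia
# Assemble einsum input labels the way the constructor does:
# unity tensors first, then factor tensors.
nvars = 3
unity_tensors_labels = [[i] for i in 1:nvars]   # default: one unity tensor per variable
factor_vars = [[1, 2], [2, 3]]                  # labels of the factor tensors (illustrative)
ixs = [unity_tensors_labels..., factor_vars...] # input labels for the EinCode
openvars = Int[]                                # output labels: closed network

# `unity_tensors_idx` then records the leading positions in `tensors`.
unity_tensors_idx = collect(1:length(unity_tensors_labels))
@assert ixs == [[1], [2], [3], [1, 2], [2, 3]]
```

Storing only `nvars::Int` and `unity_tensors_idx` (instead of `vars` and `mars`) works because variables are now plain integers `1:nvars`, so the label type parameter `LT` is no longer needed.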