I am trying to implement Bayesian networks.
My main graph is a factor graph that I want to use for belief propagation. However, when computing messages in belief propagation, not all of the arguments are passed to a factor at once, so the resulting function is a restriction of the joint distribution.
The best approach I can think of is to somehow restrict the functions so that I do not have to redo all of the substitutions every time I want to calculate a marginal for a new value.
I asked how to implement such a function here.
I want to know whether there is a better way to do this, or whether there are simpler and faster approaches than the one I have in mind.
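For concreteness, here is a rough sketch of the kind of restriction I mean (in Python; the variables and probability values are made up):

```python
from functools import partial

# Toy joint distribution over two binary variables (illustrative numbers only).
P = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

def joint(a, b):
    return P[(a, b)]

# "Restrict" the joint by fixing a once; the result is a function of b only.
restricted = partial(joint, 1)

# Marginal queries for different values of b now reuse the same restriction.
print(restricted(0))  # P(a=1, b=0)
print(restricted(1))  # P(a=1, b=1)
```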
Here's a suggestion: create a closure that accepts a map whose key-value pairs are the initial variables and their values, used for the first computation. The outer function returns an inner function that accepts another map with the remaining variable-value pairs for the final computation.
So define a closure where the first partial computation is done in the outer function. Based on your link, the partial computation is a sum, but I imagine you will be computing products of probabilities. The inner function has access to the partial result as a free variable, and the computation is completed when you invoke it with a map containing the remaining variable-value pairs.
You can also define, in the outer function, a set that holds all variables used in the first computation. The inner function can access this set as a free variable as well, which ensures that any variable keys already consumed in the first computation are excluded from the final computation.
All of this is illustrated below.
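Here is a minimal Python sketch of the idea; the factor list, variable names, and probability values are placeholders, and in practice they would come from your factor graph:

```python
def make_restricted(factors, initial):
    """Outer function: do the first partial computation once.

    `factors` is a list of (variables, function) pairs and `initial` is a
    dict mapping each variable known up front to its value.
    """
    used = set(initial)  # variables consumed by the first computation

    # Partial product over the factors whose variables are all known now.
    partial_value = 1.0
    remaining_factors = []
    for variables, fn in factors:
        if set(variables) <= used:
            partial_value *= fn(**{v: initial[v] for v in variables})
        else:
            remaining_factors.append((variables, fn))

    def complete(rest):
        """Inner function: finish the computation with the remaining values."""
        # Any keys already consumed in the first computation are excluded.
        values = {**initial, **{k: v for k, v in rest.items() if k not in used}}
        result = partial_value
        for variables, fn in remaining_factors:
            result *= fn(**{v: values[v] for v in variables})
        return result

    return complete

# Example usage with toy factors (numbers are made up):
factors = [
    (("a",), lambda a: 0.6 if a else 0.4),
    (("a", "b"), lambda a, b: 0.9 if a == b else 0.1),
]
g = make_restricted(factors, {"a": 1})
print(g({"b": 0}))  # 0.6 * 0.1
print(g({"b": 1}))  # 0.6 * 0.9
```

The substitution for the fixed variables happens once, inside `make_restricted`; each call to the returned closure only evaluates the factors that still depend on the remaining variables.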