- When implementing the macro, test it with "Expand macro recursively"
- Use the `Fn` type to check that the given function is pure (note that polymorphic functions have to be handled as well)
- Error handling should be as comprehensible as possible to the end user (if possible use custom errors; if not, ensure at least that the right part of the code is highlighted)
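A minimal std-only sketch of the `Fn`-bound idea (names hypothetical): requiring `Fn` rather than `FnMut`/`FnOnce` rejects closures that mutate captured state at compile time. Note this is only a coarse gate — `Fn` does not rule out I/O or reads of ambient state.

```rust
// Hypothetical sketch: an identity function whose bound acts as a coarse
// purity gate. A closure that mutates its captures only implements `FnMut`,
// so passing it here fails to compile.
fn require_fn<A, R, F: Fn(A) -> R>(f: F) -> F {
    f
}

fn main() {
    // Accepted: no mutable captures.
    let pure = require_fn(|x: i32| x + 1);
    assert_eq!(pure(2), 3);

    // Rejected at compile time if uncommented: mutates its capture,
    // so it is only `FnMut`, not `Fn`.
    // let mut count = 0;
    // let impure = require_fn(move |x: i32| { count += x; count });

    println!("ok");
}
```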
- How do we hand the `tracing_*` arguments through to other prob functions being called?
  - Simple option: Any function call which takes the same arguments as a `prob` function would take is transformed during code analysis in the macro (make `tracing_path` a new-type for that). Problem: How do we handle funky uses of functions?
- Clean up the trace after each iteration? Is this principally necessary or only for performance? (<--!!?)
- Implement more informed proposal functions
- Handle MCMC for a failed sample
- Simplify trace entries such that we have fewer bespoke enums for the distributions (-> `new_structure.rs`)
- Change the effect of `condition` to simply adding `-INF` to the log-probability, rather than `FnProb` having a `Result` return type.
- Define correct kernels for distributions! They have to be symmetric around the current state, right?
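On the symmetry point: a Gaussian random-walk proposal is the standard example of a symmetric kernel. A small sketch checking that q(x' | x) = q(x | x') holds, which is what lets the proposal term cancel from the MH acceptance ratio:

```rust
// Log-density of a normal distribution; used here as a random-walk proposal
// kernel q(x' | x) = N(x'; x, sigma).
fn normal_log_density(x: f64, mean: f64, sigma: f64) -> f64 {
    let z = (x - mean) / sigma;
    -0.5 * z * z - sigma.ln() - 0.5 * (2.0 * std::f64::consts::PI).ln()
}

fn main() {
    let (x, x_new, sigma) = (1.3, 2.1, 0.5);
    // Symmetry around the current state: forward and backward proposal
    // densities agree, so they cancel in the MH acceptance ratio.
    let forward = normal_log_density(x_new, x, sigma);
    let backward = normal_log_density(x, x_new, sigma);
    assert!((forward - backward).abs() < 1e-12);
    println!("ok");
}
```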
- Idea: Add macro that can be applied to enums such that a categorical distribution over the enum is a primitive (optionally non-uniform)
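A sketch of what such a macro could expand to for a concrete enum (everything here is hypothetical hand-written output, not the macro itself):

```rust
// What a derive-style macro might generate: a uniform categorical over the
// enum's variants, driven by a uniform draw `u` in [0, 1).
#[derive(Debug, PartialEq, Clone, Copy)]
enum Coin {
    Heads,
    Tails,
}

impl Coin {
    const VARIANTS: [Coin; 2] = [Coin::Heads, Coin::Tails];

    fn sample_uniform(u: f64) -> Coin {
        // Map [0, 1) onto variant indices; clamp guards against u == 1.0.
        let idx = ((u * Self::VARIANTS.len() as f64) as usize)
            .min(Self::VARIANTS.len() - 1);
        Self::VARIANTS[idx]
    }
}

fn main() {
    assert_eq!(Coin::sample_uniform(0.1), Coin::Heads);
    assert_eq!(Coin::sample_uniform(0.9), Coin::Tails);
    println!("ok");
}
```

The non-uniform variant would take a weight per variant and walk the cumulative sum instead of multiplying.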
- Idea: Add optional parameter to `sample!` which gives a name to the location instead of the normal numbering
- Maybe add some kind of "Block Resimulation" MH? I.e. instead of only changing one variable at a time in our MH implementation, we change a bunch at once, with the bunch being one "block". The blocks could be user-picked (additional macro), or automatically detected?
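A rough sketch of the block proposal itself (names hypothetical; in the real thing the offsets would come from the per-site kernels):

```rust
// "Block resimulation" proposal sketch: wiggle every variable in one block
// at once instead of a single site. `block` holds the trace indices of the
// block's sites; `offsets` are symmetric random-walk moves for each of them.
fn propose_block(trace: &[f64], block: &[usize], offsets: &[f64]) -> Vec<f64> {
    let mut proposal = trace.to_vec();
    for (&i, &off) in block.iter().zip(offsets) {
        proposal[i] += off;
    }
    proposal
}

fn main() {
    let trace = vec![0.0, 1.0, 2.0, 3.0];
    // Block = sites 1 and 2; sites outside the block stay untouched.
    let proposal = propose_block(&trace, &[1, 2], &[0.5, -0.5]);
    assert_eq!(proposal, vec![0.0, 1.5, 1.5, 3.0]);
    println!("ok");
}
```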
- Factor out as much as possible from the MH implementation (i.e. the initial choice, the repetitions, etc.; we should end up with just a single MH step in a function). Also, just clone the trace to construct the new one; that makes life easier.
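The factored-out single step could be as small as the accept/reject decision; a sketch (the uniform draw `u` is passed in so the step itself stays deterministic and testable):

```rust
// A single MH accept/reject step: everything else (initial choice,
// repetitions, proposal generation) lives outside this function.
// `u` is a uniform draw in [0, 1).
fn mh_accept(current_log_prob: f64, proposed_log_prob: f64, u: f64) -> bool {
    // Accept with probability min(1, exp(proposed - current)),
    // computed in log-space to avoid overflow.
    u.ln() < proposed_log_prob - current_log_prob
}

fn main() {
    // A strictly better proposal is accepted even for u close to 1...
    assert!(mh_accept(-10.0, -1.0, 0.999));
    // ...while a much worse one is rejected for a middling u.
    assert!(!mh_accept(-1.0, -10.0, 0.5));
    println!("ok");
}
```

Note this also plays nicely with the `condition`-adds-`-INF` idea above: a proposed trace with log-probability `-inf` is never accepted.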
- Refactor to immutable as much as possible, esp. in the macros
- Can we also refactor the `prob` functions themselves? I.e. instead of passing around `&mut` traces, sampling from a `prob` function (or primitive) returns a trace, which is then integrated into the current trace.
- Are we correctly tracing (-> path!) sampling inside a `while` loop's condition? E.g. `while sample!(bernoulli(0.5)) { ... }` should be traced like `loop { if !sample!(bernoulli(0.5)) { break; } ... }`
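The equivalence of the two forms can be checked by hand with a deterministic stand-in for the sampler that counts how often it is consulted (all names hypothetical):

```rust
// A deterministic stand-in for `sample!(bernoulli(...))` that replays a
// fixed sequence of draws and counts how many were consumed.
struct Sampler<'a> {
    draws: &'a [bool],
    consumed: usize,
}

impl<'a> Sampler<'a> {
    fn sample(&mut self) -> bool {
        let d = self.draws[self.consumed];
        self.consumed += 1;
        d
    }
}

// The `while`-condition form...
fn run_while(draws: &[bool]) -> usize {
    let mut s = Sampler { draws, consumed: 0 };
    while s.sample() {}
    s.consumed
}

// ...and the desugared `loop { if !... { break; } }` form should consume
// exactly the same number of draws, i.e. trace the same samples.
fn run_loop(draws: &[bool]) -> usize {
    let mut s = Sampler { draws, consumed: 0 };
    loop {
        if !s.sample() {
            break;
        }
    }
    s.consumed
}

fn main() {
    let draws = [true, true, false];
    assert_eq!(run_while(&draws), run_loop(&draws));
    assert_eq!(run_while(&draws), 3);
    println!("ok");
}
```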
- Idea: a `smooth_condition!`/`sc!` macro, e.g. `sc!(x > 0)` is closer to probability 1 the further positive x is, and closer to probability 0 the further negative x is. Is there some general, objectively most sensible way to do this? Can we handle `sc!(x > 0 && y < 0)`?
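One candidate semantics, sketched as plain functions (the macro names and the choice of a logistic indicator are assumptions, not a settled design):

```rust
// A logistic soft indicator for `sc!(x > 0)`: near 1 for large positive x,
// near 0 for large negative x, exactly 0.5 at the boundary.
// `k` controls how sharply the soft condition approximates the hard one.
fn smooth_gt_zero(x: f64, k: f64) -> f64 {
    1.0 / (1.0 + (-k * x).exp())
}

// One option for `sc!(a && b)`: multiply the factors, i.e. add their
// log-probabilities. (Whether this is the "objectively most sensible"
// choice is exactly the open question.)
fn smooth_and(p: f64, q: f64) -> f64 {
    p * q
}

fn main() {
    assert!(smooth_gt_zero(10.0, 1.0) > 0.99);
    assert!(smooth_gt_zero(-10.0, 1.0) < 0.01);
    assert!((smooth_gt_zero(0.0, 1.0) - 0.5).abs() < 1e-12);

    let conj = smooth_and(smooth_gt_zero(3.0, 1.0), smooth_gt_zero(2.0, 1.0));
    assert!(conj > 0.0 && conj < 1.0);
    println!("ok");
}
```

As k -> infinity this recovers the hard `condition`, which might make a nice tuning knob.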
- Add lifetime handling of immutable references to macros. The easiest option would be to just require explicit annotation of all lifetimes of passed-in references, and then add all lifetimes to the return type: `impl FnProb<...> + 'a + 'b + 'c`.
- Refactor `ParametrizedValue` into a struct with `dyn ...`. Ideally, we would only have to implement the right traits for some primitive distribution and then it can be packaged up. Though I'm not 100% sure whether that's possible with the current `dyn` constraints.
- Multiple chains!!
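Multiple chains are embarrassingly parallel; a sketch with `std::thread`, where `run_chain` is a stand-in (here a tiny LCG) for the actual per-chain MH loop:

```rust
use std::thread;

// Stand-in for one MH chain: a seeded deterministic sequence of draws.
// In the real thing this would be the sampler with its own RNG and trace.
fn run_chain(seed: u64, steps: usize) -> Vec<u64> {
    let mut state = seed;
    (0..steps)
        .map(|_| {
            // LCG step (constants from Knuth's MMIX), purely illustrative.
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            state
        })
        .collect()
}

fn main() {
    // Spawn one thread per chain, each with its own seed, then pool results.
    let handles: Vec<_> = (0..4u64)
        .map(|seed| thread::spawn(move || run_chain(seed, 100)))
        .collect();
    let all: Vec<Vec<u64>> = handles.into_iter().map(|h| h.join().unwrap()).collect();

    assert_eq!(all.len(), 4);
    assert!(all.iter().all(|c| c.len() == 100));
    // Different seeds give different chains.
    assert_ne!(all[0], all[1]);
    println!("ok");
}
```

Pooled draws from several chains also enable convergence diagnostics (e.g. comparing within- and between-chain variance) down the line.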
- We could unify probprogs and primitives more by reconsidering how we treat `sample!`s from probprogs. A `sample!` from a probprog is just the same as a `sample!` from a primitive, with the difference being that for deterministic exploration, the "value" we impose is the subtrace for it. So we wouldn't have to trace them differently, and the resampling would essentially be the same as well. For both we would impose a "trace" when sampling deterministically, the trace for the primitive being just a node, while the trace for the probprog is an actual subtree. Though it still might make sense to address them separately to decrease the probability of misses. Also, for randomly picking some place in our trace to wiggle, we still want to look at a flat list of all primitive distributions (or do we?).
- ^ From the last point above: Would it make more sense, rather than considering all primitives from an execution as a flat list, to treat sampling from a probprog just the same as sampling from a primitive? The kernel for a probprog would be to wiggle at some sampling in it. So we uniformly choose some `sample` that was encountered and just kernel on it. If it's a primitive, cool; if it's a probprog, cool, we descend and do the same on it. The plus is that this is prettier, though I don't know whether it would be better or worse than flat exploration. Try it out! :)
- Rather than just iterating through (primitive) `sample!` expressions as we encounter them, we could in the `prob` macro give them unique addresses in the trace tree (with multiple encounters of the same `sample!` expression being simply sub-indexed). This might in some situations reduce misses during deterministic execution. Though this would prevent exploitation of situations like `if s!(...) { s!(a(...)) } else { s!(a(...)) }`, since the two inner sample expressions would have different addresses. Also, this might not even help that much, since loops and recursion are already banished to subtraces. Though this would allow us to get rid of the need for injecting stuff into loops. The minus is that we would elevate `sample!` to a special token, with no guarantee that it actually is the `sample!` we are looking for.
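The (site, occurrence) addressing scheme could be sketched like this (names hypothetical; `site_id` stands for whatever unique id the `prob` macro assigns to each `sample!` expression):

```rust
use std::collections::HashMap;

// Trace keyed by (site_id, occurrence): repeat encounters of the same
// `sample!` site get sub-indexed by an occurrence counter.
struct AddressedTrace {
    values: HashMap<(usize, usize), f64>, // (site_id, occurrence) -> value
    occurrences: HashMap<usize, usize>,   // how often each site was hit so far
}

impl AddressedTrace {
    fn new() -> Self {
        AddressedTrace {
            values: HashMap::new(),
            occurrences: HashMap::new(),
        }
    }

    // Record a draw at `site_id`; returns the full address it was stored under.
    fn record(&mut self, site_id: usize, value: f64) -> (usize, usize) {
        let occ = self.occurrences.entry(site_id).or_insert(0);
        let addr = (site_id, *occ);
        *occ += 1;
        self.values.insert(addr, value);
        addr
    }
}

fn main() {
    let mut t = AddressedTrace::new();
    assert_eq!(t.record(7, 0.1), (7, 0));
    assert_eq!(t.record(7, 0.2), (7, 1)); // same site, second encounter
    assert_eq!(t.record(3, 0.3), (3, 0));
    assert_eq!(t.values[&(7, 1)], 0.2);
    println!("ok");
}
```

During deterministic re-execution, a lookup miss at an address would then mean "resample fresh here" instead of silently consuming the wrong value, which is exactly the miss-reduction hoped for above.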