A user should get at least some feedback, and at best interesting information, when performing an action.
Progress bars are a must when sampling (at least during exploratory analysis); they make sampling times more bearable.
How do we manage progress bars for the warmup, which is currently managed program-side? I would like to avoid adding progress bar logic there.
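A minimal sketch of what driving both stages from Python-side loops could look like, each wrapped in its own tqdm bar; `warmup_step`, `sampling_step`, the state object and the key layout are hypothetical, not mcx's actual API.

```python
from tqdm import tqdm

def run(rng_keys, initial_state, warmup_step, sampling_step, num_warmup, num_samples):
    """Run warmup then sampling, each stage with its own progress bar."""
    state = initial_state

    # Warmup: one bar, updated from the Python loop that drives the program-side warmup.
    for i in tqdm(range(num_warmup), desc="Warming up"):
        state = warmup_step(rng_keys[i], state)

    # Sampling: a second bar for the draws themselves.
    samples = []
    for i in tqdm(range(num_samples), desc="Sampling"):
        state = sampling_step(rng_keys[num_warmup + i], state)
        samples.append(state)

    return samples
```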
JIT compilation of the logpdfs and the kernel can take some time; inform the user of what is being done and how long it took.
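A sketch of how compilation could be timed and reported, assuming a recent JAX version with the ahead-of-time `lower`/`compile` API; the function name and message are illustrative only.

```python
import time
import jax

def compile_and_report(fn, *example_args, name="kernel"):
    """Compile `fn` ahead of time for the given example arguments and report how long it took."""
    print(f"Compiling the {name}...", end=" ", flush=True)
    start = time.perf_counter()
    compiled = jax.jit(fn).lower(*example_args).compile()
    print(f"done in {time.perf_counter() - start:.1f} s")
    return compiled
```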
It is important to warn users about the number of divergences while the chains are sampling: many users will not want to keep sampling when there are too many divergences early on. Caveat: how do we do this when there is a large number of chains? One option is tqdm's `set_postfix`.
Interactive reporting of the ESS could also go through tqdm's `set_postfix`; a sketch is given below.
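A sketch of surfacing the divergence count and a rough ESS in the progress bar via `set_postfix`; `sampling_step`, `state.position` and `info.is_divergent` are hypothetical names, and the ESS here is recomputed with ArviZ on the draws collected so far, which is too expensive to do at every iteration, hence the periodic refresh.

```python
import numpy as np
import arviz as az
from tqdm import tqdm

def sample_with_postfix(rng_keys, state, sampling_step, num_samples):
    """Sampling loop that shows the divergence count, and periodically a rough ESS, in the bar."""
    draws, num_divergences = [], 0
    progress = tqdm(range(num_samples), desc="Sampling")
    for i in progress:
        state, info = sampling_step(rng_keys[i], state)
        num_divergences += int(np.sum(info.is_divergent))  # summed over all chains
        draws.append(np.asarray(state.position))
        postfix = {"divergences": num_divergences}
        if (i + 1) % 100 == 0:  # ESS is costly to estimate, refresh it only every 100 draws
            positions = np.stack(draws, axis=1)  # (chain, draw, ...)
            postfix["ess"] = round(float(az.ess(az.convert_to_dataset(positions)).x.min()))
        progress.set_postfix(postfix)
    return draws
```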
At the end of sampling, report some important statistics on the sampled variables: median value, variance across chains, Rhat, ESS per chain, and average acceptance rate.
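A sketch of such a report, assuming the trace is a dict of `(chain, draw)` arrays of scalar variables and that acceptance rates are available separately; Rhat and ESS are delegated to ArviZ.

```python
import numpy as np
import arviz as az

def print_summary(samples, acceptance_rates):
    """Print median, across-chain variance, Rhat, ESS per chain and acceptance rate per variable."""
    idata = az.from_dict(posterior=samples)
    num_chains = next(iter(samples.values())).shape[0]
    for name, values in samples.items():
        chain_means = values.mean(axis=1)  # one mean per chain
        rhat = float(az.rhat(idata, var_names=[name])[name])
        ess_per_chain = float(az.ess(idata, var_names=[name])[name]) / num_chains
        print(
            f"{name}: median={np.median(values):.3f}, "
            f"var(chains)={chain_means.var():.3f}, "
            f"Rhat={rhat:.3f}, ESS/chain={ess_per_chain:.0f}, "
            f"accept={np.mean(acceptance_rates):.2f}"
        )
```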
Inference data
There are several things we need to consider when thinking about how to represent inference data in mcx:
Full interoperability with ArviZ. Many libraries add a function that converts their internal format to ArviZ's; we can take care of that in mcx with a `to_arviz()` method if we go the object route (see the sketch below).
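A sketch of what the object route could look like, with a hypothetical `Trace` class that wraps the raw samples and delegates ArviZ interoperability to `az.from_dict`:

```python
import arviz as az

class Trace:
    """Hypothetical trace object: a thin store for raw samples and sampling statistics."""

    def __init__(self, samples, sample_stats=None):
        self.samples = samples            # dict: name -> array of shape (chain, draw, ...)
        self.sample_stats = sample_stats  # dict: e.g. {"diverging": ..., "acceptance_rate": ...}

    def to_arviz(self) -> az.InferenceData:
        """Convert the trace to ArviZ's InferenceData format."""
        return az.from_dict(posterior=self.samples, sample_stats=self.sample_stats)
```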
Since sequential sampling is central to the library, we need to be able to append samples as we go; plain dictionaries would make this cumbersome.
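A sketch of appending a new batch of draws along the draw axis; `append` is hypothetical and would naturally live on the trace object rather than as a free function.

```python
import numpy as np

def append(trace_samples, new_samples):
    """Concatenate a new batch of draws, shaped (chain, new_draws, ...), onto the existing trace."""
    return {
        name: np.concatenate([trace_samples[name], new_samples[name]], axis=1)
        for name in trace_samples
    }
```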
This raises the question of what to do with diagnostics when we append new samples to the trace. If we decide to keep track of them in the trace, we have to think about how to handle them beyond divergences: do we manage their values at the execution level, or in the trace?
I tend to lean towards the execution level: why should the trace be anything more than a data store, and also manage calculations? The generate executor can easily keep track of the state of the algorithms used to compute diagnostics; for sample we would have to add these states to the class's state.
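A sketch of the execution-level option, where a hypothetical generator-style executor owns the running diagnostic state and the trace only stores what it yields:

```python
def generate(kernel, rng_keys, initial_state):
    """Hypothetical generator executor: yields samples and keeps diagnostic state itself."""
    state = initial_state
    num_divergences = 0
    for key in rng_keys:
        state, info = kernel(key, state)
        num_divergences += int(info.is_divergent)
        # The trace receives plain values; the executor owns the diagnostics' running state.
        yield state.position, {"divergences": num_divergences}
```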
Opening this to have a discussion with myself about sampling UX.