Strategy for adding more model parameters within the blocked Gibbs Sampling framework by adding more blocks #71
It should be noted that each cycle of the three-block procedure would take ~1.5x longer than the existing two-block procedure (three conditional updates per cycle instead of two), but the Metropolis-Hastings convergence time would be superior in the three-block case. It should also be noted that the three-block procedure would be roughly N_walkers/3x faster than the emcee approach, since each Gibbs cycle costs about 3 likelihood evaluations versus N_walkers evaluations per emcee step.
I'm all for new implementations of the sampling procedure, especially if they can be done in a modular fashion. I'm a little unclear on how breaking the additional parameters (i.e., spot coverage fraction, T_eff_2, etc.) into a separate Gibbs step will help with the correlation. Since these parameters will still be correlated with the parameters in step #2, wouldn't it be faster to tune the M-H jump in step 2 to the correlations of the space? As you point out, this is in general a pain to do, since it requires first burning in some runs using a guess at the correlation, and then restarting the chain with a (noisy) estimate of the correlation made from the previous samples. Are you thinking that the third Gibbs step would ameliorate this issue?
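For concreteness, the tune-then-restart procedure described above looks roughly like this (a toy sketch against a stand-in correlated Gaussian target; none of these names come from the actual codebase):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the correlated stellar-parameter posterior: a 2-D Gaussian
# with strong correlation between the two parameters.
target_cov = np.array([[1.0, 0.95], [0.95, 1.0]])
target_prec = np.linalg.inv(target_cov)

def log_post(x):
    return -0.5 * x @ target_prec @ x

def run_mh(x0, prop_cov, n_steps):
    """Random-walk Metropolis-Hastings with a fixed Gaussian proposal."""
    x, chain, n_acc = x0, [], 0
    for _ in range(n_steps):
        prop = rng.multivariate_normal(x, prop_cov)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x, n_acc = prop, n_acc + 1
        chain.append(x)
    return np.array(chain), n_acc / n_steps

# Pass 1: burn in with a naive diagonal proposal (acceptance suffers when
# the target is strongly correlated).
burn, acc_naive = run_mh(np.zeros(2), 0.1 * np.eye(2), 2000)

# Pass 2: restart with the proposal shaped to the (noisy) sample covariance
# of the burn-in, scaled by the usual 2.38^2 / ndim rule of thumb.
prop_cov = (2.38**2 / 2) * np.cov(burn, rowvar=False)
chain, acc_tuned = run_mh(burn[-1], prop_cov, 2000)
print(f"acceptance fraction: naive={acc_naive:.2f}, tuned={acc_tuned:.2f}")
```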
I assumed so based on watching the Iain Murray video segment in which he introduces Gibbs sampling as a way to ameliorate correlated samples. It's also mentioned in this Hogg blog post: http://hoggresearch.blogspot.com/2012/12/emcee-vs-gibbs.html

You're right that tuning the M-H sampler should work, and it does for sampling normal single stars. The problem only arises in practice for starspot models, where I observed acceptance fractions <<1%. I suppose iterating on the tuning parameters would eventually work, but again, in practice this was a pain and involved lots of human intervention. One alternative would be to make an analytic affine transformation of variables: we know the logOmegas will correlate in a relatively predictable way, for example. My hope was that tweaking the Gibbs sampler would avoid this step, so that adding extensions to the model would be relatively effortless for the user.
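The affine-transformation alternative would amount to something like this (a minimal sketch, assuming the covariance of the correlated parameters can be predicted analytically; `predicted_cov` and `mu` are illustrative placeholders):

```python
import numpy as np

# If the correlation of (say) the logOmega-like parameters is predictable,
# we can whiten them analytically and run M-H in the decorrelated coordinates.
predicted_cov = np.array([[1.0, 0.9], [0.9, 1.0]])  # assumed known in advance
mu = np.zeros(2)                                    # rough center of the posterior
L = np.linalg.cholesky(predicted_cov)

def to_whitened(x):
    """Map correlated parameters x to whitened coordinates u = L^-1 (x - mu)."""
    return np.linalg.solve(L, x - mu)

def from_whitened(u):
    """Inverse map; the Jacobian is constant, so M-H acceptance ratios
    computed in u-space are unaffected."""
    return mu + L @ u

# An isotropic proposal in u-space corresponds to a correlation-matched
# proposal in x-space, so no per-run tuning of the jump shape is needed.
```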
I had an idea while watching a video on Gibbs sampling (the bit beginning at roughly 1h09m): maybe we can adapt the Gibbs sampler to deal with correlated parameters after all. I'm not sure how to implement it, but the main procedure would be something like this (see the sketch after this list):

1. Keep the existing block 1 update as it is, conditional on everything else.
2. Keep the existing block 2 update (the current stellar parameters) as it is, conditional on everything else.
3. Add a third block that updates only the new model parameters (spot coverage fraction, T_eff_2, etc.) with its own M-H step, conditional on everything else, and cycle through the three blocks in turn.
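In code, the cycle might look something like this (a minimal sketch with placeholder conditional posteriors and hand-set proposal covariances; nothing here is the existing sampler's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_update(block, log_cond_post, prop_cov):
    """One Metropolis-Hastings update of a single block; the other blocks are
    held fixed inside log_cond_post (a closure over the current state)."""
    prop = rng.multivariate_normal(block, prop_cov)
    if np.log(rng.uniform()) < log_cond_post(prop) - log_cond_post(block):
        return prop
    return block

# Placeholder conditional log-posteriors; the real versions would evaluate the
# full spectral likelihood with the other two blocks held at their current values.
def log_p_nuisance(x): return -0.5 * x @ x
def log_p_stellar(x):  return -0.5 * x @ x
def log_p_new(x):      return -0.5 * x @ x

nuisance = np.zeros(4)  # block 1: existing nuisance parameters
stellar = np.zeros(5)   # block 2: existing stellar parameters
new = np.zeros(2)       # block 3: e.g., spot coverage fraction, T_eff_2

for _ in range(5000):
    nuisance = mh_update(nuisance, log_p_nuisance, 0.1 * np.eye(4))
    stellar = mh_update(stellar, log_p_stellar, 0.1 * np.eye(5))
    new = mh_update(new, log_p_new, 0.1 * np.eye(2))
```

The appeal is that the block-3 proposal only has to be tuned in the low-dimensional space of the new parameters, rather than jointly with everything already in block 2.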
My guess is that this three-level blocked Gibbs sampler would outperform a two-level Gibbs with the new stellar parameters included in block 2.
For example, I witnessed poor convergence when I attempted to fit a mixture model with ~8 stellar parameters (3 of which were strongly correlated) in block 2, as discussed in detail in Issue #35. That tension led me to a major departure from the Gibbs sampler, in which I run `emcee` to sample all 14 stellar + nuisance parameters simultaneously, but with the major limitation that I had to chunk the data by spectral order, thereby deriving multiple independent sets of stellar posteriors for the ~50+ spectral orders. The extension to the Gibbs framework described here, if it can be implemented, would return us to the much better situation of having a single posterior that is consistent with all the data. This seems obvious in hindsight, so it must be a good idea, right?
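For reference, that per-order workaround is roughly the following (the 50-order loop and `log_prob_for_order` are illustrative stand-ins for the real model code):

```python
import numpy as np
import emcee

ndim, nwalkers, nsteps = 14, 40, 5000

def log_prob_for_order(theta, order):
    """Stand-in for the log-posterior of all 14 stellar + nuisance parameters,
    evaluated against the data of a single spectral order."""
    return -0.5 * np.sum(theta**2)

posteriors = {}
for order in range(50):  # one fully independent run per spectral order
    p0 = 1e-3 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_for_order, args=(order,))
    sampler.run_mcmc(p0, nsteps)
    posteriors[order] = sampler.get_chain(discard=1000, flat=True)

# Result: ~50 separate stellar posteriors (one per order) instead of a single
# posterior consistent with all the data; this is the limitation noted above.
```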