
Error while running multinest_marginals_fancy.py #260

Open
SreelakshmiChakyar opened this issue Dec 4, 2024 · 5 comments

Comments

@SreelakshmiChakyar

Hi,
I am doing parameter estimation with PyMultiNest. We have a 4-dimensional, degenerate, non-linear parameter space and are using a uniform prior. In some cases, running multinest_marginals_fancy.py reports the error "ValueError: Weights do not sum to 1", even though all output files are generated correctly. In one such case, the posterior is very narrow, with the standard deviation approximately 10^-4 times the mean. Are these two issues related? It would be great if you could let us know how this can be resolved.
Thanks in advance.

@JohannesBuchner
Owner

Yes, probably a rounding issue when normalising the weight probabilities to sum to 1. The code of multinest_marginals_fancy.py is largely copied over from the dynesty project's visualisation. You could try using getdist or corner (see multinest_marginals_corner.py) or any other plotting library and feeding it the posterior samples.
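A minimal sketch of the workaround, assuming the weights are the sample probabilities from a MultiNest-style output file (the synthetic array below is a stand-in): renormalising the weights immediately before any resampling or plotting call protects against the tight sum-to-one tolerance that triggers this ValueError.

```python
import numpy as np

# Hypothetical weights; in practice these would be loaded from the
# MultiNest output, e.g. the first column of chains/1-.txt.
weights = np.array([0.1, 0.2, 0.3, 0.4000001])

# Rounding during file I/O can push the sum slightly off 1, which the
# dynesty-style plotting code rejects. Renormalise to restore the invariant.
weights = weights / weights.sum()

# Resampling by weight now works without the "do not sum to 1" complaint.
rng = np.random.default_rng(42)
idx = rng.choice(len(weights), size=5, p=weights)
print(weights.sum())
```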

@SreelakshmiChakyar
Author

Thanks, when we used the multinest_marginals_corner.py script, we obtained the plots.

Regarding the second issue, the narrow posterior (sigma/mean ~ 1e-4) we get for one of the data sets: we are concerned about the reliability of the result.

We are wondering whether it could be a numerical artifact, a consequence of limitations in our model, or some other issue. We used broad uniform priors for all 4 parameters, yet this happens for all of them with this particular data set. It would be great to have some insights on this.
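A quick sanity check on the reported narrowness can be done directly on equally weighted posterior samples; the samples below are synthetic stand-ins for what would come from the MultiNest post_equal_weights output file.

```python
import numpy as np

# Hypothetical equally weighted posterior samples; in practice:
# samples = np.loadtxt("chains/1-post_equal_weights.dat")[:, :-1]
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=1e-4, size=(1000, 4))

# Compute sigma/mean per parameter to quantify how narrow the posterior is.
mean = samples.mean(axis=0)
std = samples.std(axis=0)
print("sigma/mean per parameter:", std / np.abs(mean))
```

A ratio of ~1e-4 across all parameters, as described above, is what this check would flag for closer inspection.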

@JohannesBuchner
Owner

To debug, you can try evaluating the likelihood at your best-fit point, and then evaluate a sequence of points along a line moving away from it. You can then check within your likelihood (with prints, visualisations) whether your likelihood really should fall off that steeply.
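The suggested line scan can be sketched as follows; the sharply peaked Gaussian log-likelihood, the best-fit point, and the scan range are all placeholders to be replaced with your own model and values.

```python
import numpy as np

best = np.array([1.0, 2.0, 0.5, 3.0])        # hypothetical best-fit point
direction = np.array([1.0, 0.0, 0.0, 0.0])   # scan along the first parameter

def loglike(theta):
    # Placeholder log-likelihood: a Gaussian with sigma = 1e-4, mimicking
    # the narrow posterior reported above. Replace with your own model.
    sigma = 1e-4
    return -0.5 * np.sum(((theta - best) / sigma) ** 2)

# Evaluate a sequence of points moving away from the best fit and print
# how quickly the log-likelihood falls off.
for step in np.linspace(0.0, 5e-4, 6):
    print(f"step={step:.1e}  loglike={loglike(best + step * direction):.3f}")
```

If the printed values drop by many units of log-likelihood over a tiny step, that steep falloff is exactly what produces a sigma/mean ~ 1e-4 posterior, and the question becomes whether the data truly constrain the model that tightly.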

@SreelakshmiChakyar
Author

Thank you. We checked the variation of the likelihood and found it to change drastically. But such results from Bayesian inference are not commonly seen in the literature. Can we conclude that the results are reliable?

@JohannesBuchner
Owner

Take some time to think and carefully check whether the likelihood variation is reasonable by comparing the data and model, or whether it is a bad fit or a numerical artifact.
