
Why no binomial distribution in the Euro Problem #14

Open
heneryville opened this issue Jan 29, 2018 · 3 comments

Comments

@heneryville

In the Euro problem, when calculating the likelihood of the entire set at once, it seems like this should use the binomial distribution. The binomial distribution gives the probability of seeing K successes in N draws when the per-draw probability is P, and it seems like that's exactly what the likelihood should be, with N being tails + heads, K being heads, and P being x.

How does this likelihood function differ from a binomial?
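
For concreteness, here is a minimal grid sketch (my own, not the book's Suite-based code) showing that the binomial pmf and the book's `x**heads * (1 - x)**tails` likelihood give the same posterior after normalization, since they differ only by the constant factor C(250, 140):

```python
# A minimal grid sketch (not the book's Suite-based code) comparing the book's
# likelihood x**heads * (1 - x)**tails with the full binomial pmf, using the
# Euro problem's counts: 140 heads in 250 spins.
import numpy as np
from scipy.stats import binom

heads, tails = 140, 110
n = heads + tails
xs = np.linspace(0, 1, 101)        # hypotheses: P(heads) = 0.00, 0.01, ..., 1.00
prior = np.ones_like(xs)           # uniform prior

like_book = xs**heads * (1 - xs)**tails    # likelihood as used in the book
like_binom = binom.pmf(heads, n, xs)       # binomial likelihood, includes C(n, k)

post_book = prior * like_book
post_book /= post_book.sum()

post_binom = prior * like_binom
post_binom /= post_binom.sum()

print(np.allclose(post_book, post_binom))  # True: they differ only by a constant factor
```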

@AllenDowney
Owner

AllenDowney commented Jan 29, 2018 via email

@yongduek

I also had the same question, and concluded that:

  1. The binomial distribution must be used when calculating the likelihood of the 250 trials.
  2. The likelihood for each hypothesis $x$ (1 out of 101) contains the same binomial coefficient, $\binom{250}{140}$.
  3. The binomial coefficient cancels when the posterior distribution is normalized (spelled out below).
  4. So the binomial coefficient does not have to be included, as far as the posterior is concerned.
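
Spelling out the cancellation in item 3 for the Euro problem's counts (140 heads, 110 tails):

$$
p(x \mid D) \;=\; \frac{\binom{250}{140} x^{140} (1-x)^{110} \, p(x)}{\sum_{x'} \binom{250}{140} x'^{140} (1-x')^{110} \, p(x')} \;=\; \frac{x^{140} (1-x)^{110} \, p(x)}{\sum_{x'} x'^{140} (1-x')^{110} \, p(x')}
$$

since $\binom{250}{140}$ is the same for every hypothesis and factors out of both the numerator and the denominator.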

So my question is: where is the explanation of the evidence? Is it postponed to the next chapter?

Compared to the explanation in MacKay's book, the approach in this book is much clearer and simpler for me to understand. Many thanks again.

@ricardoV94

@yongduek I was going through the same reasoning as you, and a post on StackExchange led me to the formal explanation of this 'puzzle'. It follows from the likelihood principle that Bayesian inference about the parameter p will be the same regardless of which likelihood is used in this case (Bernoulli, binomial, or negative binomial). The same argument is used in MacKay's book when he criticizes how frequentist inference changes when you assume different likelihood functions for the same data.
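
For anyone who wants to check this numerically, here is a small sketch (my own, using the same 140-heads/110-tails setup as above) showing that per-flip Bernoulli updates and a single binomial update on the aggregated counts yield the same posterior:

```python
# A small sketch of the likelihood-principle point (my own, not from the book):
# flip-by-flip Bernoulli updates and a single binomial update on the aggregated
# counts produce the same posterior over p.
import numpy as np
from scipy.stats import binom

xs = np.linspace(0, 1, 101)        # grid of hypotheses for p
heads, tails = 140, 110

# Sequential Bernoulli updates: multiply by x for each head, (1 - x) for each tail.
post_seq = np.ones_like(xs)        # uniform prior
for outcome in ['H'] * heads + ['T'] * tails:
    post_seq *= xs if outcome == 'H' else (1 - xs)
    post_seq /= post_seq.sum()     # renormalize after each flip

# One binomial update on the totals.
post_binom = binom.pmf(heads, heads + tails, xs)
post_binom /= post_binom.sum()

print(np.allclose(post_seq, post_binom))   # True: the two inferences agree
```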
