TODO: Add intercepts to the GLM examples #100
@cscherrer How would we go about this? You mentioned that we usually use a different prior for the intercept?
Something like:

```julia
m = @model X, s, t begin
    p = size(X, 2)              # number of features
    α ~ InterceptPrior          # intercept
    β ~ Normal(0, s) |> iid(p)  # coefficients
    σ ~ HalfNormal(t)           # dispersion
    η = α .+ X * β              # linear predictor
    μ = η                       # identity link: μ = g⁻¹(η) = η
    y ~ For(eachindex(μ)) do j
        Normal(μ[j], σ)         # yⱼ ~ Normal(mean = μⱼ, std = σ)
    end
end
```
Any recommendations for what priors we should use in the linear regression and the multinomial logistic regression?
Yeah, that's trickier. I've sometimes used a broader prior for the intercept, but for general-purpose use there's some danger the result might not be identifiable. Maybe Gelman has good suggestions for a default?
Maybe http://www.stat.columbia.edu/~gelman/research/published/priors11.pdf has some ideas.
We can spend some time thinking about this. The models are actually pretty good already |
What if we use Normals for all the priors, but with a bigger variance for the intercept? So e.g. the intercept gets Normal(0, 5) and the coefficients get Normal(0, 1). Would the result be identifiable in that case, since all of the priors are Normal?
I'll admit, I don't particularly understand what conditions need to be met for the result to be identifiable. |
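One way to build intuition here: with proper priors on every parameter, the posterior is proper even when the likelihood alone leaves parameters unidentified. Below is a hedged sanity-check sketch, in Python/NumPy rather than Soss (for a closed-form conjugate computation with known noise scale, which is an assumption not in the thread). It uses a design matrix with a duplicated feature column, so the MLE is not identifiable, yet the proposed Normal(0, 5) intercept and Normal(0, 1) coefficient priors still yield a well-defined posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix with an intercept column and a *duplicated* feature,
# so X'X is singular and the maximum-likelihood estimate is not identifiable.
n = 50
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x, x])  # intercept, feature, duplicate

sigma = 1.0  # assume the noise scale is known, to keep the posterior closed-form
y = 1.0 + 2.0 * x + rng.normal(scale=sigma, size=n)

# Proposed priors: Normal(0, 5) on the intercept, Normal(0, 1) on each coefficient.
prior_sd = np.array([5.0, 1.0, 1.0])
prior_precision = np.diag(1.0 / prior_sd**2)

# Conjugate Gaussian posterior: precision = X'X / σ² + Λ₀
post_precision = X.T @ X / sigma**2 + prior_precision
post_cov = np.linalg.inv(post_precision)
post_mean = post_cov @ (X.T @ y / sigma**2)

# X'X is rank-deficient (rank 2 out of 3)...
print(np.linalg.matrix_rank(X.T @ X))                  # 2
# ...but the posterior precision is positive definite, so the posterior is proper.
print(np.all(np.linalg.eigvalsh(post_precision) > 0))  # True
print(post_mean.round(2))
```

Note the caveat: "proper posterior" is weaker than classical identifiability. Here the two duplicated coefficients are only pinned down jointly (their sum is informed by the data; the split between them comes from the symmetric priors), which is exactly the kind of danger mentioned above.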