From cfd727bf98de0b39a7c49588bed3924245fa6417 Mon Sep 17 00:00:00 2001
From: Ben
Date: Sun, 8 Mar 2020 13:37:03 -0400
Subject: [PATCH 1/7] Added proposal for adding Hessian of log density in optimizing output for cmdstan

---
 designs/0080-hessian_optimize_cmdstan.md | 78 ++++++++++++++++++++++++
 1 file changed, 78 insertions(+)
 create mode 100644 designs/0080-hessian_optimize_cmdstan.md

diff --git a/designs/0080-hessian_optimize_cmdstan.md b/designs/0080-hessian_optimize_cmdstan.md
new file mode 100644
index 0000000..2130f07
--- /dev/null
+++ b/designs/0080-hessian_optimize_cmdstan.md
@@ -0,0 +1,78 @@
+- Feature Name: Allow Cmdstan to optionally print out Hessian of posterior when optimization finishes
+- Start Date: 2020-03-08
+- RFC PR:
+- Stan Issue:
+
+# Summary
+[summary]: #summary
+
+When computing a MAP estimate, the Hessian of the log density can be used to construct a normal approximation to the posterior.
+
+# Motivation
+[motivation]: #motivation
+
+I have a Stan model that is too slow to practically sample all the time. Because optimization seems to give reasonable results, it would be nice to have the normal approximation to the posterior to give some sense of the uncertainty in the problem as well.
+
+Rstan already supports this via the 'hessian' argument to 'optimizing'.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+I would like to add an argument 'hessian' to the cmdstan interface. So if the model 'diamonds' can be optimized with the command:
+
+```
+./diamonds optimize data file=diamonds.data.R
+```
+
+then the version that also computes the Hessian would be:
+```
+./diamonds optimize hessian=1 data file=diamonds.data.R
+```
+
+The option takes two values, 0 (the default) and 1. 0 means don't compute the Hessian. 1 means do compute the Hessian.
+
+Optimizing output currently looks like:
+```
+# stan_version_major = 2
+...
+# refresh = 100 (Default)
+lp__,b.1,b.2
+3427.64,7.66366,5.33466
+```
+
+I would like to print the Hessian of the log density with respect to the model parameters in the constrained space (not the unconstrained space), similarly to how the inverse metric is currently printed at the end of warmup:
+
+```
+# stan_version_major = 2
+...
+# refresh = 100 (Default)
+lp__,b.1,b.2
+3427.64,7.66366,5.33466
+# Hessian of log density:
+# 0.0813676, 0.014598
+# 0.014598, 0.112342
+```
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+As far as computing the Hessian goes, because higher order autodiff doesn't work with the ODE solvers, the 1D integrator, and such, I think we should compute the Hessian with finite differences, and we can use the same finite difference implementation that the test framework does (https://github.com/stan-dev/math/blob/develop/stan/math/prim/functor/finite_diff_hessian_auto.hpp).
+
+# Drawbacks
+[drawbacks]: #drawbacks
+
+Printing out the matrix in the comments like this might seem rather crude, but it is the current state of the output IO.
+
+# Rationale and alternatives
+[rationale-and-alternatives]: #rationale-and-alternatives
+
+Another design would be to print out the Hessian in unconstrained space. This is less interpretable than a constrained space Hessian, and it would be non-trivial to convert it into the one in constrained space, so I think the constrained space Hessian should be preferred. We could possibly add another value to the 'hessian' option (0, 1, 2) to allow for this functionality.
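For illustration, here is roughly how a user could turn the printed constrained-space Hessian into the normal approximation described in the summary. This is a sketch, not part of the proposal; it assumes the `#`-prefixed comment lines have already been parsed out of the CSV, and it uses the sign-corrected Hessian values from the revised example later in this series (the Hessian of the log density should be negative definite at a mode).

```python
import numpy as np

mode = np.array([7.66366, 5.33466])        # optimum of b.1, b.2 from the output above
H = np.array([[-0.0813676, -0.014598],     # Hessian of the log density at the mode
              [-0.014598, -0.112342]])     # (negative definite; signs as corrected later)

cov = np.linalg.inv(-H)                    # approximate posterior covariance
rng = np.random.default_rng(1)
draws = rng.multivariate_normal(mode, cov, size=1000)
print(draws.mean(axis=0))
print(draws.std(axis=0))
```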
+ +Rstan actually provides samples from the normal approximation to go along with the normal approximation to the posterior. I think we should not do this because it's not always true that the Hessian is numerically positive definite at the mode. This could be because the MAP estimate has gone to infinity or something (estimating the mode of 8-schools), or just something funny with the numerics. Either way it is quite possible it happens during model development and it would be nice to avoid producing too many errors. + +It is fairly straightforward in R and Python to sample from a multivariate normal, and so I think we can reasonably leave this to the user. + +# Prior art +[prior-art]: #prior-art + +Rstan does a version of this already. From 327710be950f390c0aef6bf7b3a12412ff668854 Mon Sep 17 00:00:00 2001 From: Ben Date: Sun, 8 Mar 2020 13:46:10 -0400 Subject: [PATCH 2/7] Fixed some sign stuff, specified where normal approximation comes from --- designs/0080-hessian_optimize_cmdstan.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/designs/0080-hessian_optimize_cmdstan.md b/designs/0080-hessian_optimize_cmdstan.md index 2130f07..5c0c8af 100644 --- a/designs/0080-hessian_optimize_cmdstan.md +++ b/designs/0080-hessian_optimize_cmdstan.md @@ -13,6 +13,8 @@ When computing a MAP estimate, the Hessian of the log density can be used to con I have a Stan model that is too slow to practically sample all the time. Because optimization seem to give reasonable results, it would be nice to have the normal approximation to the posterior to give some sense of the uncertainty in the problem as well. +An approximate posterior covariance comes from computing the inverse of the Hessian of log density. + Rstan already supports this via the 'hessian' argument to 'optimizing'. # Guide-level explanation @@ -49,8 +51,8 @@ I would like to print the Hessian of the log density with respect to the model p lp__,b.1,b.2 3427.64,7.66366,5.33466 # Hessian of log density: -# 0.0813676, 0.014598 -# 0.014598, 0.112342 +# -0.0813676, -0.014598 +# -0.014598, -0.112342 ``` # Reference-level explanation @@ -68,7 +70,7 @@ Printing out the matrix in the comments like this might seem rather crude, but i Another design would be to print out the Hessian in unconstrained space. This is less interpretable thana constrained space and it would be non-trivial to compute this Hessian into the one in constrained space, so I think the constrained space Hessian should be preferred. We could possibly add another option to Hessian (0, 1, 2) to allow for this functionality. -Rstan actually provides samples from the normal approximation to go along with the normal approximation to the posterior. I think we should not do this because it's not always true that the Hessian is numerically positive definite at the mode. This could be because the MAP estimate has gone to infinity or something (estimating the mode of 8-schools), or just something funny with the numerics. Either way it is quite possible it happens during model development and it would be nice to avoid producing too many errors. +Rstan actually provides samples from the normal approximation to go along with the normal approximation to the posterior. I think we should not do this because it's not always true that the Hessian is numerically negative definite at the mode. This could be because the MAP estimate has gone to infinity or something (estimating the mode of 8-schools), or just something funny with the numerics. 
Either way, it is quite possible that this happens during model development, and it would be nice to avoid producing too many errors.
 
 It is fairly straightforward in R and Python to sample from a multivariate normal, and so I think we can reasonably leave this to the user.

From 2e29c3e8e1516f136638e1553ec752d0c98cfd60 Mon Sep 17 00:00:00 2001
From: Ben
Date: Sun, 15 Mar 2020 16:11:43 -0400
Subject: [PATCH 3/7] Updated design doc to reflect prototype implementation and pull request number

---
 designs/0016-hessian_optimize_cmdstan.md | 86 ++++++++++++++++++++++++
 designs/0080-hessian_optimize_cmdstan.md | 80 ----------------------
 2 files changed, 86 insertions(+), 80 deletions(-)
 create mode 100644 designs/0016-hessian_optimize_cmdstan.md
 delete mode 100644 designs/0080-hessian_optimize_cmdstan.md

diff --git a/designs/0016-hessian_optimize_cmdstan.md b/designs/0016-hessian_optimize_cmdstan.md
new file mode 100644
index 0000000..bbb9142
--- /dev/null
+++ b/designs/0016-hessian_optimize_cmdstan.md
@@ -0,0 +1,86 @@
+- Feature Name: Allow Cmdstan to optionally print out draws from Laplace approximation of posterior when optimizing
+- Start Date: 2020-03-08
+- RFC PR: 16
+- Stan Issue:
+
+# Summary
+[summary]: #summary
+
+When computing a MAP estimate, the Hessian of the log density can be used to construct a normal approximation to the posterior.
+
+# Motivation
+[motivation]: #motivation
+
+I have a Stan model that is too slow to practically sample all the time. Because optimization seems to give reasonable results, it would be nice to have the normal approximation to the posterior to give some sense of the uncertainty in the problem as well.
+
+An approximate posterior covariance comes from computing the inverse of the Hessian of the negative log density.
+
+Rstan already supports this via the 'hessian' argument to 'optimizing'.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+This adds two arguments to the cmdstan interface.
+
+```laplace_draws``` - The number of draws to take from the posterior approximation. By default, this is zero, and no Laplace approximation is done
+
+```laplace_diag_shift``` - A value to add to the diagonal of the hessian approximation to fix small non-singularities (defaulting to zero)
+
+The output is printed after the optimum.
+
+A model can be called by:
+```
+./model optimize laplace_draws=100 data file=data.dat
+```
+
+or with the diagonal shift:
+```
+./model optimize laplace_draws=100 laplace_diag_shift=1e-10 data file=data.dat
+```
+
+Optimizing output currently looks like:
+```
+# stan_version_major = 2
+...
+# refresh = 100 (Default)
+lp__,b.1,b.2
+3427.64,7.66366,5.33466
+```
+
+The new output would look like:
+
+```
+# stan_version_major = 2
+...
+# refresh = 100 (Default)
+lp__,b.1,b.2
+3427.64,7.66366,5.33466
+# Draws from Laplace approximation:
+b.1,b.2
+7.66364,5.33463
+7.66367,5.33462
+```
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+As far as computing the Hessian goes, because higher order autodiff doesn't work with the ODE solvers, the 1D integrator, and such, I think we should compute the Hessian with finite differences, and we can use the same finite difference implementation that the test framework does (https://github.com/stan-dev/math/blob/develop/stan/math/prim/functor/finite_diff_hessian_auto.hpp).
+
+# Drawbacks
+[drawbacks]: #drawbacks
+
+Providing draws instead of the Laplace approximation itself is rather inefficient, but it is the easiest thing to code.
+ +We also have to deal with possible singular Hessians. This is why I also added the laplace_diag_shift to overcome these. They'll probably be quite common, especially with the Hessians computed with finite differences. + +# Rationale and alternatives +[rationale-and-alternatives]: #rationale-and-alternatives + +Another design would be the print the Hessian on the unconstrained space and let users handle the sampling and the parameter transformation. The issue here is there is no good way for users to do these parameter transformations outside of certain interfaces (at least Rstan, maybe PyStan). + +Another design would be to print a Hessian on the constrained space and let users handle the sampling. In this case users would also be expected to handle the constraints, and I don't know how that would work practically rejection sampling maybe?) + +# Prior art +[prior-art]: #prior-art + +Rstan does a version of this already. diff --git a/designs/0080-hessian_optimize_cmdstan.md b/designs/0080-hessian_optimize_cmdstan.md deleted file mode 100644 index 5c0c8af..0000000 --- a/designs/0080-hessian_optimize_cmdstan.md +++ /dev/null @@ -1,80 +0,0 @@ -- Feature Name: Allow Cmdstan to optionally print out Hessian of posterior when optimization finishes -- Start Date: 2020-03-08 -- RFC PR: -- Stan Issue: - -# Summary -[summary]: #summary - -When computing a MAP estimate, the Hessian of the log density can be used to construct a normal approximation to the posterior. - -# Motivation -[motivation]: #motivation - -I have a Stan model that is too slow to practically sample all the time. Because optimization seem to give reasonable results, it would be nice to have the normal approximation to the posterior to give some sense of the uncertainty in the problem as well. - -An approximate posterior covariance comes from computing the inverse of the Hessian of log density. - -Rstan already supports this via the 'hessian' argument to 'optimizing'. - -# Guide-level explanation -[guide-level-explanation]: #guide-level-explanation - -I would like to add an argument to the cmdstan interface 'hessian'. So if the model 'diamonds' can be optimized with the command: - -``` -./diamonds optimize data file=diamonds.data.R -``` - -I would like to make the version that also computes the hessian: -``` -./diamonds optimize hessian=1 data file=diamonds.data.R -``` - -The option takes two values, 0 (the default) and 1. 0 means don't compute the Hessian. 1 means do compute the Hessian. - -Optimizing output currently looks like: -``` -# stan_version_major = 2 -... -# refresh = 100 (Default) -lp__,b.1,b.2 -3427.64,7.66366,5.33466 -``` - -I would like to print the Hessian of the log density with respect to the model parameters in the constrained space (not the unconstrained space) similarly to how the inverse metric is currently printed at the end of warmup: - -``` -# stan_version_major = 2 -... 
-# refresh = 100 (Default) -lp__,b.1,b.2 -3427.64,7.66366,5.33466 -# Hessian of log density: -# -0.0813676, -0.014598 -# -0.014598, -0.112342 -``` - -# Reference-level explanation -[reference-level-explanation]: #reference-level-explanation - -As far as computing the Hessian, because the higher order autodiff doesn't work with the ODE solvers 1D integrator and such, I think we should compute the Hessian with finite differences, and we use the sample finite difference implementation that the test framework does (https://github.com/stan-dev/math/blob/develop/stan/math/prim/functor/finite_diff_hessian_auto.hpp) - -# Drawbacks -[drawbacks]: #drawbacks - -Printing out the matrix in the comments like this might seem rather crude, but it is the state of the IO - -# Rationale and alternatives -[rationale-and-alternatives]: #rationale-and-alternatives - -Another design would be to print out the Hessian in unconstrained space. This is less interpretable thana constrained space and it would be non-trivial to compute this Hessian into the one in constrained space, so I think the constrained space Hessian should be preferred. We could possibly add another option to Hessian (0, 1, 2) to allow for this functionality. - -Rstan actually provides samples from the normal approximation to go along with the normal approximation to the posterior. I think we should not do this because it's not always true that the Hessian is numerically negative definite at the mode. This could be because the MAP estimate has gone to infinity or something (estimating the mode of 8-schools), or just something funny with the numerics. Either way it is quite possible it happens during model development and it would be nice to avoid producing too many errors. - -It is fairly straightforward in R and Python to sample from a multivariate normal, and so I think we can reasonably leave this to the user. - -# Prior art -[prior-art]: #prior-art - -Rstan does a version of this already. From 39eaa82e729745128c22845c94512b1bab6c586b Mon Sep 17 00:00:00 2001 From: Ben Date: Tue, 17 Mar 2020 15:42:39 -0400 Subject: [PATCH 4/7] Changed laplace_diag_shift to laplace_add_diag and added more info about how constrained samples are generated as response to reviews --- designs/0016-hessian_optimize_cmdstan.md | 29 +++++++++++++++++------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/designs/0016-hessian_optimize_cmdstan.md b/designs/0016-hessian_optimize_cmdstan.md index bbb9142..b969884 100644 --- a/designs/0016-hessian_optimize_cmdstan.md +++ b/designs/0016-hessian_optimize_cmdstan.md @@ -13,7 +13,16 @@ When computing a MAP estimate, the Hessian of the log density can be used to con I have a Stan model that is too slow to practically sample all the time. Because optimization seem to give reasonable results, it would be nice to have the normal approximation to the posterior to give some sense of the uncertainty in the problem as well. -An approximate posterior covariance comes from computing the inverse of the Hessian of the negative log density. +It is standard to compute a normal approximation to the posterior covariance comes from the inverse of the Hessian of the negative log density. 
+ +If the MAP estimate is ```u```, and the Hessian of the unnormalized log density at u is ```H```, then a posterior draw on the constrained scale is: + +``` +unconstrained_sample = multivariate_normal_rng(mean = u, cov = -inverse(H)) +constrained_sample = constrain(unconstrained_sample) +``` + +We can output unnormalized log densities of the actual model and the approximate model to compute importance sampling diagnostics and estimates. Rstan already supports this via the 'hessian' argument to 'optimizing'. @@ -24,7 +33,7 @@ This adds two arguments to the cmdstan interface. ```laplace_draws``` - The number of draws to take from the posterior approximation. By default, this is zero, and no laplace approximation is done -```laplace_diag_shift``` - A value to add to the diagonal of the hessian approximation to fix small non-singularities (defaulting to zero) +```laplace_add_diag``` - A value to add to the diagonal of the hessian approximation to fix small non-singularities (defaulting to zero) The output is printed after the optimimum. @@ -33,9 +42,9 @@ A model can be called by: ./model optimize laplace_draws=100 data file=data.dat ``` -or with the diagonal shift: +or with the diagonal: ``` -./model optimize laplace_draws=100 laplace_diag_shift=1e-10 data file=data.dat +./model optimize laplace_draws=100 laplace_add_diag=1e-10 data file=data.dat ``` Optimizing output currently looks like: @@ -56,22 +65,26 @@ The new output would look like: lp__,b.1,b.2 3427.64,7.66366,5.33466 # Draws from Laplace approximation: -b.1,b.2 -7.66364,5.33463 -7.66367,5.33462 +lp__, log_p, log_g, b.1,b.2 +0, -1, -2, 7.66364,5.33463 +0, -2, -3, 7.66367,5.33462 ``` +The lp__, log_p, log_g formatting is intended to mirror the advi output. + # Reference-level explanation [reference-level-explanation]: #reference-level-explanation As far as computing the Hessian, because the higher order autodiff doesn't work with the ODE solvers 1D integrator and such, I think we should compute the Hessian with finite differences, and we use the sample finite difference implementation that the test framework does (https://github.com/stan-dev/math/blob/develop/stan/math/prim/functor/finite_diff_hessian_auto.hpp) +Ben Goodrich points out it would be better to get this Hessian using finite differences of first order gradients. He is correct, but the fully finite differenced hessian is what is implemented in Stan math currently and so that is what I'm rolling with. + # Drawbacks [drawbacks]: #drawbacks Providing draws instead of the Laplace approximation itself is rather inefficient, but it is the easiest thing to code. -We also have to deal with possible singular Hessians. This is why I also added the laplace_diag_shift to overcome these. They'll probably be quite common, especially with the Hessians computed with finite differences. +We also have to deal with possible singular Hessians. This is why I also added the laplace_add_diag to overcome these. They'll probably be quite common, especially with the Hessians computed with finite differences. 
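As a sketch of what the `laplace_add_diag` regularization could amount to (this is an assumption about the eventual implementation, not a specification), one plausible reading is that the shift is applied to the negated Hessian before inverting it:

```python
import numpy as np

def laplace_cov(H, add_diag=0.0):
    """Covariance of the normal approximation from a (possibly near-singular)
    Hessian H of the log density at the mode, with an optional diagonal shift."""
    return np.linalg.inv(-H + add_diag * np.eye(H.shape[0]))

H = np.array([[-0.0813676, -0.014598],
              [-0.014598, -0.112342]])
print(laplace_cov(H, add_diag=1e-10))
```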
# Rationale and alternatives [rationale-and-alternatives]: #rationale-and-alternatives From cf4530462a2f2e2734f9c32760e19fab8418aca4 Mon Sep 17 00:00:00 2001 From: Ben Date: Mon, 20 Apr 2020 15:52:14 -0400 Subject: [PATCH 5/7] Rewrote Laplace approximation design doc as new algorithm + fixed the unconstrained space stuff (design-doc #16) --- designs/0016-hessian_optimize_cmdstan.md | 143 ++++++++++++++++------- 1 file changed, 102 insertions(+), 41 deletions(-) diff --git a/designs/0016-hessian_optimize_cmdstan.md b/designs/0016-hessian_optimize_cmdstan.md index b969884..b6ebd8a 100644 --- a/designs/0016-hessian_optimize_cmdstan.md +++ b/designs/0016-hessian_optimize_cmdstan.md @@ -1,4 +1,4 @@ -- Feature Name: Allow Cmdstan to optionally print out draws from Laplace approximation of posterior when optimizing +- Feature Name: Laplace approximation as a new algorithm - Start Date: 2020-03-08 - RFC PR: 16 - Stan Issue: @@ -6,94 +6,155 @@ # Summary [summary]: #summary -When computing a MAP estimate, the Hessian of the log density can be used to construct a normal approximation to the posterior. +The proposal is to add a Laplace approximation on the unconstrained space as a +form of approximate inference. # Motivation [motivation]: #motivation -I have a Stan model that is too slow to practically sample all the time. Because optimization seem to give reasonable results, it would be nice to have the normal approximation to the posterior to give some sense of the uncertainty in the problem as well. +I have a Stan model that is too slow to sample. I would like to do something +better than optimization. Laplace approximations are a pretty standard way +of doing this. -It is standard to compute a normal approximation to the posterior covariance comes from the inverse of the Hessian of the negative log density. +# Guide-level explanation +[guide-level-explanation]: #guide-level-explanation -If the MAP estimate is ```u```, and the Hessian of the unnormalized log density at u is ```H```, then a posterior draw on the constrained scale is: +The `laplace` algorithm would work by forming a Laplace approximation to the +unconstrained posterior density. + +Assuming `u` are the unconstrained variables, `c` are the constrained variables, +and `c = g(u)`, the log density sampled by Stan is: ``` -unconstrained_sample = multivariate_normal_rng(mean = u, cov = -inverse(H)) -constrained_sample = constrain(unconstrained_sample) +log(p(u)) = log(p(g(u))) + log(det(jac(g))) ``` -We can output unnormalized log densities of the actual model and the approximate model to compute importance sampling diagnostics and estimates. +In the Laplace approximation, we search for a mode (a maximum) of +```log(p(u))```. Call this `u_mode`. This is not the same optimization that is +done in the `optimizing` algorithm. That searches for a mode of `log(p(g(u)))` +(or the equation above without the `log(det(jac(g)))` term. These are not the +same optimizations. -Rstan already supports this via the 'hessian' argument to 'optimizing'. +We can form a second order Taylor expansion of `log(p(u))` around `u_mode`: -# Guide-level explanation -[guide-level-explanation]: #guide-level-explanation +``` +log(p(u)) = log(p(u_mode)) + + gradient(log(p), u_mode) * (u - umode) + + 0.5 * (u - u_mode)^T * hessian(log(p), u_mode) * (u - u_mode) + + O(||u - u_mode||^3) +``` + +where `gradient(log(p), u)` is the gradient of `log(p)` at `u` and +`hessian(log(p), u)` is the hessian of `log(p)` at `u`. 
Because the gradient
+is zero at the mode, the linear term drops out. Ignoring the third order
+terms gives us a new distribution `p_approx(u)`:
+
+```
+log(p_approx(u)) = K + 0.5 * (u - u_mode)^T * hessian(log(p), u_mode) * (u - u_mode)
+```
+
+where K is a constant to make this normalize. `u_approx` (`u` sampled from
+`p_approx(u)`) takes the distribution:
+```
+u_approx ~ N(u_mode, -(hessian(log(p), u_mode))^{-1})
+```
+
+Taking draws from `u_approx` gives us draws from our distribution on Stan's
+unconstrained space. Once constrained, these draws can be used in the same
+way that regular draws from the `sampling` algorithm are used.
 
-This adds two arguments to the cmdstan interface.
+The `laplace` algorithm would take in the same arguments as the `optimize`
+algorithm plus two additional ones:
 
-```laplace_draws``` - The number of draws to take from the posterior approximation. By default, this is zero, and no laplace approximation is done
+```num_samples``` - The number of draws to take from the posterior
+approximation. This should be greater than one (defaults to 1000).
 
-```laplace_add_diag``` - A value to add to the diagonal of the hessian approximation to fix small non-singularities (defaulting to zero)
+```add_diag``` - A value to add to the diagonal of the hessian
+approximation to fix near-singularities (defaulting to zero)
 
 The output is printed after the optimum.
 
 A model can be called by:
 ```
-./model optimize laplace_draws=100 data file=data.dat
+./model laplace num_samples=100 data file=data.dat
 ```
 
-or with the diagonal shift:
+or with the diagonal:
 ```
-./model optimize laplace_draws=100 laplace_add_diag=1e-10 data file=data.dat
+./model laplace num_samples=100 add_diag=1e-10 data file=data.dat
 ```
 
-Optimizing output currently looks like:
-```
-# stan_version_major = 2
-...
-# refresh = 100 (Default)
-lp__,b.1,b.2
-3427.64,7.66366,5.33466
-```
-
-The new output would look like:
+The output would mirror the other interfaces and print all the algorithm
+specific parameters, with two trailing underscores appended to each, followed
+by the model parameters.
+
+The three algorithm specific parameters are:
+1. ```log_p__``` - The log density of the model itself
+2. ```log_g__``` - The log density of the Laplace approximation
+3. ```rejected__``` - A boolean indicating whether it was possible to
+   evaluate the log density of the model at this parameter value
+
+For instance, the new output might look like:
 
 ```
 # stan_version_major = 2
 ...
-# refresh = 100 (Default)
-lp__,b.1,b.2
-3427.64,7.66366,5.33466
 # Draws from Laplace approximation:
-lp__, log_p, log_g, b.1,b.2
-0, -1, -2, 7.66364,5.33463
-0, -2, -3, 7.66367,5.33462
+log_p__, log_g__, rejected__, b.1,b.2
+-1, -2, 0, 7.66364,5.33463
+-2, -3, 0, 7.66367,5.33462
+-3, -4, 1, 0, 0
 ```
 
-The lp__, log_p, log_g formatting is intended to mirror the advi output.
-
 # Reference-level explanation
 [reference-level-explanation]: #reference-level-explanation
 
-As far as computing the Hessian goes, because higher order autodiff doesn't work with the ODE solvers, the 1D integrator, and such, I think we should compute the Hessian with finite differences, and we can use the same finite difference implementation that the test framework does (https://github.com/stan-dev/math/blob/develop/stan/math/prim/functor/finite_diff_hessian_auto.hpp).
+The implementation of this would borrow heavily from the optimization code. The
+difference would be that the Jacobian would be turned on for the optimization.
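To make the sampling step above concrete, here is a minimal sketch of what the algorithm would do internally, assuming the mode `u_mode` and the Hessian `H` of `log(p(u))` have already been computed. `constrain` is a stand-in for Stan's unconstrained-to-constrained transform, not a real API; the real implementation would live in the services C++.

```python
import numpy as np

def laplace_draws(u_mode, H, num_samples, constrain, rng=None):
    """Draw from N(u_mode, (-H)^{-1}) and report log_g up to the constant K."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(-H)               # requires -H to be positive definite
    dim = u_mode.shape[0]
    out = []
    for _ in range(num_samples):
        z = rng.standard_normal(dim)
        u = u_mode + np.linalg.solve(L.T, z)  # u ~ N(u_mode, (-H)^{-1})
        log_g = -0.5 * z @ z                  # approximation's log density, up to K
        out.append((log_g, constrain(u)))
    return out

# toy call with an identity "transform" just to exercise the function
print(laplace_draws(np.zeros(2), np.array([[-2.0, 0.0], [0.0, -1.0]]), 3, constrain=lambda u: u))
```

Working with the Cholesky factor of `-H` avoids forming the covariance matrix explicitly and gives the approximation's log density nearly for free.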
-Ben Goodrich points out it would be better to get this Hessian using finite differences of first order gradients. He is correct, but the fully finite differenced hessian is what is implemented in Stan math currently and so that is what I'm rolling with. +We will also need to implement a way of computing Hessians with finite +differences of gradients. Simple finite differences were not sufficiently +accurate for an example I was working on. # Drawbacks [drawbacks]: #drawbacks -Providing draws instead of the Laplace approximation itself is rather inefficient, but it is the easiest thing to code. +It is not clear to me how to handle errors evaluating the log density. + +There are a few options with various drawbacks: + +1. Re-sample a new point in the unconstrained space until one is accepted -We also have to deal with possible singular Hessians. This is why I also added the laplace_add_diag to overcome these. They'll probably be quite common, especially with the Hessians computed with finite differences. +With a poorly written model, this may never terminate + +2. Quietly reject the sample and print nothing (so it is possible that if someone +requested 200 draws that they only get 150). + +This might lead to silent errors if the user is not vigilantly checking the +lengths of their outputs (they may compute incorrect standard errors, etc). + +3. Use `rejected__` diagnostic output to indicate a sample was rejected and +print zeros where usually the parameters would be. + +If the user does not check the `rejected__` column, then they will be using a +bunch of zeros in their follow-up calculations that could mislead them. + +Similarly to divergent transitions, we can print to the output information about +how many draws were rejected in any case. + +In the earlier part of the design document I assume #3 was implemented, but I +think #2 might be better. # Rationale and alternatives [rationale-and-alternatives]: #rationale-and-alternatives -Another design would be the print the Hessian on the unconstrained space and let users handle the sampling and the parameter transformation. The issue here is there is no good way for users to do these parameter transformations outside of certain interfaces (at least Rstan, maybe PyStan). - -Another design would be to print a Hessian on the constrained space and let users handle the sampling. In this case users would also be expected to handle the constraints, and I don't know how that would work practically rejection sampling maybe?) +An alternative that seems appealing at first is printing out the mode and +hessian so that it would be possible for people to make their own posterior +draws. This is not reasonable because the conversion from unconstrained to +constrained space is not exposed in all of the interfaces. # Prior art [prior-art]: #prior-art -Rstan does a version of this already. +This is a pretty common Bayesian approximation. It just is not implemented in +Stan. 
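The reference-level explanation above calls for Hessians built from finite differences of first order gradients. A minimal sketch of that scheme, for illustration only (not the stan-math implementation); `grad` is any callable returning the gradient of the log density:

```python
import numpy as np

def fd_hessian(grad, u, eps=1e-5):
    """Central finite differences of a gradient function, then symmetrize."""
    dim = u.shape[0]
    H = np.zeros((dim, dim))
    for i in range(dim):
        step = np.zeros(dim)
        step[i] = eps
        H[i, :] = (grad(u + step) - grad(u - step)) / (2.0 * eps)
    return 0.5 * (H + H.T)

# toy check against the analytic Hessian of -0.5 * u' A u, which is -A
A = np.array([[2.0, 0.3], [0.3, 1.0]])
grad = lambda u: -A @ u
print(fd_hessian(grad, np.zeros(2)))
```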
From 914a47f1fc97f81d6a00d85b55cacb678dfed442 Mon Sep 17 00:00:00 2001 From: Ben Date: Mon, 20 Apr 2020 16:08:56 -0400 Subject: [PATCH 6/7] Minor clarifications (design-doc #16) --- designs/0016-hessian_optimize_cmdstan.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/designs/0016-hessian_optimize_cmdstan.md b/designs/0016-hessian_optimize_cmdstan.md index b6ebd8a..2706571 100644 --- a/designs/0016-hessian_optimize_cmdstan.md +++ b/designs/0016-hessian_optimize_cmdstan.md @@ -26,14 +26,14 @@ Assuming `u` are the unconstrained variables, `c` are the constrained variables, and `c = g(u)`, the log density sampled by Stan is: ``` -log(p(u)) = log(p(g(u))) + log(det(jac(g))) +log(p(u)) = log(p(g(u))) + log(det(jac(g, u))) ``` -In the Laplace approximation, we search for a mode (a maximum) of -```log(p(u))```. Call this `u_mode`. This is not the same optimization that is -done in the `optimizing` algorithm. That searches for a mode of `log(p(g(u)))` -(or the equation above without the `log(det(jac(g)))` term. These are not the -same optimizations. +where `jac(g, u)` is the Jacobian of the function `g` at `u`. In the Laplace +approximation, we search for a mode (a maximum) of ```log(p(u))```. Call this +`u_mode`. This is not the same optimization that is done in the `optimizing` +algorithm. That searches for a mode of `log(p(g(u)))` (or the equation above +without the `log(det(jac(g, u)))` term. This is different. We can form a second order Taylor expansion of `log(p(u))` around `u_mode`: From 98b5395f96bb8894bf053560d3474d906c19a15b Mon Sep 17 00:00:00 2001 From: Ben Date: Fri, 11 Sep 2020 19:41:47 -0400 Subject: [PATCH 7/7] Minor updates and rename of laplace approximation algorithm (design-doc #16) --- ...> 0016-laplace_approximation_algorithm.md} | 63 +++++++++---------- 1 file changed, 30 insertions(+), 33 deletions(-) rename designs/{0016-hessian_optimize_cmdstan.md => 0016-laplace_approximation_algorithm.md} (67%) diff --git a/designs/0016-hessian_optimize_cmdstan.md b/designs/0016-laplace_approximation_algorithm.md similarity index 67% rename from designs/0016-hessian_optimize_cmdstan.md rename to designs/0016-laplace_approximation_algorithm.md index 2706571..84109f2 100644 --- a/designs/0016-hessian_optimize_cmdstan.md +++ b/designs/0016-laplace_approximation_algorithm.md @@ -12,9 +12,8 @@ form of approximate inference. # Motivation [motivation]: #motivation -I have a Stan model that is too slow to sample. I would like to do something -better than optimization. Laplace approximations are a pretty standard way -of doing this. +Laplace approximations are quick to compute. If a model can be fit with a Laplace +approximation, that will probably be much faster than running full hmc. # Guide-level explanation [guide-level-explanation]: #guide-level-explanation @@ -33,7 +32,7 @@ where `jac(g, u)` is the Jacobian of the function `g` at `u`. In the Laplace approximation, we search for a mode (a maximum) of ```log(p(u))```. Call this `u_mode`. This is not the same optimization that is done in the `optimizing` algorithm. That searches for a mode of `log(p(g(u)))` (or the equation above -without the `log(det(jac(g, u)))` term. This is different. +without the `log(det(jac(g, u)))` term). This is different. 
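A toy example of the distinction drawn above, assuming a single positive-constrained parameter with `c = g(u) = exp(u)` so that the log Jacobian correction is simply `u`; the density and numbers are made up for illustration:

```python
import numpy as np

log_p_c = lambda c: -0.5 * ((c - 1.0) / 0.5) ** 2   # toy log density for c > 0

u = np.linspace(-4.0, 4.0, 400001)                  # unconstrained grid
c = np.exp(u)                                       # constrained values

u_mode_optimizing = u[np.argmax(log_p_c(c))]        # no Jacobian term
u_mode_laplace = u[np.argmax(log_p_c(c) + u)]       # plus log |dc/du| = u

print(np.exp(u_mode_optimizing))   # ~1.00
print(np.exp(u_mode_laplace))      # ~1.21
```

The two searches land on different modes on the constrained scale, which is why the `laplace` optimization is not the same as the `optimizing` one.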
We can form a second order Taylor expansion of `log(p(u))` around `u_mode`:
@@ -44,10 +43,10 @@ log(p(u)) = log(p(u_mode)) +
   O(||u - u_mode||^3)
 ```
 
-where `gradient(log(p), u)` is the gradient of `log(p)` at `u` and
-`hessian(log(p), u)` is the hessian of `log(p)` at `u`. Because the gradient
-is zero at the mode, the linear term drops out. Ignoring the third order
-terms gives us a new distribution `p_approx(u)`:
+where `gradient(log(p), u_mode)` is the gradient of `log(p(u))` at `u_mode` and
+`hessian(log(p), u_mode)` is the hessian of `log(p(u))` at `u_mode`. Because the
+gradient is zero at the mode, the linear term drops out. Ignoring the third
+order terms gives us a new distribution `p_approx(u)`:
 
 ```
 log(p_approx(u)) = K + 0.5 * (u - u_mode)^T * hessian(log(p), u_mode) * (u - u_mode)
@@ -59,9 +58,10 @@ where K is a constant to make this normalize. `u_approx` (`u` sampled from
 u_approx ~ N(u_mode, -(hessian(log(p), u_mode))^{-1})
 ```
 
-Taking draws from `u_approx` gives us draws from our distribution on Stan's
-unconstrained space. Once constrained, these draws can be used in the same
-way that regular draws from the `sampling` algorithm are used.
+Taking draws from `u_approx` gives us draws from the distribution on the
+unconstrained space. Once constrained (via the transform `g(u)`), these draws
+can be used in the same way that regular draws from the `sampling` algorithm
+are used.
 
 The `laplace` algorithm would take in the same arguments as the `optimize`
 algorithm plus two additional ones:
@@ -79,7 +79,7 @@ A model can be called by:
 ./model laplace num_samples=100 data file=data.dat
 ```
 
-or with the diagonal:
+or with a small value added to the diagonal:
 ```
 ./model laplace num_samples=100 add_diag=1e-10 data file=data.dat
 ```
@@ -92,7 +92,10 @@ The three algorithm specific parameters are:
 1. ```log_p__``` - The log density of the model itself
 2. ```log_g__``` - The log density of the Laplace approximation
 3. ```rejected__``` - A boolean indicating whether it was possible to
-   evaluate the log density of the model at this parameter value
+evaluate the log density of the model at this parameter value
+
+Reporting both the model and approximate log density allows for importance
+sampling diagnostics to be applied to the model.
 
 For instance, the new output might look like:
 
@@ -106,15 +109,24 @@ log_p__, log_g__, rejected__, b.1,b.2
 -3, -4, 1, 0, 0
 ```
 
+There are two additional diagnostics for the `laplace` approximation. The
+`log_p__` and `log_g__` outputs can be used to compute the Pareto-k
+diagnostic from "Pareto Smoothed Importance Sampling" (Vehtari, 2015).
+
+There could also be a diagnostic printout for how many times it is not
+possible to evaluate the log density of the model at a certain point from the
+approximate posterior (this would indicate the approximation is likely
+questionable).
+
 # Reference-level explanation
 [reference-level-explanation]: #reference-level-explanation
 
 The implementation of this would borrow heavily from the optimization code. The
 difference would be that the Jacobian would be turned on for the optimization.
 
-We will also need to implement a way of computing Hessians with finite
-differences of gradients. Simple finite differences were not sufficiently
-accurate for an example I was working on.
+The Hessians should be computed using finite differences of reverse mode autodiff.
+Higher order autodiff isn't possible for general Stan models, but finite differences +of reverse mode autodiff should be more stable than second order finite differences. # Drawbacks [drawbacks]: #drawbacks @@ -137,24 +149,9 @@ lengths of their outputs (they may compute incorrect standard errors, etc). print zeros where usually the parameters would be. If the user does not check the `rejected__` column, then they will be using a -bunch of zeros in their follow-up calculations that could mislead them. - -Similarly to divergent transitions, we can print to the output information about -how many draws were rejected in any case. - -In the earlier part of the design document I assume #3 was implemented, but I -think #2 might be better. - -# Rationale and alternatives -[rationale-and-alternatives]: #rationale-and-alternatives - -An alternative that seems appealing at first is printing out the mode and -hessian so that it would be possible for people to make their own posterior -draws. This is not reasonable because the conversion from unconstrained to -constrained space is not exposed in all of the interfaces. +bunch of zeros in their follow-up calculations that could be misleading. # Prior art [prior-art]: #prior-art -This is a pretty common Bayesian approximation. It just is not implemented in -Stan. +This is a common Bayesian approximation. It just is not implemented in Stan.
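For illustration, here is roughly how the `log_p__` and `log_g__` columns could feed the importance sampling diagnostics mentioned above. The file name and exact CSV layout are assumptions; the Pareto-k diagnostic itself would come from existing PSIS tooling such as the `loo` R package or ArviZ rather than this snippet.

```python
import numpy as np
import pandas as pd

draws = pd.read_csv("laplace_output.csv", comment="#", skipinitialspace=True)
draws = draws[draws["rejected__"] == 0]

log_w = draws["log_p__"] - draws["log_g__"]      # log importance ratios
w = np.exp(log_w - log_w.max())                  # stabilized, unnormalized weights
ess = w.sum() ** 2 / (w ** 2).sum()              # effective sample size of the weights
print(f"importance sampling ESS: {ess:.1f} of {len(w)} draws")

# self-normalized importance sampling estimate of a posterior mean
print((w * draws["b.1"]).sum() / w.sum())
```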