
--info include information parameter transforms #1461

Closed
andrewfowlie opened this issue Oct 30, 2024 · 13 comments · May be fixed by #1462
Labels
feature New feature or request

Comments

@andrewfowlie

Could the stanc --info command include information about constraints? At the moment, e.g.,

parameters {
  real<lower=0, upper=1> theta;
}

and

parameters {
  real theta;
}

both result in identical output,

{
  "inputs": {},
  "parameters": { "theta": { "type": "real", "dimensions": 0 } },
  "transformed parameters": {},
  "generated quantities": {},
  "functions": [],
  "distributions": [],
  "included_files": []
}

that doesn't mention the constraints. It would be great if the one with constraints gave something like,

{
  "inputs": {},
  "parameters": { "theta": { "type": "real", "dimensions": 0, "lower": 0, "upper": 1 } },
  "transformed parameters": {},
  "generated quantities": {},
  "functions": [],
  "distributions": [],
  "included_files": []
}

Ideally, this would work everywhere (i.e., in all the blocks), and cover the shift and scale modifiers as well.
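If the proposed schema above were adopted, a consumer could check for constraints programmatically. A minimal sketch, assuming the hypothetical "lower"/"upper" keys from the example output (they are the proposed extension, not current stanc output):

```python
import json

# Hypothetical --info output with the proposed constraint fields.
info = json.loads("""
{
  "parameters": {
    "theta": {"type": "real", "dimensions": 0, "lower": 0, "upper": 1}
  }
}
""")

def is_unit_interval(param: dict) -> bool:
    """Check whether a parameter is declared on [0, 1]."""
    return param.get("lower") == 0 and param.get("upper") == 1

print(is_unit_interval(info["parameters"]["theta"]))  # True
```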

@andrewfowlie andrewfowlie added the feature New feature or request label Oct 30, 2024
@WardBrian
Member

I believe we could include transform information (e.g. this, but also something like "this is a simplex"). Note that the actual bound may not be a simple number, so in general the output may not be that useful. E.g.,

parameters {
  real a;
  real <lower=exp(a) - 3> b;
}

The output would now have to contain something like "lower": "exp(a) - 3", so you can't assume you just get a number there.

@andrewfowlie
Author

That’s very true. I suppose it’s also true of the dimension.

I would find this very useful - I want to be able to infer programmatically (statically, not at runtime) whether we’re operating on a constrained space or not.

@WardBrian changed the title from "--info include information about lower and upper" to "--info include information parameter transforms" on Oct 30, 2024
@bob-carpenter
Contributor

I suppose it’s also true of the dimension.

For dimension, it has to be static data. So we can resolve the sizes as soon as the data is fixed. The bounds can't be resolved until we have parameter draws.

@WardBrian
Member

#904 asked about the dimension information. It was never implemented.

It does seem like it is probably reasonable to know that a variable has a lower bound, even if the exact lower bound is some expression you can't resolve on the outside.

@andrewfowlie
Author

Thanks both. I had confused dimension and size in my previous message. I now see that this is a harder problem than I first thought.

If the static analysis is challenging, maybe instead there could be some methods for introspection of a Stan model at runtime, once a Stan model + data is actually instantiated?

My use case is that I want to implement other inference algorithms with Stan models. It's very helpful to have as much information as possible about whether we are working on a constrained or unconstrained space, and what transforms we need to make. I've been using the BridgeStan interface. Could we one day have methods like bs_get_parameters_lower(model) etc. for an instance of a Stan model that retrieve any lower-bound transforms of parameters in the parameters block?

If this were Python, an implementation would work like this:

# the stan model
.... data blocks ...
parameters {
  real<lower=1> x;
  real<upper=2> y;
  real<lower=2, upper=3> z;
}
... other blocks ...
model = ... some code to read above model ... 
bs_get_parameters_lower(model)  # [1, None, 2]
bs_get_parameters_upper(model)  # [None, 2, 3]

This is C/C++, so some more thought is required about handling the None cases, but the basic idea would be the same. If this is a better way forward than the static analysis, we can move the discussion over to BridgeStan.

@WardBrian
Member

The example you provide is actually still not the worst-case scenario, which is that parameters' bounds can depend on other parameters, so you need to provide not only data but also a set of (unconstrained) parameters to get the bounds.

Can you elaborate on what your goal is with this information?

@andrewfowlie
Author

Ah, I didn't realise that. So we can have all sorts like

parameters {
  real<lower=0> x;
  real<lower=x> y;
  real<lower=x, upper=y> z;
}

Yes, it becomes complicated, and even my bs_get_parameters_lower(model) idea on a model instance won't work, because, as you say, it needs the parameters too.
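For reference, Stan resolves dependent bounds like these by applying its constraining transforms one declaration at a time. A rough Python sketch of the transforms for the example above (following the lower-bound and lower/upper-bound transforms described in the Stan reference manual; the function names are illustrative):

```python
import math

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

def constrain(ux, uy, uz):
    """Map unconstrained draws to the constrained space for
    x > 0, y > x, and x < z < y, one declaration at a time."""
    x = math.exp(ux)                 # lower=0:           x = exp(ux)
    y = x + math.exp(uy)             # lower=x:           y = x + exp(uy)
    z = x + (y - x) * inv_logit(uz)  # lower=x, upper=y:  z in (x, y)
    return x, y, z

x, y, z = constrain(0.0, 0.0, 0.0)
# x = 1.0, y = 2.0, z = 1.5
```

The key point is that the bound for each later parameter is only known once the earlier parameters have been constrained, which is why no static (or even data-only) answer exists in general.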

Sure, let me be more explicit. I am working on implementing nested sampling algorithms on Stan models

  • These algorithms require proper priors for all parameters (unlike MCMC)
  • The implementations usually operate on a constrained space, a U[0, 1]^d hypercube on which the prior is flat (unlike Stan)
  • They transform that into physical parameters (see e.g., https://dynesty.readthedocs.io/en/stable/quickstart.html#prior-transforms)
  • I want to write Stan models that give consistent results when called with native Stan samplers and with my new algorithms

To achieve this, I will write the models as follows

parameters {
  vector<lower=0, upper=1>[N] x;   // this is the U[0, 1]^d hypercube
}
transformed parameters {
  vector[N] theta = 10 * x  - 5;   // these are the actual parameters, transformed from [0, 1] to [-5, 5] for example
}

Possibly, for this simple case of a flat prior I could allow this as well

parameters {
  vector<lower=-5, upper=5>[N] theta; 
}

I think this is going to work as I want and produce consistent results with native Stan samplers and nested sampling algorithms. I would like to be able to run a check that models are implemented as I require (with lower=0 and upper=1), though, and refuse to compile (e.g., a static assert) or throw an exception at runtime if they are not.

The only way I have thought of doing this so far is comparing the unconstrained/constrained parameters to the results of logit transforms.

I don't know whether my use case seems obscure to you. My feeling, though, is that this sort of problem could arise when trying to implement algorithms outside of Stan that require some extra information or some extra constraints on a Stan model.
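A minimal sketch of that logit-comparison heuristic, assuming access to a pair of matching unconstrained/constrained parameter vectors (e.g. via BridgeStan's param_constrain; the helper name here is made up):

```python
import math

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

def looks_like_unit_hypercube(unconstrained, constrained, tol=1e-8):
    """Heuristic: every constrained coordinate should equal inv_logit of
    the corresponding unconstrained coordinate, as it would for a
    <lower=0, upper=1> declaration."""
    return all(abs(c - inv_logit(u)) < tol
               for u, c in zip(unconstrained, constrained))

u = [-1.3, 0.0, 2.7]
# A model declared on [0, 1]^d passes:
assert looks_like_unit_hypercube(u, [inv_logit(v) for v in u])
# One declared with lower=-5, upper=5 fails:
assert not looks_like_unit_hypercube(u, [-5 + 10 * inv_logit(v) for v in u])
```

As noted below, a check like this cannot be 100% accurate, but it is cheap and catches the common cases.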

@WardBrian
Member

The only way I have thought of doing this so far is comparing the unconstrained/constrained parameters to the results of logit transforms.

This seems reasonable. It may not be 100% accurate, but it would probably catch anyone who isn't actively trying to fool you.

@bob-carpenter
Contributor

parameters {
  vector<lower=0, upper=1>[N] x;   // this is the U[0, 1]^d hypercube
}
transformed parameters {
  vector[N] theta = 10 * x  - 5;   // these are the actual parameters, transformed from [0, 1] to [-5, 5] for example
}

Are there then going to be things like priors on theta? If so, and the transforms can be non-linear or involve multiplication by another parameter rather than a constant, then you need to be careful to include the change-of-variables adjustment for the transform.

Stan will then take all of the x and transform them to the unconstrained scale using logit (log odds). In practice, we work on the unconstrained scale, then apply the inverse transform (inv_logit here) along with the change-of-variables correction for the non-linear transform. We then inverse transform again to present MCMC results on the constrained scale.

I would like to be able to run a check that models are implemented as I require (with lower=0 and upper=1), though, and refuse to compile (e.g., a static assert) or throw an exception at runtime if they are not.

I don't know anything about nested sampling, including why you are trying to do this. But if all you need is a distribution on $[0, 1]^D$, you can transform Stan's models, which have support over $\mathbb{R}^D$, using the inverse logit; the change of variables is pretty simple, because $\textrm{invLogit}'(x) = \textrm{invLogit}(x) \cdot (1 - \textrm{invLogit}(x))$.
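The derivative identity above gives the log-Jacobian term directly. A small sketch, checked against a finite difference:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_jacobian(x):
    """log |d inv_logit(x) / dx| = log(p) + log(1 - p), p = inv_logit(x)."""
    p = inv_logit(x)
    return math.log(p) + math.log(1.0 - p)

# Verify against a central finite difference of inv_logit:
x, h = 0.7, 1e-6
numeric = (inv_logit(x + h) - inv_logit(x - h)) / (2 * h)
assert abs(math.log(numeric) - log_jacobian(x)) < 1e-6
```

For a $D$-dimensional diagonal transform, the total adjustment is just the sum of this term over the coordinates.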

@WardBrian
Member

I think for your purposes, the implementation in #1462 might work anyway, since you're requiring such a restricted class of models that you can just reject anything more complicated than "0" or "1".

@andrewfowlie
Author

TLDR

  • #1462 (Add transformation information to --info) will implement some relevant functionality that might address my use case and be useful to others
  • It would also be possible to check the constraints by comparing constrained and unconstrained parameters to the results of logit transforms

Hi both,

Thanks so much for the feedback and ideas. Bob, in these algorithms priors are implemented by transforming from the U[0, 1]^d hypercube. I.e., we have a parameterization on which the priors are flat on U[0, 1], and we use inverse transform sampling to convert that to whatever we want.
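That inverse-transform-sampling step can be sketched in a few lines using the standard library's NormalDist (here mapping the unit hypercube to independent normal priors; the function name is illustrative, in the style of dynesty's prior transforms):

```python
from statistics import NormalDist

def prior_transform(u, mu=0.0, sigma=1.0):
    """Map a point in the unit hypercube to a draw from independent
    Normal(mu, sigma) priors via the inverse CDF."""
    return [NormalDist(mu, sigma).inv_cdf(ui) for ui in u]

theta = prior_transform([0.5, 0.975])
# theta[0] is the median (0.0); theta[1] is the 97.5th percentile (~1.96)
```

Any prior with a computable inverse CDF can be plugged in the same way.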

@andrewfowlie
Author

PS: I've invited you to take a look at the code repo for using one of these algorithms with Stan; it will be public soon anyway.

@bob-carpenter
Contributor

Thanks. Once you go to inverse transform sampling over (0, 1)^D, you can use quasi-Monte Carlo.

Stan could be easily transformed to sample on (0, 1)^D, but it won't be uniform.
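Quasi-Monte Carlo here just means feeding a low-discrepancy sequence instead of pseudo-random uniforms into the prior transform. A minimal Halton-sequence sketch (one standard low-discrepancy construction; pair coprime bases for multiple dimensions):

```python
def halton(i, base=2):
    """i-th element (i >= 1) of the base-`base` Halton sequence on (0, 1),
    built by reflecting the base-`base` digits of i about the radix point."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# 2-D low-discrepancy points using coprime bases 2 and 3:
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
# first coordinates: 0.5, 0.25, 0.75, 0.125, ...
```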
