`--info`: include information about parameter transforms #1461
I believe we could include transform information (e.g. this, but also something like "this is a simplex"). Note that the actual bound may not be a simple number, so in general the output may not be that useful. E.g., the output would now have to contain something like …
That’s very true. I suppose it’s also true of the dimension. I would find this very useful: I want to be able to infer programmatically (statically, not at runtime) whether we’re operating on a constrained space or not.
For dimension, it has to be static data, so we can resolve the sizes as soon as the data is fixed. The bounds can't be resolved until we have parameter draws.
#904 asked about the dimension information. It was never implemented. It does seem like it is probably reasonable to know that a variable has a lower bound, even if the exact lower bound is some expression you can't resolve on the outside.
Thanks both. I had confused dimension and size in my previous message. I now see that this is a harder problem than I first thought. If the static analysis is challenging, maybe instead there could be some methods for introspection of a Stan model at runtime, once a Stan model + data is actually instantiated? My use case is that I want to implement other inference algorithms with Stan models. It's very helpful to have as much information as possible about whether we are working on a constrained or unconstrained space, and what transforms we need to make. I've been using the … If this were Python, an implementation would work like:

```stan
// ... data blocks ...
parameters {
  real<lower=1> x;
  real<upper=2> y;
  real<lower=2, upper=3> z;
}
// ... other blocks ...
```

```python
model = ...  # some code to read the above model
bs_get_parameters_lower(model)  # [1, None, 2]
bs_get_parameters_upper(model)  # [None, 2, 3]
```

This is C/C++, so some more thought is required about handling the …
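To make the accessor idea concrete, here is a minimal Python sketch. Everything in it is hypothetical: the `bs_get_parameters_*` names are from the comment above, and the per-parameter metadata records are an assumed input format, not anything the compiler currently emits.

```python
# Hypothetical sketch: assumes something (the compiler or a wrapper) can hand
# us per-parameter constraint metadata as plain records. A missing key means
# "no bound on that side", reported as None.
from typing import Optional

def bs_get_parameters_lower(params: list) -> list:
    """Return each parameter's lower bound, or None if unbounded below."""
    return [p.get("lower") for p in params]

def bs_get_parameters_upper(params: list) -> list:
    """Return each parameter's upper bound, or None if unbounded above."""
    return [p.get("upper") for p in params]

# Metadata for the model above:
#   real<lower=1> x;  real<upper=2> y;  real<lower=2, upper=3> z;
params = [
    {"name": "x", "lower": 1},
    {"name": "y", "upper": 2},
    {"name": "z", "lower": 2, "upper": 3},
]
print(bs_get_parameters_lower(params))  # [1, None, 2]
print(bs_get_parameters_upper(params))  # [None, 2, 3]
```

Using `None` sidesteps the C/C++ question above only on the Python side; a C API would still need a sentinel (e.g. NaN or infinity) for missing bounds.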
The example you provide is actually still not the worst-case scenario, which is that parameters’ bounds can depend on other parameters, so you need to provide not only data but also a set of (unconstrained) parameters to get the bounds. Can you elaborate on what your goal is with this information?
Ah, I didn't realise that. So we can have all sorts like:

```stan
parameters {
  real<lower=0> x;
  real<lower=x> y;
  real<lower=x, upper=y> z;
}
```

Yes, it becomes complicated, and even my … Sure, let me be more explicit: I am working on implementing nested sampling algorithms on Stan models.
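For reference, Stan's documented constraining transforms for scalar bounds are `x = lb + exp(u)` for a lower bound, `x = ub - exp(u)` for an upper bound, and `x = lb + (ub - lb) * inv_logit(u)` for both. A short Python sketch of the model above shows why parameter-dependent bounds can only be resolved once you have a draw of unconstrained values:

```python
import math

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

# Stan's constraining transforms for scalar bounds (see the Stan Reference
# Manual's constraint-transforms chapter):
#   lower only:      x = lb + exp(u)
#   upper only:      x = ub - exp(u)
#   lower and upper: x = lb + (ub - lb) * inv_logit(u)
def constrain(u, lb=None, ub=None):
    if lb is not None and ub is not None:
        return lb + (ub - lb) * inv_logit(u)
    if lb is not None:
        return lb + math.exp(u)
    if ub is not None:
        return ub - math.exp(u)
    return u

# For the model above, the bounds of y and z depend on earlier parameters,
# so they must be evaluated sequentially from a concrete unconstrained draw.
u_x, u_y, u_z = 0.5, -1.0, 0.3
x = constrain(u_x, lb=0.0)      # lower=0: static
y = constrain(u_y, lb=x)        # lower=x: needs x first
z = constrain(u_z, lb=x, ub=y)  # lower=x, upper=y: needs both
assert 0.0 < x < y and x < z < y  # the draw satisfies all declared bounds
```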
To achieve this, I will write the models as follows:

```stan
parameters {
  vector<lower=0, upper=1>[N] x; // this is the U[0, 1]^d hypercube
}
transformed parameters {
  vector[N] theta = 10 * x - 5; // the actual parameters, transformed from [0, 1] to [-5, 5] for example
}
```

Possibly, for this simple case of a flat prior, I could allow this as well:

```stan
parameters {
  vector<lower=-5, upper=5>[N] theta;
}
```

I think this is going to work as I want and produce results consistent with native Stan samplers and nested sampling algorithms. I would like to be able to run a check that models are implemented as I require (with lower=0 and upper=1) and refuse to compile (e.g. a static assert) or throw an exception at runtime if they are not. The only way I have thought of doing this so far is comparing the unconstrained/constrained parameters to the results of logit transforms. I don't know whether my use case seems obscure to you. My feeling, though, is that this sort of problem could arise whenever someone implements algorithms outside of Stan that require some extra information about, or extra constraints on, a Stan model.
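The logit-comparison check mentioned above could be sketched like this. It is a heuristic illustration, not an actual implementation from this thread; it relies on the fact that Stan's constraining transform for `lower=0, upper=1` is exactly the inverse logit:

```python
import math

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

def check_unit_hypercube(unconstrained, constrained, tol=1e-8):
    """Heuristic check that every parameter uses Stan's (0, 1) transform,
    i.e. constrained == inv_logit(unconstrained) elementwise. Raises if the
    model does not look like lower=0, upper=1 on all parameters."""
    for u, c in zip(unconstrained, constrained):
        if abs(inv_logit(u) - c) > tol:
            raise ValueError(
                "parameter does not use the (0, 1) transform: "
                "model is not declared on the unit hypercube"
            )

# A model on the unit hypercube passes; anything else raises at runtime.
u = [-2.0, 0.0, 1.5]
check_unit_hypercube(u, [inv_logit(v) for v in u])  # passes silently
```

As noted in the reply below, a check like this is not airtight (a transform could coincide with the inverse logit at the tested points), but it catches honest mistakes.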
This seems reasonable. It may not be 100% accurate, but it would probably catch anyone who isn't actively trying to fool you.
```stan
parameters {
  vector<lower=0, upper=1>[N] x; // this is the U[0, 1]^d hypercube
}
transformed parameters {
  vector[N] theta = 10 * x - 5; // the actual parameters, transformed from [0, 1] to [-5, 5] for example
}
```

Are there then going to be things like priors on …? Stan will then take all of the …
I don't know anything about nested sampling, including why you are trying to do this. But if all you need is a distribution on …
I think for your purposes, the implementation in #1462 might work anyway: since you're requiring such a restricted class of models, you can just reject anything that is more complicated than "0" or "1".
Hi both, thanks so much for the feedback & ideas. Bob, in these algorithms priors are implemented by transforming from the U[0, 1]^d hypercube. I.e., we have a parameterization on which the priors are flat on U[0, 1], and we use inverse transform sampling to convert that to whatever we want.
PS: I've invited you to take a look at the code repo for using one of these algorithms with Stan; it will be public soon anyway.
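As a concrete illustration of the prior-via-hypercube idea, here is a stdlib-only Python sketch with hypothetical marginals. Each coordinate of a point in U(0, 1)^d is pushed through its marginal's inverse CDF (quantile function):

```python
from statistics import NormalDist

# Inverse transform sampling: a point u in the U(0, 1)^d hypercube is mapped
# to the target prior through each marginal's inverse CDF.
# Example priors (chosen for illustration only):
#   theta1 ~ Normal(0, 1), theta2 ~ Uniform(-5, 5).
def prior_transform(u):
    theta1 = NormalDist(mu=0.0, sigma=1.0).inv_cdf(u[0])
    theta2 = -5.0 + 10.0 * u[1]  # flat prior on [-5, 5], as in the model above
    return [theta1, theta2]

print(prior_transform([0.5, 0.5]))  # [0.0, 0.0]: both medians
```

The uniform case is exactly the `theta = 10 * x - 5` transformed-parameters block from the earlier comment.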
Thanks. Once you go to inverse transform sampling over (0, 1)^D, you can use quasi-Monte Carlo. Stan could easily be transformed to sample on (0, 1)^D, but it won't be uniform.
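For illustration, here is a stdlib-only sketch of quasi-Monte Carlo points on (0, 1)^D using a Halton sequence (one standard low-discrepancy construction; an inverse-transform prior could consume these in place of pseudo-random draws):

```python
# Quasi-Monte Carlo over (0, 1)^D via a Halton sequence.
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-`base` digits of
    integer i >= 1 about the radix point, giving a value in (0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(n, primes=(2, 3)):
    """First n Halton points in (0, 1)^len(primes); one prime base per dim."""
    return [[radical_inverse(i, b) for b in primes] for i in range(1, n + 1)]

points = halton(4)
print(points)  # first coordinate (base 2): 0.5, 0.25, 0.75, 0.125
```

Successive points fill the hypercube far more evenly than i.i.d. uniform draws, which is the usual motivation for QMC in this setting.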
Could the `stanc --info` command include information about constraints? At the moment, e.g., … and … both result in identical output that doesn't mention the constraints. It would be great if the one with constraints gave something like …. Ideally, this would work everywhere (i.e., in all the blocks) and cover the shift and scale modifiers as well.