
Controlling randomness of results #26

Open
sglln opened this issue Feb 28, 2019 · 1 comment
Comments

@sglln

sglln commented Feb 28, 2019

Using the same dataset, multiple runs of CausalImpact (which fits its model with bsts) produce different p-values and confidence intervals. The results fall within a small range of each other when niter is large, but I want identical values on every run. How can the randomness be controlled so that the results are consistent across runs?

Thanks

@dklinenberg2020

Have you tried setting R's random seed before running? Your code would look something like this:

set.seed(1234)
CausalImpact::CausalImpact(...)

That's worked for me!
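To illustrate, here is a minimal sketch of checking reproducibility on simulated data. The simulated series and the seed value are arbitrary; the assumption is that calling set.seed immediately before each CausalImpact call makes the MCMC draws, and hence the summary, identical:

```r
library(CausalImpact)

# Simulate a simple response/covariate pair
set.seed(1)
x <- cumsum(rnorm(100))
y <- 1.2 * x + rnorm(100)
data <- cbind(y, x)
pre.period <- c(1, 70)
post.period <- c(71, 100)

# Re-seed before each run so the bsts sampler draws the same chain
set.seed(1234)
impact1 <- CausalImpact(data, pre.period, post.period)
set.seed(1234)
impact2 <- CausalImpact(data, pre.period, post.period)

# With the same seed, the summaries should match exactly
identical(impact1$summary, impact2$summary)
```

Note that the seed must be set right before each call; any intermediate code that consumes random numbers would change the state and break the match.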
