
Automatic benchmarks #53

Open
mauro3 opened this issue May 16, 2018 · 2 comments


mauro3 commented May 16, 2018

It would be good to have automatic benchmarks to check that, e.g., @pack stays performant.


jlperla commented Jul 11, 2018

How does one do these sorts of benchmarks? It might be something that Arnav can play around with. I know about BenchmarkTools, but I don't know the canonical way to ensure that no major performance regressions occur.


mauro3 commented Jul 12, 2018

I have never done this either. I think the way to do it is to have a script which runs a bunch of benchmarks using BenchmarkTools.jl and then compares them against previously stored benchmark results. As stated here, the steps are:

1. Start a Julia session
2. Execute a benchmark suite using an old version of your package
3. Save the results somehow (e.g. in a JSON file)
4. Start a new Julia session
5. Execute a benchmark suite using a new version of your package
6. Compare the new results with the results saved in step 3 to determine regression status

Note that the benchmarks need to run locally, not on CI, as CI cannot guarantee that results are comparable between runs.
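The steps above could be sketched roughly like this. This is just a sketch, not a tested workflow: the benchmark itself (`sum` over a random vector) is a placeholder standing in for real package benchmarks like `@pack`, and the file name `old_results.json` is arbitrary.

```julia
using BenchmarkTools

# Define the benchmark suite. The same suite definition is used in both
# sessions; only the package version being exercised changes.
suite = BenchmarkGroup()
suite["sum"] = @benchmarkable sum($(rand(1000)))  # placeholder benchmark

tune!(suite)

# --- Session 1, with the OLD package version (steps 1-3) ---
# Run the suite and save the results to a JSON file.
old_results = run(suite)
BenchmarkTools.save("old_results.json", old_results)

# --- Session 2, with the NEW package version (steps 4-6) ---
# Run the suite again, load the saved results, and compare.
new_results = run(suite)
old_results = BenchmarkTools.load("old_results.json")[1]

# `judge` classifies each benchmark's change as :regression,
# :improvement, or :invariant relative to a tolerance.
comparison = judge(minimum(new_results), minimum(old_results))
```

Entries flagged `:regression` in `comparison` would indicate a slowdown beyond BenchmarkTools' default tolerance; `leaves(comparison)` can be used to iterate over them programmatically.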
