Document benchmark process #66
Comments
@jlperla I took a stab at this. It could be more tedious than you'd like (i.e., boring/wordy for some students) but I wanted to err on the side of helping noobs. If it's too much, just let me know what to cut. Points re the above:
Side note: I think one issue is making sure that a given JSON/results file corresponds to a given state of the package. Otherwise, you can't answer "what am I comparing?" I wonder how these guys solve it. This could be a good way to get people to start versioning their personal projects.
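One hedged way to tie a results file to a given state of the package is to stamp the file name with the current commit. This is just an illustrative sketch (the naming scheme is made up, and it assumes the code runs from inside the package's git repository):

```julia
# Stamp the benchmark results file with the current commit, so a saved
# JSON file can be matched to the exact state of the package it measured.
# Falls back to "unknown" when git or a repository is unavailable.
commit = try
    readchomp(`git rev-parse --short HEAD`)
catch
    "unknown"
end
filename = "benchmarks-$commit.json"
```

A committed file named this way makes "what am I comparing?" answerable: the reference results are pinned to a specific commit of the package.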
@jlperla Removed things from tutorial as we discussed (initial "quick and dirty" section, and subsetting). And added a small worked example in the
After getting the
Done. |
I think this is completed? Although for what it is worth, I think that https://github.com/econtoolkit/Expectations.jl/blob/master/test/runbenchmarks.jl#L10 is supposed to have the
I wonder if it is just easier to tell people to create mini functions most of the time for these sorts of benchmarks...
@jlperla You know, I was debating splicing in the expectations. I figured it was okay for the same reason that functions like
I think it needs to be spliced because it isn't actually a function... it just looks like one. |
You're right. Callable objects... I'll update the docs to add that in. |
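To sketch the point about callable objects (the `NodeExpectation` type below is a made-up stand-in for the callable expectation objects in Expectations.jl, not the package's actual type):

```julia
using BenchmarkTools

# A callable struct: `e(f)` looks like a function call, but `e` is a
# value bound to a non-constant global, not a function name, so
# BenchmarkTools needs it spliced in with `$`.
struct NodeExpectation
    nodes::Vector{Float64}
    weights::Vector{Float64}
end
(e::NodeExpectation)(f) = sum(w * f(x) for (x, w) in zip(e.nodes, e.weights))

e = NodeExpectation(randn(50), fill(1 / 50, 50))

@benchmark e(abs)    # also times the non-constant global lookup
@benchmark $e(abs)   # splices the value in; benchmarks just the call
```

Wrapping the call in a small function (`bench() = e(abs)` with `e` a `const`) is the other workaround mentioned above.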
I put a placeholder in https://github.com/econtoolkit/tutorials/blob/master/julia/benchmark_regressions.md which shows a basic approach, but I think it is just a starting point.

In particular, https://github.com/econtoolkit/tutorials/blob/master/julia/benchmark_regressions.md#using-and-serializing-benchmark-groups is a placeholder to put the "real" way to do it.

- `BenchmarkGroup`s can be done in a very simple way (see https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#the-benchmarkgroup-type) to keep track of these things, run, and compare. A `Trial` or `TrialEstimate` of the median, or something like that, is all we really want.
- `overwritebenchmarks`, which takes the latest run of the benchmark group and just saves over top of the current file?
- Where to do it: `test/runbenchmarks.jl`? You definitely do not want this to run inside of the CI, so `runtests.jl` should not call it.

My hope is that we can keep this as simple as possible so that we can teach everyone to do it in their own projects.
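A minimal sketch of that workflow, using the documented BenchmarkTools.jl API (`BenchmarkGroup`, `@benchmarkable`, `tune!`, `run`, `save`/`load`, `judge`). The group names and file name are hypothetical, and this is a starting point rather than the full `overwritebenchmarks` helper discussed above:

```julia
using BenchmarkTools

# A suite keyed by group and benchmark name.
suite = BenchmarkGroup()
suite["linalg"] = BenchmarkGroup()
suite["linalg"]["sum"] = @benchmarkable sum(v) setup = (v = rand(1000))

tune!(suite)                          # pick evaluation counts per benchmark
results = run(suite; verbose = false)

# Keep only a TrialEstimate (the median) and save it, e.g. next to the tests.
BenchmarkTools.save("benchmarks.json", median(results))

# Later, e.g. from test/runbenchmarks.jl (not called by runtests.jl):
# load the saved estimates and judge a fresh run against them.
reference = BenchmarkTools.load("benchmarks.json")[1]
comparison = judge(median(results), reference)
```

`judge` returns `:regression`, `:improvement`, or `:invariant` per benchmark, which is all the comparison machinery this process needs.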