How does one do these sorts of benchmarks? It might be something that Arnav can play around with. I know about BenchmarkTools, but I don't know the canonical way to ensure that no major performance regressions occur.
I have never done this either. I think the way to do it is to have a script that runs a bunch of benchmarks using BenchmarkTools.jl and then compares them to previously stored results. As stated here, the steps are:
1. Start a Julia session
2. Execute a benchmark suite using an old version of your package
3. Save the results somehow (e.g. in a JSON file)
4. Start a new Julia session
5. Execute a benchmark suite using a new version of your package
6. Compare the new results with the results saved in step 3 to determine regression status
Note that the benchmarks need to run locally, not on CI, since CI cannot guarantee that results are comparable across runs.
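The steps above could be sketched roughly like this with BenchmarkTools.jl's `save`/`load`/`judge` machinery. This is just a sketch, assuming the package keeps a `SUITE::BenchmarkGroup` in a hypothetical `benchmark/benchmarks.jl` file; the file name and suite name are conventions, not anything this package defines yet:

```julia
using BenchmarkTools

# Assumed to define a `SUITE::BenchmarkGroup` (hypothetical file/name).
include("benchmark/benchmarks.jl")

tune!(SUITE)
results = run(SUITE; verbose = true)

# --- Session 1 (old package version): save the results to JSON. ---
BenchmarkTools.save("old_results.json", results)

# --- Session 2 (new package version): load the baseline and compare. ---
# `load` returns a vector of the saved objects, hence the `[1]`.
old = BenchmarkTools.load("old_results.json")[1]

# Compare minimum-time estimates; `judge` flags each benchmark as
# :regression, :improvement, or :invariant.
comparison = judge(minimum(results), minimum(old))

for (name, j) in leaves(comparison)
    if isregression(j)
        println("Performance regression in ", name)
    end
end
```

Comparing `minimum` estimates (rather than means) is the usual choice, since the minimum is the estimate least affected by noise from other processes.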
It would be good to have this, to check that, e.g.,
@pack
stays performant.