
Commit

update
schurhammer committed Nov 1, 2023
1 parent 1df368b commit a696be3
Showing 3 changed files with 33 additions and 8 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -8,7 +8,7 @@ A library for benchmarking gleam code.
# How To

```rust
- import gleamy_bench.{Bench, Function, IPS, Input, Min, P, run, table}
+ import gleamy_bench.{Bench, BenchTime, Function, IPS, Input, Min, P, run, table}

// ..

@@ -23,7 +23,7 @@ Bench(
Function("fib2", fib2),
],
)
- |> run()
+ |> run([BenchTime(500)])
|> table([IPS, Min, P(99)])
|> io.println()
```

@@ -33,7 +33,7 @@ A benchmark is defined by giving a list of inputs and a list of functions to run

The inputs should all be the same type, and the functions should all accept that type as their only argument. The return type of the functions does not matter, as long as they all return the same type.

- The `run` function actually runs the benchmark and collects the results.
+ The `run` function actually runs the benchmark and collects the results. It accepts a list of options to change the default behaviour; for example, `BenchTime(100)` sets how long (in milliseconds) each function is run repeatedly while collecting results.

The `table` function makes a table out of the results. You can choose the list of statistics you would like to include in the table.
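The `apply_options` function added in this commit folds each user-supplied option over a record of defaults, so a later option overrides an earlier one and anything unspecified keeps its default (50 ms warmup, 500 ms bench time). A minimal Python sketch of that merge-over-defaults pattern (hypothetical names, for illustration only; the library itself is Gleam):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Options:
    warmup_time: int = 50   # ms, matching default_options() in the diff
    bench_time: int = 500   # ms

def apply_options(default: Options, options: list) -> Options:
    # Fold each (field, ms) pair over the defaults; the last setting wins.
    for field, ms in options:
        default = replace(default, **{field: ms})
    return default

opts = apply_options(Options(), [("bench_time", 100)])
print(opts.warmup_time, opts.bench_time)  # → 50 100
```

As in the Gleam version, passing the same option twice keeps the last value, and omitted options fall back to the defaults.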

31 changes: 28 additions & 3 deletions src/gleamy_bench.gleam
@@ -111,14 +111,39 @@ fn repeat_until(duration: Float, value: a, fun: fn(a) -> b) {
  do_repeat_until([], now() +. duration, value, fun)
}

- pub fn run(bench: Bench(a, b)) -> List(Set) {
+ pub type Option {
+   WarmupTime(ms: Int)
+   BenchTime(ms: Int)
+ }
+
+ type Options {
+   Options(warmup_time: Int, bench_time: Int)
+ }
+
+ fn default_options() -> Options {
+   Options(warmup_time: 50, bench_time: 500)
+ }
+
+ fn apply_options(default: Options, options: List(Option)) -> Options {
+   case options {
+     [] -> default
+     [x, ..xs] ->
+       case x {
+         WarmupTime(ms) -> apply_options(Options(..default, warmup_time: ms), xs)
+         BenchTime(ms) -> apply_options(Options(..default, bench_time: ms), xs)
+       }
+   }
+ }
+
+ pub fn run(bench: Bench(a, b), options: List(Option)) -> List(Set) {
+   let options = apply_options(default_options(), options)
    use Input(input_label, input) <- list.flat_map(bench.inputs)
    use function <- list.map(bench.functions)
    case function {
      Function(fun_label, fun) -> {
        io.println("benching set " <> input_label <> " " <> fun_label)
-       let _warmup = repeat_until(10.0, input, fun)
-       let timings = repeat_until(500.0, input, fun)
+       let _warmup = repeat_until(int.to_float(options.warmup_time), input, fun)
+       let timings = repeat_until(int.to_float(options.bench_time), input, fun)
        Set(input_label, fun_label, timings)
      }
    }
4 changes: 2 additions & 2 deletions src/gleamy_bench_example.gleam
@@ -1,4 +1,4 @@
- import gleamy_bench.{Bench, Function, IPS, Input, Min, P, run, table}
+ import gleamy_bench.{Bench, BenchTime, Function, IPS, Input, Min, P, run, table}
import gleam/io

fn fib1(n: Int) -> Int {
@@ -25,7 +25,7 @@ pub fn main() {
[Input("n=5", 5), Input("n=10", 10), Input("n=15", 15)],
[Function("fib1", fib1), Function("fib2", fib2)],
)
- |> run()
+ |> run([BenchTime(100)])
|> table([IPS, Min, P(99)])
|> io.println()
}
