Indent trailing lines of benchmark display by one space #101

Merged · 3 commits · May 6, 2024
8 changes: 4 additions & 4 deletions docs/src/tutorial.md
@@ -61,10 -61,10 @@ output)
 ```jldoctest
 julia> @be rand(100)
 Benchmark: 19442 samples with 25 evaluations
-min    95.000 ns (2 allocs: 928 bytes)
-median 103.320 ns (2 allocs: 928 bytes)
-mean   140.096 ns (2 allocs: 928 bytes, 0.36% gc time)
-max    19.748 μs (2 allocs: 928 bytes, 96.95% gc time)
+ min    95.000 ns (2 allocs: 928 bytes)
+ median 103.320 ns (2 allocs: 928 bytes)
+ mean   140.096 ns (2 allocs: 928 bytes, 0.36% gc time)
+ max    19.748 μs (2 allocs: 928 bytes, 96.95% gc time)
 ```
 
 This invocation runs the same experiment as `@b`, but reports more results. It ran 19442
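The contrast the tutorial draws here can be reproduced directly. A minimal sketch, assuming Chairmarks is installed; `@b` and `@be` are the package's exported macros:

```julia
# Sketch: the terse vs. verbose displays contrasted in the tutorial.
using Chairmarks

@b rand(100)   # terse form: prints a single summary sample on one line
@be rand(100)  # verbose form: the multi-line display whose trailing
               # lines this PR indents by one space
```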
58 changes: 29 additions & 29 deletions src/public.jl
@@ -149,56 +149,56 @@ So `init` will be called once, `setup` and `teardown` will be called once per sa
 ```jldoctest; filter = [r"\\d\\d?\\d?\\.\\d{3} [μmn]?s( \\(.*\\))?"=>s"RES", r"\\d+ (sample|evaluation)s?"=>s"### \\1"], setup=(using Random)
 julia> @be rand(10000) # Benchmark a function
 Benchmark: 267 samples with 2 evaluations
-min    8.500 μs (2 allocs: 78.172 KiB)
-median 10.354 μs (2 allocs: 78.172 KiB)
-mean   159.639 μs (2 allocs: 78.172 KiB, 0.37% gc time)
-max    39.579 ms (2 allocs: 78.172 KiB, 99.93% gc time)
+ min    8.500 μs (2 allocs: 78.172 KiB)
+ median 10.354 μs (2 allocs: 78.172 KiB)
+ mean   159.639 μs (2 allocs: 78.172 KiB, 0.37% gc time)
+ max    39.579 ms (2 allocs: 78.172 KiB, 99.93% gc time)
 
 julia> @be rand hash # How long does it take to hash a random Float64?
 Benchmark: 4967 samples with 10805 evaluations
-min    1.758 ns
-median 1.774 ns
-mean   1.820 ns
-max    5.279 ns
+ min    1.758 ns
+ median 1.774 ns
+ mean   1.820 ns
+ max    5.279 ns
 
 julia> @be rand(1000) sort issorted(_) || error() # Simultaneously benchmark and test
 Benchmark: 2689 samples with 2 evaluations
-min    9.771 μs (3 allocs: 18.062 KiB)
-median 11.562 μs (3 allocs: 18.062 KiB)
-mean   14.933 μs (3 allocs: 18.097 KiB, 0.04% gc time)
-max    4.916 ms (3 allocs: 20.062 KiB, 99.52% gc time)
+ min    9.771 μs (3 allocs: 18.062 KiB)
+ median 11.562 μs (3 allocs: 18.062 KiB)
+ mean   14.933 μs (3 allocs: 18.097 KiB, 0.04% gc time)
+ max    4.916 ms (3 allocs: 20.062 KiB, 99.52% gc time)
 
 julia> @be rand(1000) sort! issorted(_) || error() # BAD! This repeatedly resorts the same array!
 Benchmark: 2850 samples with 13 evaluations
-min    1.647 μs (0.15 allocs: 797.538 bytes)
-median 1.971 μs (0.15 allocs: 797.538 bytes)
-mean   2.212 μs (0.15 allocs: 800.745 bytes, 0.03% gc time)
-max    262.163 μs (0.15 allocs: 955.077 bytes, 98.95% gc time)
+ min    1.647 μs (0.15 allocs: 797.538 bytes)
+ median 1.971 μs (0.15 allocs: 797.538 bytes)
+ mean   2.212 μs (0.15 allocs: 800.745 bytes, 0.03% gc time)
+ max    262.163 μs (0.15 allocs: 955.077 bytes, 98.95% gc time)
 
 julia> @be rand(1000) sort! issorted(_) || error() evals=1 # Specify evals=1 to ensure the function is only run once between setup and teardown
 Benchmark: 6015 samples with 1 evaluation
-min    9.666 μs (2 allocs: 10.125 KiB)
-median 10.916 μs (2 allocs: 10.125 KiB)
-mean   12.330 μs (2 allocs: 10.159 KiB, 0.02% gc time)
-max    6.883 ms (2 allocs: 12.125 KiB, 99.56% gc time)
+ min    9.666 μs (2 allocs: 10.125 KiB)
+ median 10.916 μs (2 allocs: 10.125 KiB)
+ mean   12.330 μs (2 allocs: 10.159 KiB, 0.02% gc time)
+ max    6.883 ms (2 allocs: 12.125 KiB, 99.56% gc time)
 
 julia> @be rand(10) _ sort!∘rand! issorted(_) || error() # Or, include randomization in the benchmarked function and only allocate once
 Benchmark: 3093 samples with 237 evaluations
-min    121.308 ns
-median 126.055 ns
-mean   128.108 ns
-max    303.447 ns
+ min    121.308 ns
+ median 126.055 ns
+ mean   128.108 ns
+ max    303.447 ns
 
 julia> @be (x = 0; for _ in 1:50; x = hash(x); end; x) # We can use arbitrary expressions in any position in the pipeline, not just simple functions.
 Benchmark: 3387 samples with 144 evaluations
-min    183.160 ns
-median 184.611 ns
-mean   188.869 ns
-max    541.667 ns
+ min    183.160 ns
+ median 184.611 ns
+ mean   188.869 ns
+ max    541.667 ns
 
 julia> @be (x = 0; for _ in 1:5e8; x = hash(x); end; x) # This runs for a long time, so it is only run once (with no warmup)
 Benchmark: 1 sample with 1 evaluation
- 2.488 s (without a warmup)
+  2.488 s (without a warmup)
 ```
 """
 macro be(args...)
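The `sort!` examples above turn on the evaluation count: with `evals > 1`, the array produced by the setup is reused for every evaluation within a sample, so every call after the first times re-sorting already-sorted data. A sketch of the two invocations side by side, assuming Chairmarks is installed (output elided):

```julia
using Chairmarks

# Misleading: the setup result is shared across evaluations within a
# sample, so later sort! calls see an already-sorted array.
@be rand(1000) sort!

# evals=1 re-runs setup before every timed call, at the cost of per-call
# measurement overhead mattering more for very fast functions.
@be rand(1000) sort! evals=1
```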
10 changes: 5 additions & 5 deletions src/show.jl
@@ -120,21 +120,21 @@ function Base.show(io::IO, m::MIME"text/plain", b::Benchmark)
     if samples ≤ 4
         sd = sort(b.samples, by = s->s.time)
         for (i, s) in enumerate(sd)
-            print(io, " ")
+            print(io, "  ")
             show(io, m, s)
             i == length(sd) || println(io)
         end
     else
-        print(io, "min    ")
+        print(io, " min    ")
         show(io, m, minimum(b))
         println(io)
-        print(io, "median ")
+        print(io, " median ")
         show(io, m, median(b))
         println(io)
-        print(io, "mean   ")
+        print(io, " mean   ")
         show(io, m, mean(b))
         println(io)
-        print(io, "max    ")
+        print(io, " max    ")
         show(io, m, maximum(b))
     end
 end
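To make the effect of this hunk concrete, here is a self-contained mimic of the post-change layout, using a hypothetical `FakeStats` type rather than Chairmarks' own `Benchmark`: the header stays flush left, every trailing line carries the new one-space indent, and the padded labels keep the time column aligned.

```julia
# Standalone mimic of the indented display (hypothetical type, not Chairmarks' API).
struct FakeStats
    stats::Vector{Pair{String,String}}
end

function Base.show(io::IO, ::MIME"text/plain", f::FakeStats)
    println(io, "Benchmark: 5 samples with 1 evaluation")  # flush-left header
    for (i, (name, val)) in enumerate(f.stats)
        # One-space indent sets trailing lines off from the header;
        # rpad-ded labels keep the time column aligned.
        print(io, " ", rpad(name, 7), val)
        i == length(f.stats) || println(io)
    end
end

show(stdout, MIME"text/plain"(),
     FakeStats(["min" => "101.540 ms", "median" => "101.623 ms",
                "mean" => "101.728 ms", "max" => "102.239 ms"]))
```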
8 changes: 4 additions & 4 deletions src/types.jl
@@ -59,10 +59,10 @@ objects and return `Sample`s.
 ```jldoctest; filter = [r"\\d\\d?\\d?\\.\\d{3} [μmn]?s( \\(.*\\))?"=>s"RES", r"\\d+ (sample|evaluation)s?"=>s"### \\1"]
 julia> @be eval(:(for _ in 1:10; sqrt(rand()); end))
 Benchmark: 15 samples with 1 evaluation
-min    4.307 ms (3608 allocs: 173.453 KiB, 92.21% compile time)
-median 4.778 ms (3608 allocs: 173.453 KiB, 94.65% compile time)
-mean   6.494 ms (3608 allocs: 173.453 KiB, 94.15% compile time)
-max    12.021 ms (3608 allocs: 173.453 KiB, 95.03% compile time)
+ min    4.307 ms (3608 allocs: 173.453 KiB, 92.21% compile time)
+ median 4.778 ms (3608 allocs: 173.453 KiB, 94.65% compile time)
+ mean   6.494 ms (3608 allocs: 173.453 KiB, 94.15% compile time)
+ max    12.021 ms (3608 allocs: 173.453 KiB, 95.03% compile time)
 
 julia> minimum(ans)
 4.307 ms (3608 allocs: 173.453 KiB, 92.21% compile time)
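As the docstring's `minimum(ans)` call shows, the aggregates are first-class values, not just printed rows. A small usage sketch, assuming Chairmarks is installed (`time` is the same `Sample` field the show method above sorts on):

```julia
using Chairmarks

b = @be sum(rand(100))
s = minimum(b)  # a Sample, as in the `minimum(ans)` doctest above
s.time          # elapsed time of the fastest sample, as a plain Float64
```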
18 changes: 9 additions & 9 deletions test/runtests.jl
@@ -147,19 +147,19 @@ using Chairmarks: Sample, Benchmark
     @test eval(Meta.parse(repr(x))).samples == x.samples
     VERSION >= v"1.6" && @test sprint(show, MIME"text/plain"(), x) == """
     Benchmark: 5 samples with 1 evaluation
-    min    101.540 ms (166 allocs: 16.195 KiB)
-    median 101.623 ms (166 allocs: 16.195 KiB)
-    mean   101.728 ms (166 allocs: 16.195 KiB)
-    max    102.239 ms (166 allocs: 16.195 KiB)"""
+     min    101.540 ms (166 allocs: 16.195 KiB)
+     median 101.623 ms (166 allocs: 16.195 KiB)
+     mean   101.728 ms (166 allocs: 16.195 KiB)
+     max    102.239 ms (166 allocs: 16.195 KiB)"""
 
     x = Benchmark(x.samples[1:3])
 
     @test eval(Meta.parse(repr(x))).samples == x.samples
     VERSION >= v"1.6" && @test sprint(show, MIME"text/plain"(), x) == """
     Benchmark: 3 samples with 1 evaluation
-     101.540 ms (166 allocs: 16.195 KiB)
-     101.591 ms (166 allocs: 16.195 KiB)
-     102.239 ms (166 allocs: 16.195 KiB)"""
+      101.540 ms (166 allocs: 16.195 KiB)
+      101.591 ms (166 allocs: 16.195 KiB)
+      102.239 ms (166 allocs: 16.195 KiB)"""
 
     x = Benchmark(x.samples[1:0])
     @test eval(Meta.parse(repr(x))).samples == x.samples
@@ -213,8 +213,8 @@ using Chairmarks: Sample, Benchmark
     @test eval(Meta.parse(repr(x))).samples == x.samples
     @test sprint(show, MIME"text/plain"(), x) == """
     Benchmark: 2 samples with variable evaluations
-     100.000 ms
-     100.000 ms"""
+      100.000 ms
+      100.000 ms"""
     end
 end
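Beyond pinning the rendered text, these tests assert that a `Benchmark`'s `repr` round-trips. The same invariant is easy to check interactively (sketch, assuming Chairmarks is installed):

```julia
using Chairmarks

b = @be rand(100)
# The invariant asserted throughout this file: parsing and evaluating a
# Benchmark's repr reproduces the original samples.
eval(Meta.parse(repr(b))).samples == b.samples  # true
```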

Expand Down
Loading