Add support for skipping testitems (#117)
* Initial support for skipping testitems

* Add some tests

* Simplify module building

* Test skipped testitems have empty stats

* WIP integration tests for skipping testitems

* more tests

* more tests 2

* docs

* Test JUnit report for skipped test-items

* cleanup

* Fixup block expr test on v1.10

* Update README.md

Co-authored-by: Nathan Daly <[email protected]>

* Update src/macros.jl

Co-authored-by: Nathan Daly <[email protected]>

* Remove unused file

* Fix and test log alignment

* Print SKIP in warning color

* Emphasise difference between `skip` and filtering `runtests`

* fixup! Emphasise difference between `skip` and filtering `runtests`

* Bump version

* fixup! Fix and test log alignment

---------

Co-authored-by: Nathan Daly <[email protected]>
nickrobinson251 and NHDaly authored Dec 18, 2023
1 parent 597f7df commit 3b8788d
Showing 12 changed files with 455 additions and 48 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "ReTestItems"
uuid = "817f1d60-ba6b-4fd5-9520-3cf149f6a823"
version = "1.22.0"
version = "1.23.0"

[deps]
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
52 changes: 47 additions & 5 deletions README.md
@@ -60,7 +60,17 @@ julia> runtests(
)
```

You can use the `name` keyword, to select test-items by name.
For interactive sessions, all logs from the tests will be printed out in the REPL by default.
You can disable this by passing `logs=:issues`, in which case logs from a test-item are only printed if that test-item errors or fails.
`logs=:issues` is also the default for non-interactive sessions.

```julia
julia> runtests("test/Database/"; logs=:issues)
```

#### Filtering tests

You can use the `name` keyword to select test-items by name.
Pass a string to select a test-item by its exact name,
or pass a regular expression (regex) to match multiple test-item names.

@@ -70,12 +80,19 @@ julia> runtests("test/Database/"; name="issue-123")
julia> runtests("test/Database/"; name=r"^issue")
```

For interactive sessions, all logs from the tests will be printed out in the REPL by default.
You can disable this by passing `logs=:issues`, in which case logs from a test-item are only printed if that test-item errors or fails.
`logs=:issues` is also the default for non-interactive sessions.
You can pass `tags` to select test-items by tag.
When passing multiple tags, a test-item is only run if it has all of the requested tags.

```julia
julia> runtests("test/Database/"; logs=:issues)
# Run tests that are tagged as both `regression` and `fast`
julia> runtests("test/Database/"; tags=[:regression, :fast])
```

Filtering by `name` and `tags` can be combined to run only test-items that match both the name and tags.

```julia
# Run tests named `issue*` which also have tag `regression`.
julia> runtests("test/Database/"; tags=:regression, name=r"^issue")
```

## Writing tests
Expand Down Expand Up @@ -130,6 +147,31 @@ end
The `setup` is run once on each worker process that requires it;
it is not run before every `@testitem` that depends on the setup.
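As a minimal sketch (the `TestDBSetup` module and its contents below are illustrative, not part of this commit), sharing a setup across test-items might look like:

```julia
# Hypothetical setup module; its body runs once per worker process that needs it.
@testsetup module TestDBSetup
    const conn = "dummy-connection"  # expensive setup would go here
end

# Both test-items reuse the same `TestDBSetup` on a given worker.
@testitem "query A" setup=[TestDBSetup] begin
    @test TestDBSetup.conn == "dummy-connection"
end

@testitem "query B" setup=[TestDBSetup] begin
    @test !isempty(TestDBSetup.conn)
end
```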

#### Skipping tests

The `skip` keyword can be used to skip a `@testitem`, meaning no code inside that test-item will run.
A skipped test-item logs that it is being skipped and records a single "skipped" test result, similar to `@test_skip`.

```julia
@testitem "skipped" skip=true begin
@test false
end
```

If `skip` is given as an `Expr`, it must return a `Bool` indicating whether or not to skip the test-item.
This expression will be run in a new module, just like a test-item, immediately before the test-item itself would run.

```julia
# Don't run "orc v1" tests if we don't have orc v1
@testitem "orc v1" skip=:(using LLVM; !LLVM.has_orc_v1()) begin
# tests
end
```

The `skip` keyword allows you to define the condition under which a test needs to be skipped,
for example if it can only be run on a certain platform.
See [filtering tests](#filtering-tests) for controlling which tests run in a particular `runtests` call.
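
For instance, a hedged sketch (the test-item name and condition here are illustrative) of skipping tests that only make sense on Linux:

```julia
# Skip these tests on any platform other than Linux.
@testitem "Linux-only behaviour" skip=:(!Sys.islinux()) begin
    # platform-specific tests
end
```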

#### Post-testitem hook

If there is something that should be checked after every single `@testitem`, then it's possible to pass an expression to `runtests` using the `test_end_expr` keyword.
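
As a rough sketch (the invariant checked here and the environment-variable name are assumptions for illustration, not from this commit):

```julia
# Hypothetical invariant to check after every test-item:
# no test should leave this environment variable set.
test_end_expr = quote
    @testset "ENV not polluted" begin
        @test !haskey(ENV, "MY_PKG_TEST_FLAG")  # assumed variable name
    end
end

# The expression is then passed to `runtests` via the `test_end_expr` keyword.
runtests("test/"; test_end_expr=test_end_expr)
```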
37 changes: 37 additions & 0 deletions src/ReTestItems.jl
@@ -861,6 +861,40 @@ end
const GLOBAL_TEST_CONTEXT_FOR_TESTING = TestContext("ReTestItems", 0)
const GLOBAL_TEST_SETUPS_FOR_TESTING = Dict{Symbol, TestSetup}()

# Check the `skip` keyword, and return a `Bool` indicating if we should skip the testitem.
# If `skip` is an expression, run it in a new module just like how we run testitems.
# If the `skip` expression doesn't return a Bool, throw an informative error.
function should_skip(ti::TestItem)
ti.skip isa Bool && return ti.skip
# `skip` is an expression.
# Give same scope as testitem body, e.g. imports should work.
skip_body = deepcopy(ti.skip::Expr)
softscope_all!(skip_body)
# Run in a new module to not pollute `Main`.
# Need to store the result of the `skip` expression so we can check it.
mod_name = gensym(Symbol(:skip_, ti.name))
skip_var = gensym(:skip)
skip_mod_expr = :(module $mod_name; $skip_var = $skip_body; end)
skip_mod = Core.eval(Main, skip_mod_expr)
# Check what the expression evaluated to.
skip = getfield(skip_mod, skip_var)
!isa(skip, Bool) && _throw_not_bool(ti, skip)
return skip::Bool
end
_throw_not_bool(ti, skip) = error("Test item $(repr(ti.name)) `skip` keyword must be a `Bool`, got `skip=$(repr(skip))`")

# Log that we skipped the testitem, and record a "skipped" test result with empty stats.
function skiptestitem(ti::TestItem, ctx::TestContext; verbose_results::Bool=true)
ts = DefaultTestSet(ti.name; verbose=verbose_results)
Test.record(ts, Test.Broken(:skipped, ti.name))
push!(ti.testsets, ts)
stats = PerfStats()
push!(ti.stats, stats)
log_testitem_skipped(ti, ctx.ntestitems)
return TestItemResult(ts, stats)
end


# assumes any required setups were expanded outside of a runtests context
function runtestitem(ti::TestItem; kw...)
# make a fresh TestSetupModules for each testitem run
Expand All @@ -879,6 +913,9 @@ function runtestitem(
ti::TestItem, ctx::TestContext;
test_end_expr::Expr=Expr(:block), logs::Symbol=:eager, verbose_results::Bool=true, finish_test::Bool=true,
)
if should_skip(ti)::Bool
return skiptestitem(ti, ctx; verbose_results)
end
name = ti.name
log_testitem_start(ti, ctx.ntestitems)
ts = DefaultTestSet(name; verbose=verbose_results)
44 changes: 28 additions & 16 deletions src/log_capture.jl
@@ -55,7 +55,7 @@ function _print_scaled_one_dec(io, value, scale, label="")
end
print(io, label)
end
function time_print(io; elapsedtime, bytes=0, gctime=0, allocs=0, compile_time=0, recompile_time=0)
function print_time(io; elapsedtime, bytes=0, gctime=0, allocs=0, compile_time=0, recompile_time=0)
_print_scaled_one_dec(io, elapsedtime, 1e9, " secs")
if gctime > 0 || compile_time > 0
print(io, " (")
Expand Down Expand Up @@ -241,35 +241,47 @@ function _print_test_errors(report_iob, ts::DefaultTestSet, worker_info)
return nothing
end

# Marks the start of each test item
function log_testitem_start(ti::TestItem, ntestitems=0)
io = IOContext(IOBuffer(), :color => get(DEFAULT_STDOUT[], :color, false)::Bool)
function print_state(io, state, ti, ntestitems; color=:default)
interactive = parse(Bool, get(ENV, "RETESTITEMS_INTERACTIVE", string(Base.isinteractive())))
print(io, format(now(), "HH:MM:SS | "))
!interactive && print(io, _mem_watermark())
printstyled(io, "START"; bold=true)
if ntestitems > 0
# rpad/lpad so that the eval numbers are all vertically aligned
printstyled(io, rpad(uppercase(state), 5); bold=true, color)
print(io, " (", lpad(ti.eval_number[], ndigits(ntestitems)), "/", ntestitems, ")")
else
printstyled(io, uppercase(state); bold=true)
end
print(io, " test item $(repr(ti.name)) at ")
print(io, " test item $(repr(ti.name)) ")
end

function print_file_info(io, ti)
print(io, "at ")
printstyled(io, _file_info(ti); bold=true, color=:default)
end

function log_testitem_skipped(ti::TestItem, ntestitems=0)
io = IOContext(IOBuffer(), :color => get(DEFAULT_STDOUT[], :color, false)::Bool)
print_state(io, "SKIP", ti, ntestitems; color=Base.warn_color())
print_file_info(io, ti)
println(io)
write(DEFAULT_STDOUT[], take!(io.io))
end

# Marks the start of each test item
function log_testitem_start(ti::TestItem, ntestitems=0)
io = IOContext(IOBuffer(), :color => get(DEFAULT_STDOUT[], :color, false)::Bool)
print_state(io, "START", ti, ntestitems)
print_file_info(io, ti)
println(io)
write(DEFAULT_STDOUT[], take!(io.io))
end

# mostly copied from timing.jl
function log_testitem_done(ti::TestItem, ntestitems=0)
io = IOContext(IOBuffer(), :color => get(DEFAULT_STDOUT[], :color, false)::Bool)
interactive = parse(Bool, get(ENV, "RETESTITEMS_INTERACTIVE", string(Base.isinteractive())))
print(io, format(now(), "HH:MM:SS | "))
!interactive && print(io, _mem_watermark())
printstyled(io, "DONE "; bold=true)
if ntestitems > 0
print(io, " (", lpad(ti.eval_number[], ndigits(ntestitems)), "/", ntestitems, ")")
end
print(io, " test item $(repr(ti.name)) ")
print_state(io, "DONE", ti, ntestitems)
x = last(ti.stats) # always print stats for most recent run
time_print(io; x.elapsedtime, x.bytes, x.gctime, x.allocs, x.compile_time, x.recompile_time)
print_time(io; x.elapsedtime, x.bytes, x.gctime, x.allocs, x.compile_time, x.recompile_time)
println(io)
write(DEFAULT_STDOUT[], take!(io.io))
end
28 changes: 23 additions & 5 deletions src/macros.jl
@@ -120,6 +120,7 @@ struct TestItem
setups::Vector{Symbol}
retries::Int
timeout::Union{Int,Nothing} # in seconds
skip::Union{Bool,Expr}
file::String
line::Int
project_root::String
@@ -131,10 +132,10 @@ struct TestItem
stats::Vector{PerfStats} # populated when the test item is finished running
scheduled_for_evaluation::ScheduledForEvaluation # to keep track of whether the test item has been scheduled for evaluation
end
function TestItem(number, name, id, tags, default_imports, setups, retries, timeout, file, line, project_root, code)
function TestItem(number, name, id, tags, default_imports, setups, retries, timeout, skip, file, line, project_root, code)
_id = @something(id, repr(hash(name, hash(relpath(file, project_root)))))
return TestItem(
number, name, _id, tags, default_imports, setups, retries, timeout, file, line, project_root, code,
number, name, _id, tags, default_imports, setups, retries, timeout, skip, file, line, project_root, code,
TestSetup[],
Ref{Int}(0),
DefaultTestSet[],
@@ -145,7 +146,7 @@ function TestItem(number, name, id, tags, default_imports, setups, retries, time
end

"""
@testitem "name" [tags=[] setup=[] retries=0 default_imports=true] begin
@testitem "name" [tags=[] setup=[] retries=0 skip=false default_imports=true] begin
# code that will be run as tests
end
@@ -228,13 +229,26 @@ Note that `timeout` currently only works when tests are run with multiple worker
@testitem "Sometimes too slow" timeout=10 begin
@test sleep(rand(1:100))
end
If a `@testitem` needs to be skipped, then you can set the `skip` keyword.
Either pass `skip=true` to unconditionally skip the test item, or pass `skip` an
expression that returns a `Bool` to determine if the testitem should be skipped.
@testitem "Skip on old Julia" skip=(VERSION < v"1.9") begin
v = [1]
@test 0 == @allocations sum(v)
end
The `skip` expression is run in its own module, just like a test-item.
No code inside a `@testitem` is run when a test-item is skipped.
"""
macro testitem(nm, exs...)
default_imports = true
retries = 0
timeout = nothing
tags = Symbol[]
setup = Any[]
skip = false
_id = nothing
_run = true # useful for testing `@testitem` itself
_source = QuoteNode(__source__)
@@ -257,12 +271,16 @@ macro testitem(nm, exs...)
setup = map(Symbol, setup.args)
elseif kw == :retries
retries = ex.args[2]
@assert retries isa Integer "`default_imports` keyword must be passed an `Integer`"
@assert retries isa Integer "`retries` keyword must be passed an `Integer`"
elseif kw == :timeout
t = ex.args[2]
@assert t isa Real "`timeout` keyword must be passed a `Real`"
@assert t > 0 "`timeout` keyword must be passed a positive number. Got `timeout=$t`"
timeout = ceil(Int, t)
elseif kw == :skip
skip = ex.args[2]
# If the `Expr` doesn't evaluate to a Bool, throws at runtime.
@assert skip isa Union{Bool,Expr} "`skip` keyword must be passed a `Bool`"
elseif kw == :_id
_id = ex.args[2]
# This will always be written to the JUnit XML as a String, require the user
@@ -287,7 +305,7 @@ macro testitem(nm, exs...)
ti = gensym(:ti)
esc(quote
let $ti = $TestItem(
$Ref(0), $nm, $_id, $tags, $default_imports, $setup, $retries, $timeout,
$Ref(0), $nm, $_id, $tags, $default_imports, $setup, $retries, $timeout, $skip,
$String($_source.file), $_source.line,
$gettls(:__RE_TEST_PROJECT__, "."),
$q,
27 changes: 27 additions & 0 deletions test/integrationtests.jl
@@ -1032,4 +1032,31 @@ end
@test_throws expected_err runtests(file; nworkers=1, memory_threshold=xx)
end

@testset "skipping testitems" begin
# Test report printing has test items as "skipped" (which appear under "Broken")
using IOCapture
file = joinpath(TEST_FILES_DIR, "_skip_tests.jl")
results = encased_testset(()->runtests(file; nworkers=1))
c = IOCapture.capture() do
Test.print_test_results(results)
end
@test contains(
c.output,
r"""
Test Summary: \s* \| Pass Fail Broken Total Time
ReTestItems \s* \| 4 1 3 8 \s*\d*.\ds
"""
)
end

@testset "logs are aligned" begin
file = joinpath(TEST_FILES_DIR, "_skip_tests.jl")
c1 = IOCapture.capture() do
encased_testset(()->runtests(file))
end
@test contains(c1.output, r"START \(1/6\) test item \"no skip, 1 pass\"")
@test contains(c1.output, r"DONE \(1/6\) test item \"no skip, 1 pass\"")
@test contains(c1.output, r"SKIP \(3/6\) test item \"skip true\"")
end

end # integrationtests.jl testset
44 changes: 43 additions & 1 deletion test/internals.jl
@@ -169,7 +169,7 @@ end # `include_testfiles!` testset
@testset "report_empty_testsets" begin
using ReTestItems: TestItem, report_empty_testsets, PerfStats, ScheduledForEvaluation
using Test: DefaultTestSet, Fail, Error
ti = TestItem(Ref(42), "Dummy TestItem", "DummyID", [], false, [], 0, nothing, "source/path", 42, ".", nothing)
ti = TestItem(Ref(42), "Dummy TestItem", "DummyID", [], false, [], 0, nothing, false, "source/path", 42, ".", nothing)

ts = DefaultTestSet("Empty testset")
report_empty_testsets(ti, ts)
@@ -281,4 +281,46 @@ end
@test_throws ArgumentError("\"$nontest_file\" is not a test file") _validated_paths((nontest_file,), true)
end

@testset "skiptestitem" begin
# Test that `skiptestitem` unconditionally skips a testitem
# and returns `TestItemResult` with a single "skipped" `Test.Result`
ti = @testitem "skip" _run=false begin
@test true
@test false
@test error()
end
ctx = ReTestItems.TestContext("test_ctx", 1)
ti_res = ReTestItems.skiptestitem(ti, ctx)
@test ti_res isa TestItemResult
test_res = only(ti_res.testset.results)
@test test_res isa Test.Result
@test test_res isa Test.Broken
@test test_res.test_type == :skipped
end

@testset "should_skip" begin
should_skip = ReTestItems.should_skip

ti = @testitem("x", skip=true, _run=false, begin end)
@test should_skip(ti)
ti = @testitem("x", skip=false, _run=false, begin end)
@test !should_skip(ti)

ti = @testitem("x", skip=:(1 == 1), _run=false, begin end)
@test should_skip(ti)
ti = @testitem("x", skip=:(1 != 1), _run=false, begin end)
@test !should_skip(ti)

ti = @testitem("x", skip=:(x = 1; x + x == 2), _run=false, begin end)
@test should_skip(ti)
ti = @testitem("x", skip=:(x = 1; x + x != 2), _run=false, begin end)
@test !should_skip(ti)

ti = @testitem("x", skip=:(x = 1; x + x), _run=false, begin end)
@test_throws "Test item \"x\" `skip` keyword must be a `Bool`, got `skip=2`" should_skip(ti)

ti = @testitem("x", skip=:(x = 1; x + y), _run=false, begin end)
@test_throws UndefVarError(:y) should_skip(ti)
end

end # internals.jl testset

2 comments on commit 3b8788d

@nickrobinson251
Collaborator Author


@JuliaRegistrator


Registration pull request created: JuliaRegistries/General/97366

Tip: Release Notes

Did you know you can add release notes too? Just add markdown-formatted text underneath the comment after the text
"Release notes:", and it will be added to the registry PR. If TagBot is installed, it will also be added to the
release that TagBot creates. For example:

@JuliaRegistrator register

Release notes:

## Breaking changes

- blah

To add them here, just re-invoke and the PR will be updated.

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or it can be done manually through the GitHub interface, or via:

git tag -a v1.23.0 -m "<description of version>" 3b8788d22631ebe0bff6a0916a5546eda8ccfc67
git push origin v1.23.0
