Test Runner
The test runner is a script that runs your tests from the command line. It supports several options to run your tests in different ways as needed. The runner is located at app/spec.q within the framework source directory. I would recommend creating an alias or shell function to shorten the typing needed to use the test runner.
For example, alias testq='q /path/to/qspec/app/spec.q -q' should suffice if you are using a Unix shell.
The test runner requires at least one argument: the files to load and run tests from. Each argument can be either a specific q file or a directory containing at least one q file. When loading a directory, no particular order is observed, so if your test files have a dependency hierarchy, I would recommend providing only the top-level one. If you have a single source file that is intendedded to be an executable, you can test whether the .tst namespace is present and skip starting the "main loop" if it is (a more elaborate method of signaling that tests are to be run may be added later, depending on user feedback).
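A minimal sketch of that check (an assumption about how you might structure it; .app.main is a hypothetical entry point standing in for your script's main loop):

```q
/ At the bottom of an executable script: start the main loop only
/ when the .tst namespace (the test framework) has not been loaded
if[not `tst in key `; .app.main[]]
```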
In the following examples, we will look at output from running the test framework’s test suite.
Given only tests to run and code, with completely successful tests, the runner will produce output like this:
$ testq lib/init.q test
.......................................................
For 12 specifications, 55 expectations were run.
55 passed, 0 failed. 0 errors.
If some tests fail, you will see output about them:
$ testq lib/init.q test
...E...................................................
Error Assertions::
- should report only thrown exceptions that were not supposed to have been thrown:
Failure: Expected 0 to be equal to 1
4 assertions were run.
Before code:
{`oldFailures mock .tst.assertState.failures;
}
Test code:
{mustthrow["foo";{'"foo"}];
mustnotthrow["foo";{'"bar"}];
testedFailures: .tst.assertState.failures;
.tst.assertState.failures:oldFailures;
first[testedFailures] mustlike "*to not throw the error 'foo'*";
count[testedFailures] musteq 1;
}
For 12 specifications, 55 expectations were run.
54 passed, 1 failed. 0 errors.
The desc (or describe) option causes tests not to actually be run; instead, a report of all specifications and expectations is printed to the console. This can be useful for getting a quick report on what tests will actually be run, or for a status update on what your test coverage is like.
$ testq --desc lib/init.q test
Assertions::
- should increment the assertions run counter by one
- should attach failure messages to the failures lists
Error Assertions::
- should catch errors
- should report only thrown exceptions that were not supposed to have been thrown
- should report only unthrown exceptions that were supposed to have been thrown
Running an Expectation::
- should call the main expectation function
- should call the before function before calling the main expectation function
- should call the after function after calling the main expectation function
- should make assertions available to be used within the expectation
- should execute the expectation in the correct context
- should restore mocked values after all expectation functions have executed
- should prevent errors from escaping when running the expectation
- should call the expecRan callback with the results of running the expectation
...
If for some reason you don't want to run a certain specification, perhaps because the feature it tests is isolated from what you are currently working on and the test takes a very long time to run, you can exclude it with the exclude option. This option accepts several like patterns that are matched against the description of each specification. Currently, the patterns cannot include spaces. This option can be combined with desc or describe to only print some descriptions:
$ testq --desc lib/init.q test/ --exclude "*Fixtures* *Assertion* *Expectation* *expectations* *Loading* *Generator*"
Mocking::
- should assign the given value to the named variable
- should assign to a non-fully qualifeid name with respect to the current context
- should backup a variable if it already exists
- should remove any variables that did not originally exist when all variables are restored
- should refuse to mock a top level namespace
Running a specification should::
- should set the correct context and the correct filepath for its expectations
- should restore any partitioned directories that were loaded
- should restore the context and filepath to what they previously were
- should pass only if all expectations passed
- should work with an empty expectation list
The Testing UI::
- should let you create specifications
- should cause specifications to assume the context that they were defined in
- should call the loadDesc callback when a new specification is defined
- should let you set a before function
- should let you set an after function
- should let you create an expectation
- should let you create a fuzz expectation
- should let you mask before and after functions inside of alternate blocks
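Each pattern is matched against the specification descriptions with q's like operator; a small sketch of the matching semantics, run in a q session:

```q
q)"Error Assertions" like "*Assertion*"
1b
q)"Mocking" like "*Assertion*"
0b
```

Any specification whose description yields 1b for at least one pattern is excluded.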
The only option allows you to explicitly specify which specifications to run. If a pattern is present in both exclude and only, the exclusion takes precedence:
$ testq --desc lib/init.q test/ --only "*Mocking*"
Mocking::
- should assign the given value to the named variable
- should assign to a non-fully qualifeid name with respect to the current context
- should backup a variable if it already exists
- should remove any variables that did not originally exist when all variables are restored
- should refuse to mock a top level namespace
$ testq --desc lib/init.q test/ --only "*Mocking*" --exclude "*Mocking*"
The fail-hard option executes the test runner as usual, except that as soon as an error or failure is detected, no further tests are run and the state of the failing test is recreated up to the point where the error occurred. For example, if an error occurred inside the main body of a test, all the mocked values and fixtures loaded within the before block will be present and available, and the assertions and other test helpers will be available to run with a simple copy and paste.
The fail-fast option executes the test runner as usual, except that as soon as an error or failure is detected, no further tests are run. The main way this differs from fail-hard is that the test runner usually exits immediately (unless noquit is enabled). This can be useful for short-circuiting a long series of tests during automated build verification.
The noquit option changes the normal behavior of the test runner. If present, after the tests have run, you can continue to interact with the q console. This can be particularly useful if you would like to replicate and introspect a failing test.
The pass option stops the test runner from printing any output to the console and causes it to return only an exit status to the command line: 0 if the tests passed, 1 if they failed. The test runner returns this value whether or not pass is present, but when it is, no other output is printed.
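This makes the runner easy to use as a CI gate. A hypothetical sketch of branching on that exit status; here a run_tests stub (using false to simulate a failing run) stands in for the real testq --pass invocation, which is not available in this sketch:

```shell
#!/bin/sh
# Stand-in for: testq --pass lib/init.q test/
# `false` exits with status 1, simulating a failed suite; swap in the
# real command in an actual build script.
run_tests() { false; }

# exit status 0 means all tests passed, 1 means failures or errors
if run_tests; then
    echo "tests passed"
else
    echo "tests failed"
fi
```

Because the status is returned regardless of the pass option, the same branching works with full output enabled.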
This option controls the number of failing inputs shown when a fuzz test fails. The default is 10.