The `FunctionsBindings` is designed for per-scenario function instantiation. Although this provides isolation between tests, it also tends to discourage large numbers of tests because it slows things down—we spin up a new function host for each scenario. (And since we currently have no support for dynamic port allocation when testing functions, there's no way to parallelize these tests despite the isolation that this model in principle offers.)
So in practice it's fairly common for tests to want to spin up a function at the SpecFlow feature level. There are two significant problems with this:
1. Tests have to roll their own support for this
2. We lose all standard output and standard error reporting
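To illustrate the first problem, here's a minimal sketch of the kind of feature-level hook tests currently have to roll for themselves. The `FunctionsController` member names (`StartFunctionsInstance`, `TeardownFunctions`) and the project name are assumptions about the API, shown here only to convey the shape of the workaround:

```csharp
using System.Threading.Tasks;
using TechTalk.SpecFlow;

[Binding]
public static class FeatureLevelFunctionHooks
{
    // One host shared by every scenario in the feature, instead of one per scenario.
    private static FunctionsController? functionsController;

    [BeforeFeature("usesDemoFunction")]
    public static async Task StartFunctionHostAsync()
    {
        // Hypothetical API: start the function host once for the whole feature.
        functionsController = new FunctionsController();
        await functionsController.StartFunctionsInstance("Demo.Functions", port: 7071);
    }

    [AfterFeature("usesDemoFunction")]
    public static void StopFunctionHost()
    {
        // Tear the shared host down when the last scenario in the feature completes.
        functionsController?.TeardownFunctions();
    }
}
```

Every test project that wants feature-level hosting ends up duplicating some variant of this boilerplate, which is exactly the first problem described above.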
That second issue is mostly down to a design problem with the common test tooling in .NET: there's no good way to report problems that happen at a scope larger than a single test, with the result that pretty much all .NET test frameworks handle this poorly.
However, we could still do better than we do today. If the `FunctionsController` were to offer a way to access the "output so far", it would be possible for failing tests to report the entire output from the function. While this is likely to include a lot of irrelevant detail (because it will include output for all tests run so far, not just the failing one) and will include a lot of repetition if multiple scenarios within a feature fail, it is significantly better than having no output at all, which is where we are today. (And in the happy path in which all tests pass, we wouldn't need to show the output, so most of the time it would make no difference.)
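A minimal sketch of what the proposed "output so far" accessor could look like: the controller appends everything the function host writes to stdout/stderr into a thread-safe buffer, and a failing scenario can then dump the whole buffer. All names here (`FunctionOutputBuffer`, `GetOutputSoFar`) are hypothetical, not the existing API:

```csharp
using System.Text;

// Sketch: accumulates everything the function host has written so far,
// so a failing test can report the full output to date.
public sealed class FunctionOutputBuffer
{
    private readonly StringBuilder buffer = new StringBuilder();
    private readonly object sync = new object();

    // Called from the host process's stdout/stderr data-received handlers.
    public void Append(string line)
    {
        lock (this.sync)
        {
            this.buffer.AppendLine(line);
        }
    }

    // The proposed accessor: all output captured since the host started.
    public string GetOutputSoFar()
    {
        lock (this.sync)
        {
            return this.buffer.ToString();
        }
    }
}
```

An `[AfterScenario]` hook could then check whether the scenario failed (e.g. via `ScenarioContext.TestError`) and, only in that case, write `GetOutputSoFar()` to the test output, which keeps the happy path quiet.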