
Improve per-feature function hosting #50

Open
idg10 opened this issue Feb 12, 2020 · 0 comments
Labels
enhancement New feature or request


idg10 commented Feb 12, 2020

The FunctionsBindings class is designed for per-scenario function instantiation. Although this provides isolation between tests, it also tends to discourage large numbers of tests because it slows things down: we spin up a new function host for each scenario. (And since we currently have no support for dynamic port allocation when testing functions, there's no way to parallelize these tests, despite the isolation that this model offers in principle.)

So in practice it's fairly common for tests to want to spin up a function at the SpecFlow feature level. There are two significant problems with this:

  1. Tests have to roll their own support for this
  2. We lose all standard output and standard error reporting
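To make the first problem concrete, this is roughly the kind of per-feature hosting code each test project currently has to write for itself. This is a sketch only, assuming SpecFlow's `[BeforeFeature]`/`[AfterFeature]` hooks; the member names shown on `FunctionsController` (`StartFunctionsInstanceAsync`, `TeardownFunctions`) are illustrative assumptions, not a confirmed API:

```csharp
using System.Threading.Tasks;
using Corvus.Testing.AzureFunctions;
using TechTalk.SpecFlow;

[Binding]
public static class PerFeatureFunctionHost
{
    private static FunctionsController? functionsController;

    [BeforeFeature("usesDemoFunction")]
    public static async Task StartFunctionAsync()
    {
        functionsController = new FunctionsController();

        // Fixed port: because there is no dynamic port allocation,
        // features that use the same port cannot run in parallel.
        // (Project path, port, and runtime values here are hypothetical.)
        await functionsController.StartFunctionsInstanceAsync(
            "DemoFunction", 7075, "netcoreapp3.1");
    }

    [AfterFeature("usesDemoFunction")]
    public static void StopFunction()
    {
        functionsController?.TeardownFunctions();
        functionsController = null;
    }
}
```

Every project that wants feature-level hosting ends up reinventing a variation of this, which is exactly the duplication this issue proposes to eliminate.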

That second issue is mostly down to design problems in .NET unit test frameworks: there's no good way to report problems that occur at a scope larger than a single test. The design of the common test tooling contributes to this, with the result that pretty much all .NET test frameworks handle it poorly.

However, we could still do better than we do today. If the FunctionsController were to offer a way to access the "output so far", it would be possible for failing tests to report the entire output from the function. While this is likely to include a lot of irrelevant detail (because it will include output for all tests run so far, not just the failing one) and will include a lot of repetition if multiple scenarios within a feature fail, it is significantly better than having no output at all, which is where we are today. (And in the happy path in which all tests pass, we wouldn't need to show the output, so most of the time it would make no difference.)
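The "output so far" capability could look something like the following sketch: a thread-safe buffer attached to the function host process's stdout/stderr streams, exposing everything captured since startup. The type and member names here are hypothetical, not part of the existing FunctionsController API; only the `System.Diagnostics.Process` events are real:

```csharp
using System.Diagnostics;
using System.Text;

// Hypothetical helper: accumulates the function host's console output so
// that a failing test can dump everything captured so far.
public sealed class FunctionOutputBuffer
{
    private readonly StringBuilder buffer = new StringBuilder();
    private readonly object sync = new object();

    // Wire up to the host process before calling BeginOutputReadLine /
    // BeginErrorReadLine on it.
    public void Attach(Process hostProcess)
    {
        hostProcess.OutputDataReceived += (_, e) => this.Append(e.Data);
        hostProcess.ErrorDataReceived += (_, e) => this.Append(e.Data);
    }

    private void Append(string? line)
    {
        if (line is null)
        {
            return;
        }

        lock (this.sync)
        {
            this.buffer.AppendLine(line);
        }
    }

    // The "output so far". Note this spans every scenario run against this
    // host instance, not just the failing one, which is the source of the
    // irrelevant detail and repetition described above.
    public string GetOutputSoFar()
    {
        lock (this.sync)
        {
            return this.buffer.ToString();
        }
    }
}
```

A test hook could then call `GetOutputSoFar()` only when a scenario fails, keeping the happy path quiet.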

@idg10 idg10 added the enhancement New feature or request label Feb 12, 2020