It's becoming increasingly difficult to properly test Stump in a way that ensures each iteration or new feature doesn't introduce regressions. Besides the (ever-improving) unit test suite, full end-to-end tests have been done by hand, which is both time-consuming and error-prone. The latest 0.0.8 release is a prime example: it shipped with bad OPDS-related regressions that went unnoticed for multiple weeks.
This issue is to propose and discuss some ideas for automating the testing of core features so that regressions are caught early and often. The workflow I have in mind is as follows:
1. We already build an amd64 docker image as part of the CI pipeline. We should extend this with an additional step that deploys the image locally and properly tears it down after the tests are done (a sketch of that glue follows this list)
2. An end-to-end test suite is created that runs against the local deployment and focuses on essential UI->backend interactions, e.g. using Playwright (see the UI test sketch below)
3. An end-to-end test suite is created that runs against the local deployment and focuses on essential backend assertions and/or backend-specific features, e.g. OPDS feeds (see the request-level sketch below)
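For step 1, one way the deploy/teardown glue could look as Playwright global setup/teardown, rather than extra CI YAML. Everything here is an assumption for illustration: the `stump-ci` image tag, the `stump-e2e` container name, port 10801, and the `/api/v1/ping` health endpoint would all need to match whatever the pipeline actually produces.

```ts
// global-setup.ts — a sketch, not the actual pipeline config.
// Assumes the CI job already built and tagged an image as `stump-ci`
// and that the server answers on port 10801.
import { execSync } from 'node:child_process';

const CONTAINER = 'stump-e2e';
const PORT = 10801;

export default async function globalSetup() {
  // Start the freshly built image in the background; --rm cleans it up on stop
  execSync(`docker run -d --rm --name ${CONTAINER} -p ${PORT}:${PORT} stump-ci`);

  // Poll until the server is reachable so tests don't race the startup.
  // The /api/v1/ping health endpoint is an assumption.
  const deadline = Date.now() + 60_000;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(`http://localhost:${PORT}/api/v1/ping`);
      if (res.ok) return;
    } catch {
      // server not up yet, keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, 1_000));
  }
  throw new Error('Stump container did not become healthy in time');
}
```

```ts
// global-teardown.ts — stopping is enough since the container ran with --rm
import { execSync } from 'node:child_process';

export default async function globalTeardown() {
  execSync('docker stop stump-e2e');
}
```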
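For step 2, a rough idea of the shape a UI->backend smoke test could take. The `/login` route, field labels, and credentials below are hypothetical placeholders, not Stump's actual markup:

```ts
// smoke.spec.ts — a sketch of a UI->backend smoke test.
// Routes, labels, and credentials are assumptions about the UI.
import { test, expect } from '@playwright/test';

test('user can log in and see their libraries', async ({ page }) => {
  await page.goto('http://localhost:10801/login');

  await page.getByLabel('Username').fill('test-user');
  await page.getByLabel('Password').fill('password');
  await page.getByRole('button', { name: /sign in/i }).click();

  // A successful login exercises the auth + libraries endpoints under
  // the hood — assert on the visible result rather than the network.
  await expect(page.getByRole('heading', { name: /libraries/i })).toBeVisible();
});
```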
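For step 3, the same Playwright harness can skip the browser entirely and use its `request` fixture for backend-only assertions. The `/opds/v1.2/catalog` path and basic-auth credentials below are assumptions about where the feed lives:

```ts
// opds.spec.ts — backend-level assertion against the OPDS catalog.
// The feed path and credentials are assumptions, not confirmed routes.
import { test, expect } from '@playwright/test';

test('OPDS catalog feed is well-formed', async ({ request }) => {
  const res = await request.get('http://localhost:10801/opds/v1.2/catalog', {
    headers: {
      Authorization: 'Basic ' + Buffer.from('test-user:password').toString('base64'),
    },
  });

  expect(res.ok()).toBeTruthy();
  // OPDS 1.x feeds are Atom XML — even a cheap structural check would
  // catch the kind of serialization regression that slipped into 0.0.8
  expect(res.headers()['content-type']).toContain('xml');
  expect(await res.text()).toContain('<feed');
});
```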
The first two steps are the most important, as they will catch the most common regressions. The third step is more of a nice-to-have: if the first two are designed well, they should catch most backend regressions as well.
Edit to add that testing through docker might add a layer of complexity that perhaps is not necessary, and it might be better for velocity to just kick off a local instance of the server through cargo.
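If the cargo route wins out, Playwright's built-in `webServer` option could own the server lifecycle directly, with no docker glue at all. The `-p stump_server` package name and the health URL below are guesses at the workspace layout, not confirmed names:

```ts
// playwright.config.ts — sketch of running the server via cargo instead
// of docker. Package name and health endpoint are assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  webServer: {
    command: 'cargo run -p stump_server',
    url: 'http://localhost:10801/api/v1/ping', // assumed health endpoint
    timeout: 300_000, // cold cargo builds can take a while in CI
    reuseExistingServer: !process.env.CI,
  },
  use: {
    baseURL: 'http://localhost:10801',
  },
});
```

Playwright starts the command before the suite, waits for the URL to respond, and kills the process afterwards, which covers the deploy/teardown concern from step 1 for free.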