It might be nice to encourage unit tests in studies and provide some tooling for it.
In the hypertension study (and the still-WIP data-metrics study), I've got a little test harness method that runs a study on some NDJSON and compares the resulting tables against some CSVs.
Super basic, but pretty useful. We could copy that into the Library as a starting block.
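As a rough illustration of what that harness does, here's a minimal sketch of the comparison half: load expected rows from a reference CSV and check them against a table a study run produced. This is a hypothetical stand-in (the function name, and the use of sqlite3 as the backing store, are assumptions for illustration, not the actual harness code):

```python
import csv
import sqlite3


def table_matches_csv(conn: sqlite3.Connection, table: str, csv_path: str) -> bool:
    """Compare a SQL table's rows (as strings) against a reference CSV.

    The CSV's header row names the columns to select, and row order is
    ignored on both sides so the comparison is order-insensitive.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        expected = sorted(tuple(row) for row in reader)

    cols = ", ".join(header)
    cur = conn.execute(f"SELECT {cols} FROM {table}")  # test-only; not hardened SQL
    actual = sorted(tuple(str(value) for value in row) for row in cur)
    return actual == expected
```

A generalized runner could call something like this once per expected-output CSV after executing the study.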
Don't prescribe a test framework other than what's in the standard lib, but hooks for test frameworks are fine and should be set up in a reusable way.
The method reference in the above comment is a good starting point for a generalized, holistic runner
It may also be nice to provide a version of the mock_db/mock_db_config functions in the library as part of an importable testing module for more targeted/incremental testing
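To make the idea concrete, the importable-module version of mock_db might look something like this context-manager sketch. Everything here is an assumption for illustration (the signature, the schema argument, and the sqlite3 backend stand in for whatever the real mock_db/mock_db_config helpers do):

```python
import sqlite3
from contextlib import contextmanager


@contextmanager
def mock_db(schema_sql: str = ""):
    """Yield a throwaway in-memory database for a single test.

    Optionally pre-loads a schema so targeted tests can set up only
    the tables they care about.
    """
    conn = sqlite3.connect(":memory:")
    try:
        if schema_sql:
            conn.executescript(schema_sql)
        yield conn
    finally:
        conn.close()
```

Studies could then import it and write small, incremental tests against just the tables they need, rather than standing up a full raw-resource load.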
Matt and I just had a quick Slack convo about this, and I wanted to mention that we're both leaning away from exposing a way to create raw resource tables (like the Library testbed or the data-metrics harness do) and toward mocking/creating the core tables directly if we can.
That's certainly faster, and it also makes it easier to mock edge cases directly.
Maybe its API would look similar to the testbed class (e.g. add_patient()), but it would operate on the core tables instead of the raw ones.
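A hypothetical sketch of that shape, just to show the difference from the testbed: rows go straight into a core table rather than through raw FHIR resources. The class name, the core__patient table, and its columns are all made-up placeholders here, not the Library's actual schema:

```python
import sqlite3


class CoreTableBuilder:
    """Build core tables directly for a test, testbed-style."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        # Placeholder schema: a tiny stand-in for a real core patient table.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS core__patient "
            "(id TEXT PRIMARY KEY, birth_date TEXT, gender TEXT)"
        )

    def add_patient(
        self,
        patient_id: str,
        birth_date: str = "1970-01-01",
        gender: str = "unknown",
    ) -> "CoreTableBuilder":
        """Insert one patient row; returns self so calls can be chained."""
        self.conn.execute(
            "INSERT INTO core__patient VALUES (?, ?, ?)",
            (patient_id, birth_date, gender),
        )
        return self
```

Each add_* call maps to exactly one core-table row, which is what makes edge cases (odd birth dates, missing genders, etc.) easy to mock directly.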