One item of feedback from JT-NM Tested August 2022 was that the results from the NMOS Testing Tool required several processing steps to collate and filter to produce the familiar tabular results from JT-NM Tested.
See https://static.jt-nm.org/documents/JT-NM_Tested_Catalog_NMOS_TR_Full-Online-2022-08.pdf
Some suggestions for improving automation were made:
- save JSON directly to the repo used to populate the Google Sheet (and maybe a PDF for the vendor)
- format the JSON for the JT-NM test plan
  - e.g. ordering/grouping of test cases (e.g. transmit vs. receive)
- process NMOS Testing Tool 'amber' results (warnings, etc.) according to the JT-NM test plan (see the note and sketch below)
- automatic verification that all test results are present, and comparison with pre-test results (see the post-processing sketch in the comment below)
> NOTE: Unless explicitly noted otherwise in the test plan, the testing tool needs to indicate the 'PASS' state for the test case. The test states 'FAIL', 'WARNING', 'NOT IMPLEMENTED' are NOT considered as a pass. (One general exception is that warnings about 'charset' will be marked as a pass.)
Approx. 7 test cases have notes explicitly indicating that a warning will be marked as a pass in the JT-NM Tested results.
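As a starting point for automating this, here's a minimal post-processing sketch. It assumes the JSON export from the testing tool has a `results` array whose entries carry `name`, `state` and `detail` fields, and that the set of test cases whose warnings the test plan accepts is maintained by hand; both the schema and the `WARNING_IS_PASS` list are assumptions for illustration, not something defined by the tool or the test plan itself.

```python
import json

# Test cases whose warnings the JT-NM test plan explicitly treats as a pass.
# Hypothetical, hand-maintained list -- the ~7 cases would need to be copied
# from the test plan notes.
WARNING_IS_PASS = set([
    # "IS-04-01 test_05",
])

def reclassify(results):
    """Downgrade 'amber' results to a pass where the JT-NM test plan allows it."""
    for result in results:
        state = result.get("state", "")
        detail = result.get("detail", "") or ""
        if state == "Warning" and (
            result.get("name") in WARNING_IS_PASS or "charset" in detail.lower()
        ):
            result["jtnm_state"] = "Pass"  # counts as a pass for JT-NM Tested
        else:
            result["jtnm_state"] = state
    return results

if __name__ == "__main__":
    # Filename is an assumption; use whatever the tool's JSON download is saved as.
    with open("nmos-test-results.json") as f:
        data = json.load(f)
    for r in reclassify(data["results"]):
        print(f'{r["name"]}: {r["jtnm_state"]}')
```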
Other questions arose with the testing tool config during JT-NM Tested. The test plan Appendix described how to set up the testing tool and included an example `UserConfig.py`, but some flexibility was allowed for some settings. Can we standardize timeouts for the different cases? What ranges are acceptable for JT-NM Tested? (E.g. an `HTTP_TIMEOUT` of 10 seconds shouldn't be needed.)
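For discussion, a sketch of what a pinned-down `UserConfig.py` could look like if JT-NM Tested agreed fixed values rather than ranges. The number below is a placeholder, not an agreed value; `HTTP_TIMEOUT` is the only setting named above, and any other overrides from `Config.py` would be added in the same way.

```python
# UserConfig.py -- overrides applied on top of the testing tool's Config.py.
# Sketch only: the value below is a placeholder for whatever JT-NM Tested agrees,
# not a recommendation.

# Per-request timeout in seconds; a large value such as 10 would need to be
# justified for a particular device rather than allowed by default.
HTTP_TIMEOUT = 1
```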
One simple approach to the problem of processing 'amber' results would be to downgrade each of the JT-NM-ignored warnings from the testing tool to pass-with-info...
Though for other aspects, automated post-processing is probably required anyway?
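Agreed that post-processing is needed for the rest. As a sketch of that step, the snippet below collates results into the section ordering used for the JT-NM tables (e.g. transmit vs. receive) and reports any expected test cases with no result. The `JTNM_SECTIONS` mapping, the JSON schema and the file names are assumptions for illustration only.

```python
import csv
import json

# Hand-maintained mapping from (suite, test case) to the JT-NM table section,
# e.g. transmit vs. receive. The entries here are hypothetical placeholders.
JTNM_SECTIONS = {
    ("IS-04-01", "test_01"): "Transmit",
    ("IS-05-01", "test_01"): "Receive",
}

def to_table(results, suite):
    """Collate results into rows grouped/ordered for the JT-NM tables,
    and report any expected test cases that have no result."""
    by_name = {r["name"]: r for r in results}
    rows, missing = [], []
    for (s, name), section in sorted(JTNM_SECTIONS.items(), key=lambda kv: kv[1]):
        if s != suite:
            continue
        result = by_name.get(name)
        if result is None:
            missing.append(name)
        else:
            rows.append([section, name, result.get("state", ""), result.get("detail", "")])
    return rows, missing

if __name__ == "__main__":
    # Filename and JSON schema are assumptions, as in the sketch above.
    with open("nmos-test-results.json") as f:
        data = json.load(f)
    rows, missing = to_table(data["results"], data.get("suite", ""))
    with open("jtnm-results.csv", "w", newline="") as f:
        csv.writer(f).writerows([["Section", "Test", "State", "Detail"]] + rows)
    if missing:
        print("Missing results for:", ", ".join(missing))
```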