Testing #61
Use the actual `run_awsm` command in test suite throughout all cases.
Refactor test helper to compare the netCDF test variables and use numpy unit test methods.
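For reference, a minimal sketch of the comparison helper described above, assuming `netCDF4` and `numpy.testing`; the file paths, class name, and variable name are hypothetical, not the repository's actual ones.

```python
import unittest

import netCDF4 as nc
import numpy as np


class TestOutputVariables(unittest.TestCase):
    """Sketch: compare netCDF output variables against a gold file."""

    GOLD_FILE = 'gold/snow.nc'    # hypothetical paths
    TEST_FILE = 'output/snow.nc'

    def compare_variable(self, variable):
        # Read the same variable from both files and require exact equality.
        with nc.Dataset(self.GOLD_FILE) as gold, \
                nc.Dataset(self.TEST_FILE) as test:
            np.testing.assert_equal(
                test.variables[variable][:],
                gold.variables[variable][:],
                err_msg=f"Variable {variable} did not match with gold image"
            )

    def test_snow_depth(self):
        self.compare_variable('thickness')  # hypothetical variable name
```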
```python
    test_data,
    gold_data,
    err_msg=f"Variable {variable} did not match with gold image"
)
```
Are 7 digits enough?
We've had a problem with `assert_almost_equal` from numpy. There is both a relative and an absolute tolerance parameter that has to be adjusted for each variable, in my experience. This is why we've stuck with the very strict must-be-equal check; then we will know immediately if something changes.
From the doc: "Check that the shape is equal and all elements of these objects are equal. NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions."
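To illustrate the trade-off being discussed, a hedged sketch contrasting the strict check with a tolerance-based one; the arrays and tolerances are made up for demonstration, not taken from the test suite.

```python
import numpy as np

gold = np.array([1.0, 2.0, np.nan])
test = np.array([1.0, 2.0000001, np.nan])

# Strict check: matching NaNs pass, but the 1e-7 drift fails immediately.
try:
    np.testing.assert_equal(test, gold)
except AssertionError:
    print("assert_equal flags the change right away")

# Tolerance-based check: passes here, but rtol/atol would have to be
# tuned per variable, which is the maintenance burden mentioned above.
np.testing.assert_allclose(test, gold, rtol=1e-5, atol=0.0)
```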
Looks like there is an error in one of the tests now.
Hm, I am getting a different error locally. Let me investigate.
Use the same approach as in `test_awsm.py` to store paths and configuration.
Chopping some things up again. I will request another review and highlight the changes soon.
Some spring cleaning ...
No need to call this with every test case run. Also add an option to skip clearing of output folders.
Use the `AWSMTestCase` base class to be able to use the `test_dir` and reduce duplicated logic.
Move reading of the `test_base_config` to the `TestConfiguration`. This is the only place where it will be used.
Each test case now specifies its output dir to clean up after each run via `run_dir`. Under that, a folder 'output' will be looked up and its files and folders removed.
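A minimal sketch of the base-class arrangement these commits describe, assuming `unittest`; everything except the `run_dir` and `AWSMTestCase` names is an assumption about the implementation, not the actual code.

```python
import os
import shutil
import unittest


class AWSMTestCase(unittest.TestCase):
    """Sketch: shared test directory and per-run output cleanup."""

    # Absolute path of the test folder, shared by all test cases.
    test_dir = os.path.abspath(os.path.dirname(__file__))
    # Each subclass points this at its own run directory.
    run_dir = None

    @classmethod
    def tearDownClass(cls):
        cls.cleanup()

    @classmethod
    def cleanup(cls, skip=False):
        """Remove files and folders under '<run_dir>/output' unless skipped."""
        if skip or cls.run_dir is None:
            return
        output = os.path.join(cls.run_dir, 'output')
        for entry in os.listdir(output):
            path = os.path.join(output, entry)
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
```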
Looks like it's still failing on Travis.
Saw that. They do pass on my local machine. Wonder what could be different on Travis.
The SMRF PR that was merged yesterday probably broke it.
Reflow `setup.py` (PEP 8) and remove the `can_i_run_awsm` method. The method was defined as a wrapper to return `True` on success and can be replaced by testing the actual return value of `run_awsm`, which is `None`.

Established an `AWSMTestCase` class. This class should help read configurations via absolute paths from the `test` folder. Also moved the cleanup method there, so future test cases can use it as well. Hopefully this change to use class inheritance makes the classes easier to read and also speeds up writing more.
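A short sketch of that replacement, assuming `run_awsm` is importable and returns `None` on success as stated above; the import path and config file are hypothetical.

```python
from awsm.framework.framework import run_awsm  # import path assumed


class TestModelRun(AWSMTestCase):
    def test_run_awsm(self):
        # run_awsm returns None on success, so a plain assertIsNone
        # replaces the old can_i_run_awsm wrapper.
        self.assertIsNone(run_awsm('tests/config.ini'))  # hypothetical config
```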