bug when running multiple tests on the same example & multicore #202
@Mathadon: Wouldn't it be easier to have one example extend the other in order to create two tests? Results of the unit tests are usually listed by the model name; at least this is how JModelica, OpenModelica and BuildingsPy separate their tests and list their results.
@mwetter Ok, I'll fix it that way. But then we should add a check to buildingspy that does not allow two unit tests for the same model?
Would that be a BuildingsPy-specific specification, or would it be possible to have a specification that all test tools can agree on?
@Mathadon: Such a test would be good to have. @thorade: I am not aware of a concerted effort for such a test specification that includes tool vendors. I would start with a vendor annotation of the type __BuildingsPy, then add information that is common to all tools and sections that are specific to a tool. We could easily parse such a vendor annotation with https://github.com/lbl-srg/modelica-json.
@mwetter I will make a pull request.
In IDEAS we run two unit tests on the same example model, using two separate .mos scripts and two separate result .mat files (as required by BuildingsPy). Travis runs these tests on two cores, so each process runs one of the two unit tests that depend on the same example. Both tests write to IDEAS.BoundaryConditions.Examples.SimInfoManager.translation.log. This file name is not unique, so the file becomes corrupted (it does not contain all required statistics). The Python script does not detect the corrupted file, and the tests then fail because some statistics appear in the old results but not in the new ones. I propose to fix this by changing the script so that a unique translation log file name is generated, based on the .mat result file name (see the sketch below).
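A minimal sketch of the proposed naming scheme, assuming the log name is currently derived from the model name alone. The helper name and the example .mat file names are hypothetical, not the actual BuildingsPy API; each unit test already declares its own result file, which is unique per test, so deriving the log name from it avoids the collision.

```python
import os

def unique_translation_log_name(mat_result_file):
    """Derive a translation log file name from the .mat result file name,
    so that two tests of the same model write to different log files.
    (Hypothetical helper; not the current BuildingsPy implementation.)
    """
    base = os.path.splitext(os.path.basename(mat_result_file))[0]
    return base + ".translation.log"

# Two tests of IDEAS.BoundaryConditions.Examples.SimInfoManager with
# different result files would no longer collide on one log file:
print(unique_translation_log_name("SimInfoManagerTest1.mat"))
# -> SimInfoManagerTest1.translation.log
print(unique_translation_log_name("SimInfoManagerTest2.mat"))
# -> SimInfoManagerTest2.translation.log
```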