bug when running multiple tests on the same example & multicore #202

Open
Mathadon opened this issue May 19, 2018 · 5 comments · May be fixed by #209

Comments

@Mathadon
Contributor

Mathadon commented May 19, 2018

In IDEAS we run two unit tests on the same example model, using two separate .mos scripts and two separate result .mat files (as required by BuildingsPy). Travis runs these tests on two cores, and each process runs one of the two unit tests that depend on the same example. Both tests write to IDEAS.BoundaryConditions.Examples.SimInfoManager.translation.log. Because this file name is not unique, the file becomes corrupted (it does not contain all required statistics). The Python script does not detect the corrupted file, and the tests then fail, since some statistics that appeared in the old results no longer appear in the new results.

I propose to fix this by changing the script so that a unique translation log file is generated, based on the .mat result file name. Would that work?
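Roughly along these lines (the function and result file names below are only illustrative, not the actual BuildingsPy internals):

```python
import os

def unique_translation_log_name(mat_result_file):
    """Derive the translation log name from the .mat result file name,
    so that two tests of the same model write to different log files.
    (Hypothetical helper, for illustration only.)"""
    base, _ = os.path.splitext(os.path.basename(mat_result_file))
    return base + ".translation.log"

# Both tests translate IDEAS.BoundaryConditions.Examples.SimInfoManager,
# but each has its own result file, so the log names no longer collide:
print(unique_translation_log_name("SimInfoManager_test1.mat"))  # SimInfoManager_test1.translation.log
print(unique_translation_log_name("SimInfoManager_test2.mat"))  # SimInfoManager_test2.translation.log
```

That way the two processes never write to the same log file, even when they translate the same model.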

@mwetter
Member

mwetter commented May 20, 2018

@Mathadon : Wouldn't it be easier to have one example extend the other in order to create two tests? Results of the unit tests are usually listed by model name; at least this is how JModelica, OpenModelica and BuildingsPy separate their tests and list their results.
Long term, we probably want to move towards a specification of the test in the .mo file rather than the .mos file, which would also be easier if there is one test per .mo file.

@Mathadon
Contributor Author

@mwetter OK, I'll fix it that way. But should we then add a check to BuildingsPy that does not allow two unit tests for the same model?
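Something like the following could do it (a sketch only; how the model names are collected from the .mos scripts is omitted, and the function name is made up):

```python
from collections import Counter

def assert_one_test_per_model(model_names):
    """Fail if any model is exercised by more than one unit test.
    (Sketch; collecting the model names from the .mos scripts is not shown.)"""
    duplicates = sorted(name for name, n in Counter(model_names).items() if n > 1)
    if duplicates:
        raise ValueError(
            "Each model may only be used by one unit test, but found "
            "multiple tests for: " + ", ".join(duplicates))

# This would have caught the situation described above:
assert_one_test_per_model([
    "IDEAS.BoundaryConditions.Examples.SimInfoManager",
    "IDEAS.BoundaryConditions.Examples.SimInfoManager",
])  # raises ValueError
```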

@thorade
Contributor

thorade commented May 20, 2018

> Long term, we probably want to move towards a specification of the test in the .mo file rather than the .mos file, which would also be easier if there is one test per .mo file.

Would that be a BuildingsPy-specific specification or would it be possible to have a specification that all test tools can agree on?

@mwetter
Member

mwetter commented May 21, 2018

@Mathadon : Such a test would be good to have.

@thorade : I am not aware of a concerted effort for such a test specification that includes the tool vendors. I would start with a vendor annotation of the form __BuildingsPy, add information that is common to all tools, and add sections that are specific to each tool. We could easily parse such a vendor annotation with https://github.com/lbl-srg/modelica-json.
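For illustration only, a check could read the JSON that modelica-json produces and look for such an annotation; the JSON layout and key names below are purely hypothetical assumptions, not the actual modelica-json output format:

```python
import json

def read_buildingspy_test_spec(json_file):
    """Return the hypothetical __BuildingsPy test specification from the
    JSON representation of a .mo file. The keys used here are assumptions
    for illustration only."""
    with open(json_file) as f:
        model = json.load(f)
    spec = model.get("annotation", {}).get("__BuildingsPy", {})
    common = spec.get("common", {})   # settings shared by all tools
    tools = spec.get("tools", {})     # tool-specific sections, e.g. "Dymola"
    return common, tools
```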

@Mathadon
Contributor Author

@mwetter I will make a pull request.

Mathadon added a commit to Mathadon/BuildingsPy that referenced this issue May 22, 2018
mwetter linked a pull request May 24, 2018 that will close this issue