
Speed up Gazebo SDF installation in CI workflows #403

Open
wants to merge 1 commit into main

Conversation

flferretti
Collaborator

@flferretti flferretti commented Apr 8, 2025

This pull request includes changes to the CI/CD workflow configuration file to update the installation process for Gazebo and its dependencies.

Changes to CI/CD workflow:

  • .github/workflows/ci_cd.yml: Updated the commands to add the Gazebo repository and its key, and modified the package installation process to include libsdformat14-dev instead of gz-ionic and sdf.

📚 Documentation preview 📚: https://jaxsim--403.org.readthedocs.build//403/
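
For context, a minimal sketch of what such an installation step could look like in a GitHub Actions workflow; the step name, repository URL layout, and keyring path are illustrative assumptions, not copied from the actual diff:

```yaml
# Hypothetical sketch of the updated install step; the real commands in
# .github/workflows/ci_cd.yml may differ in the details.
- name: Install sdformat from the Gazebo apt repository
  run: |
    # Add the OSRF signing key and the Gazebo package repository.
    sudo wget https://packages.osrfoundation.org/gazebo.gpg \
      -O /usr/share/keyrings/pkgs-osrf-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/pkgs-osrf-archive-keyring.gpg] http://packages.osrfoundation.org/gazebo/ubuntu-stable $(lsb_release -cs) main" \
      | sudo tee /etc/apt/sources.list.d/gazebo-stable.list > /dev/null
    # Install only the SDFormat development package instead of the full
    # gz-ionic metapackage.
    sudo apt-get update
    sudo apt-get install -y --no-install-recommends libsdformat14-dev
```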

@flferretti flferretti self-assigned this Apr 8, 2025

@github-actions github-actions bot left a comment


Benchmark

| Benchmark suite | Current: 9fc838f | Previous: b8dc880 | Ratio |
|---|---|---|---|
| tests/test_benchmark.py::test_forward_dynamics_aba[1] | 30.45431723617565 iter/sec (stddev: 0.00014903937319438995) | 30.43094994884611 iter/sec (stddev: 0.0002354202407734067) | 1.00 |
| tests/test_benchmark.py::test_forward_dynamics_aba[128] | 14.487057735233533 iter/sec (stddev: 0.0002740891145887525) | 14.575836005798429 iter/sec (stddev: 0.0006585891972580001) | 1.01 |
| tests/test_benchmark.py::test_free_floating_bias_forces[1] | 24.64024359997299 iter/sec (stddev: 0.0002761856178812413) | 25.397133741737715 iter/sec (stddev: 0.0009491392636666373) | 1.03 |
| tests/test_benchmark.py::test_free_floating_bias_forces[128] | 10.520226991748462 iter/sec (stddev: 0.000612239742685594) | 10.6816914285496 iter/sec (stddev: 0.000517398924164968) | 1.02 |
| tests/test_benchmark.py::test_forward_kinematics[1] | 71.24360128062833 iter/sec (stddev: 0.00020961016795191656) | 72.03167875049594 iter/sec (stddev: 0.00009500412749925384) | 1.01 |
| tests/test_benchmark.py::test_forward_kinematics[128] | 23.270686262928642 iter/sec (stddev: 0.00032762942303246183) | 23.471106849285086 iter/sec (stddev: 0.00020376718898495997) | 1.01 |
| tests/test_benchmark.py::test_free_floating_mass_matrix[1] | 38.29954315857371 iter/sec (stddev: 0.0002557530839168373) | 38.675730942178994 iter/sec (stddev: 0.00010568383460247941) | 1.01 |
| tests/test_benchmark.py::test_free_floating_mass_matrix[128] | 37.81409056929418 iter/sec (stddev: 0.0002781760908299701) | 38.2550466367494 iter/sec (stddev: 0.00015319005592225342) | 1.01 |
| tests/test_benchmark.py::test_free_floating_jacobian[1] | 46.91836546435265 iter/sec (stddev: 0.00013836526784276483) | 48.19269568026804 iter/sec (stddev: 0.00008858560640105389) | 1.03 |
| tests/test_benchmark.py::test_free_floating_jacobian[128] | 48.13922595576763 iter/sec (stddev: 0.0001275815140420415) | 49.04617743615522 iter/sec (stddev: 0.00007703519636250519) | 1.02 |
| tests/test_benchmark.py::test_free_floating_jacobian_derivative[1] | 28.848191808301532 iter/sec (stddev: 0.00024182056897584733) | 29.346998764148726 iter/sec (stddev: 0.00015369239727139257) | 1.02 |
| tests/test_benchmark.py::test_free_floating_jacobian_derivative[128] | 29.059321114645346 iter/sec (stddev: 0.00017056492756835668) | 29.579412781307177 iter/sec (stddev: 0.0001491117343376076) | 1.02 |
| tests/test_benchmark.py::test_soft_contact_model[1] | 26.922947884003573 iter/sec (stddev: 0.00025774343556890245) | 27.180213298960158 iter/sec (stddev: 0.00011167425098739633) | 1.01 |
| tests/test_benchmark.py::test_soft_contact_model[128] | 13.382194446144847 iter/sec (stddev: 0.0010709504037412331) | 13.438532944177156 iter/sec (stddev: 0.0007602145408073091) | 1.00 |
| tests/test_benchmark.py::test_rigid_contact_model[1] | 5.663474186425651 iter/sec (stddev: 0.0012416078847196643) | 5.826319682602947 iter/sec (stddev: 0.0005824582164795953) | 1.03 |
| tests/test_benchmark.py::test_rigid_contact_model[128] | 0.8318150436333397 iter/sec (stddev: 0.0018900045498410943) | 0.8342140152755144 iter/sec (stddev: 0.0033817016292866774) | 1.00 |
| tests/test_benchmark.py::test_relaxed_rigid_contact_model[1] | 5.267519237042927 iter/sec (stddev: 0.000597300390032508) | 5.166702718309451 iter/sec (stddev: 0.00044048150996140167) | 0.98 |
| tests/test_benchmark.py::test_relaxed_rigid_contact_model[128] | 2.6482161653710388 iter/sec (stddev: 0.0005415088282890713) | 3.1864271451250157 iter/sec (stddev: 0.001222744051181486) | 1.20 |
| tests/test_benchmark.py::test_simulation_step[1] | 4.125313463009654 iter/sec (stddev: 0.0007392002198967209) | 4.16523976071087 iter/sec (stddev: 0.0005173472764501817) | 1.01 |
| tests/test_benchmark.py::test_simulation_step[128] | 2.1352912145974816 iter/sec (stddev: 0.004754775693441531) | 2.4639733087509925 iter/sec (stddev: 0.0009650346967322344) | 1.15 |

This comment was automatically generated by workflow using github-action-benchmark.

@flferretti flferretti marked this pull request as ready for review April 8, 2025 14:27
@flferretti flferretti requested a review from xela-95 as a code owner April 8, 2025 14:27
@traversaro
Contributor

Am I missing something, or are the tests not passing in this PR? To be honest, I guess you also need libgz-tools2 or a similar package; otherwise the gz command (for which sdformat installs a plugin implementing gz sdf) is not installed.
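
For illustration, the hypothetical install step sketched in the description above could be extended so that the gz CLI is also available; the package name below is taken from this comment and is an assumption that should be verified against the Gazebo apt repository:

```yaml
    # Hypothetical: also install the package providing the `gz` command,
    # so that `gz sdf` (the plugin installed by sdformat) can be invoked.
    # The exact package name (libgz-tools2 vs. gz-tools2) is an assumption.
    sudo apt-get install -y --no-install-recommends \
      libsdformat14-dev \
      libgz-tools2
```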

@flferretti
Collaborator Author

flferretti commented Apr 8, 2025

> Am I missing something, or are the tests not passing in this PR? To be honest, I guess you also need libgz-tools2 or a similar package; otherwise the gz command (for which sdformat installs a plugin implementing gz sdf) is not installed.

The tests are indeed passing, but merging is blocked since two approving reviews are needed. Or at least, that is what I see.

@traversaro
Contributor

Sorry, I wrote "tests are not passing", but what I meant to write was "tests are not running at all".

@flferretti
Collaborator Author

flferretti commented Apr 8, 2025

> Sorry, I wrote "tests are not passing", but what I meant to write was "tests are not running at all".

Oh I see, sorry. That is because the change touches a folder that is filtered out of the CI, i.e. some tests do not run if the change does not affect certain folders. See https://github.com/ami-iit/jaxsim/blob/main/.github/workflows/ci_cd.yml#L103-L118
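
For reference, one common pattern for this kind of path-based filter in GitHub Actions looks like the following; this is a generic sketch, not necessarily the exact mechanism used in jaxsim's ci_cd.yml:

```yaml
# Generic example: only trigger the test jobs when the pull request
# touches something other than documentation-related paths.
on:
  pull_request:
    paths-ignore:
      - "docs/**"
      - "**.md"
```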

@traversaro
Contributor

Not urgent (as is this PR in general), but can we either get rid of it (I do not think the time gained on docs-only PRs is worth the risk of accidentally merging a PR in which tests are not running in a non-obvious way) or, if we really need that filter, make it more robust?

@flferretti
Collaborator Author

> Not urgent (as is this PR in general), but can we either get rid of it (I do not think the time gained on docs-only PRs is worth the risk of accidentally merging a PR in which tests are not running in a non-obvious way) or, if we really need that filter, make it more robust?

I definitely agree with getting rid of it

@flferretti
Collaborator Author


Done in #404
