# Replay Testing

A ROS2-based framework for configuring, authoring, and running replay tests.

Features include:
- MCAP replay and automatic recording of assets for offline review
- Built-in `unittest` support for MCAP assertions
- Parametric sweeps
- Easy-to-use CMake integration for running in CI
- A lightweight CLI for quick local runs

## What is Replay Testing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop it on your local machine than on the robot. If you record the data on-robot first and then replay it locally, you get the best of both worlds!
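
For example, you might capture the inputs on the robot with rosbag2's MCAP storage plugin, then point a replay test at the resulting file (the topic name here is illustrative):

```
# On the robot: record the topics your nodes consume, using MCAP storage
ros2 bag record -s mcap -o my_data /vehicle/cmd_vel
```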

All robotics developers use replay testing in one form or another. This package simply wraps many of the common conventions into an easy-to-use executable.

## Usage

### CLI

```
ros2 run replay_testing replay_test [REPLAY_TEST_PATH]
```

For other args:
```
ros2 run replay_testing replay_test --help
```

### `colcon test` and CMake

This package exposes CMake helpers you can use for running replay tests as part of your own package's testing pipeline.

To use:

```cmake
find_package(replay_testing REQUIRED)

# ...

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()
```

If you've set up your CI to persist artifact paths under `test_results`, you should see a `*.xunit.xml` file produced based on the `REPLAY_TEST_PATH` you provided.
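
With that wired up, a typical CI invocation looks something like this (package name hypothetical):

```
colcon build --packages-select my_package
colcon test --packages-select my_package
colcon test-result --all --verbose   # summarizes results, including the replay test's xunit output
```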

## Authoring Replay Tests

Each replay test is authored in its own file, like `my_replay_test.py`. We expose a set of Python decorators that you use to wrap each class in your test.

Replay testing has three distinct phases, **all of which are required to run a replay test**:

### Fixtures `@fixtures`

Collects and prepares your fixtures to be run against your launch specification. Duties include:
- Providing a mechanism for specifying your input fixtures (e.g. `lidar_data.mcap`)
- Filtering out any expected output topics that will be produced by the `run` step
- Producing a `filtered_fixture.mcap` asset that is used in the `run` step
- Asserting that the specified input topics are present
- (Eventually) Providing ways to make your old data forward compatible with updates to your robotics stack

Here is how you use it:

```python
@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class Fixtures:
    input_topics = ["/vehicle/cmd_vel"]
    output_topics = ["/user/cmd_vel"]
```

### Run `@run`

Specify a launch description that will run against the replayed fixture. Usage:

```python
@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
```

If you'd like to specify a parameter sweep, you can use the variant:
```python
@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams  # Keyed by `name`
    ) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
```

Parameterizing your `run` will result in the `analyze` step being run once per parameter set, as sketched below.
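
For instance, a sweep over two gain values might look like the following. This is a minimal sketch: the parameter name, the node under test, and the way the launch description forwards `replay_run_params.params` are illustrative assumptions, not part of the `replay_testing` API.

```python
# Assumes: from launch_ros.actions import Node
@run.parameterize(
    [
        ReplayRunParams(name="low_gain", params={"gain": 0.5}),   # hypothetical param
        ReplayRunParams(name="high_gain", params={"gain": 2.0}),  # hypothetical param
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams
    ) -> LaunchDescription:
        # Each named run gets its own ReplayRunParams; here we forward the
        # swept values to a hypothetical node under test as ROS parameters.
        return LaunchDescription(
            [
                Node(
                    package="my_package",    # hypothetical package
                    executable="my_node",    # hypothetical executable
                    parameters=[replay_run_params.params],
                )
            ]
        )
```

Here the `analyze` step would execute twice: once against the MCAP produced by the `low_gain` run and once against `high_gain`.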

### Analyze `@analyze`

The analyze step runs after the MCAP from the `run` step is recorded and written. It is a basic wrapper over `unittest.TestCase`, so any `unittest` assertions are built in.

It also wraps an initialized MCAP reader, `self.reader` ([MCAP docs](https://mcap.dev/docs/python/mcap-ros2-apidoc/mcap_ros2.reader)), that you can use to assert against expected message output.

Example:

```python
@analyze
class Analyze:
    def test_cmd_vel(self):
        msgs_it = mcap_ros2.reader.read_ros2_messages(
            self.reader, topics=["/user/cmd_vel"]
        )

        msgs = [msg for msg in msgs_it]
        assert len(msgs) == 1
        assert msgs[0].channel.topic == "/user/cmd_vel"
```

### Full Example

```python
from replay_testing import (
    fixtures,
    run,
    analyze,
    McapFixture,
)
from launch import LaunchDescription
from launch.actions import ExecuteProcess

import mcap_ros2.reader


@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class Fixtures:
    input_topics = ["/vehicle/cmd_vel"]
    output_topics = ["/user/cmd_vel"]


@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(
            [
                ExecuteProcess(
                    cmd=[
                        "ros2",
                        "topic",
                        "pub",
                        "/user/cmd_vel",
                        "geometry_msgs/msg/Twist",
                        "{linear: {x: 1.0}, angular: {z: 0.5}}",
                    ],
                    name="topic_pub",
                    output="screen",
                )
            ]
        )


@analyze
class AnalyzeBasicReplay:
    def test_cmd_vel(self):
        msgs_it = mcap_ros2.reader.read_ros2_messages(
            self.reader, topics=["/user/cmd_vel"]
        )

        msgs = [msg for msg in msgs_it]
        assert len(msgs) == 1
        assert msgs[0].channel.topic == "/user/cmd_vel"
```

## Reviewing MCAP from Replay Tests

If you'd like to view replay results directly in tools like Foxglove, `replay_testing` will produce and print a result directory under `/tmp/replay_testing`. Example:

```
/tmp/replay_testing/a00a98aa-7f24-45c6-9299-b6232dcd842d/cmd_vel_only/runs/default
```

The GUID here is dynamically generated, and within that directory you can find all of your run results under the `runs` subdirectory.

## FAQ

> Why MCAP?

We've built most of our internal tooling around Foxglove, which supports MCAP best. The Foxglove team has published a robust set of libraries for writing and reading MCAP that we've used successfully here.

> Can this package support other forms of recorded data? E.g. `*.db3`

Certainly open to it!