Commit 5bf2833

committed
Replay Testing MVP
1 parent f26ea01 commit 5bf2833

34 files changed: +1857 -2 lines changed

.flake8 (+2)

@@ -0,0 +1,2 @@
[flake8]
max-line-length = 79

.gitignore (+5)

@@ -0,0 +1,5 @@
.pytest_cache
**__pycache__**

*.egg-info
build

.gitlab-ci.yml (+34)

@@ -0,0 +1,34 @@
variables:
  REPOS_FILE: repos.yaml
  VCS_ARGS: --recursive
  PACKAGE_NAME: replay_testing

include:
  - project: 'polymathrobotics/ci/ci_templates'
    ref: main
    file: '/ros/ros2_package.impl.yml'
  - project: 'polymathrobotics/ci/ci_templates'
    ref: main
    file: '/ros/ros2_container/containerize.impl.yml'
  - project: 'polymathrobotics/ci/ci_templates'
    ref: main
    file: '/docker-bake/bake_with_vcs_import_arm64.impl.yml'
  - project: 'polymathrobotics/ci/ci_templates'
    ref: main
    file: '/common/rules.yml'
  - project: 'polymathrobotics/ci/ci_templates'
    ref: main
    file: '/common/stages.yml'

build_and_test_replay_testing:
  extends: .ros2_build_and_test

eval_replay_testing:
  extends: .ros2_evaluate
  needs:
    - job: build_and_test_replay_testing
      artifacts: true
  artifacts:
    reports:
      junit: $ARTIFACTS_PATH/test_results/test_results/$PACKAGE_NAME/*.xml

CMakeLists.txt (+48)

@@ -0,0 +1,48 @@
cmake_minimum_required(VERSION 3.8)
project(replay_testing)

# Find dependencies
find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
find_package(rclpy REQUIRED)

include(cmake/install_mcap.cmake)
install_mcap()

# Install Python modules and entry points
ament_python_install_package(${PROJECT_NAME})

# Install data files (e.g., test fixtures)
install(DIRECTORY test/fixtures/
  DESTINATION share/${PROJECT_NAME}/test/fixtures
)

install(DIRECTORY test/replay_tests/
  DESTINATION share/${PROJECT_NAME}/test/replay_tests
)

install(PROGRAMS replay_testing/cli.py
  DESTINATION lib/${PROJECT_NAME}
  RENAME replay_test
)

# Expose cmake modules for use by others
install(
  DIRECTORY cmake
  DESTINATION share/${PROJECT_NAME}
)

# Define test dependencies
if(BUILD_TESTING)
  include(cmake/add_replay_test.cmake)

  find_package(ament_cmake_pytest REQUIRED)
  ament_add_pytest_test(pytest ${CMAKE_CURRENT_SOURCE_DIR}/test)

  # Dogfood it!
  add_replay_test(test/replay_tests/basic_replay.py)
endif()

# Package ament metadata
ament_package()

README.md (+199 -2)

@@ -1,2 +1,199 @@
-# replay_testing
-A testing library and CLI for replaying ROS nodes.

# Replay Testing

A ROS 2 framework for configuring, authoring, and running replay tests.

Features include:
- MCAP replay and automatic recording of assets for offline review
- Baked-in `unittest` support for MCAP assertions
- Parametric sweeps
- Easy-to-use CMake for running in CI
- A lightweight CLI for quick local runs

## What is Replay Testing?

Replay testing is simply a way to replay previously recorded data into your own set of ROS nodes. When you are iterating on a piece of code, it is typically much easier to develop on your local machine than on the robot. If you can record that data on the robot first and then replay it locally, you get the best of both worlds!

All robotics developers use replay testing in one form or another. This package just wraps many of those conventions into an easy-to-use executable.

## Usage

### CLI

```
ros2 run replay_testing replay_test [REPLAY_TEST_PATH]
```

For other args:

```
ros2 run replay_testing replay_test --help
```
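
For example, to run the basic replay test that this repo dogfoods in its own CMakeLists.txt:

```
ros2 run replay_testing replay_test test/replay_tests/basic_replay.py
```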

### `colcon test` and CMake

This package exposes CMake you can use to run replay tests as part of your own package's testing pipeline.

To use:

```cmake
find_package(replay_testing REQUIRED)

# ...

if(BUILD_TESTING)
  add_replay_test([REPLAY_TEST_PATH])
endif()
```

If you've set up your CI to persist artifact paths under `test_results`, you should see a `*.xunit.xml` file produced based on the `REPLAY_TEST_PATH` you provided.
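
Once registered, the replay test runs alongside the rest of your package's tests via standard colcon commands (the package name here is a placeholder):

```
colcon test --packages-select your_package
colcon test-result --verbose
```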

## Authoring Replay Tests

Each replay test can be authored in its own file, like `my_replay_test.py`. We expose a set of Python decorators that you use to wrap each class in your test.

Replay testing has three distinct phases, **all of which are required to run a replay test**:

### Fixtures `@fixtures`

Collects and prepares your fixtures to be run against your launch specification. Duties include:
- Provides a mechanism for specifying your input fixtures (e.g. `lidar_data.mcap`)
- Filters out any expected output topics that will be produced by the `run` step
- Produces a `filtered_fixture.mcap` asset that is used in the `run` step
- Asserts that the specified input topics are present
- (Eventually) Provides ways to make your old data forwards compatible with updates to your robotics stack

Here is how you use it:

```python
@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class Fixtures:
    input_topics = ["/vehicle/cmd_vel"]
    output_topics = ["/user/cmd_vel"]
```

### Run `@run`

Specify a launch description that will run against the replayed fixture. Usage:

```python
@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
```

If you'd like to specify a parameter sweep, you can use the variant:

```python
@run.parameterize(
    [
        ReplayRunParams(name="name_of_your_test", params={..}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams  # Keyed by `name`
    ) -> LaunchDescription:
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
```

Parameterizing your `run` will result in the `analyze` step being run once per parameter set, as in the sketch below.
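
For illustration, here is a minimal sketch of a two-run sweep. The parameter names and values are invented; the keys in `params` are whatever your launch description consumes:

```python
from launch import LaunchDescription
from replay_testing import ReplayRunParams, run


@run.parameterize(
    [
        # Hypothetical parameter sets; each `name` labels one run.
        ReplayRunParams(name="slow_gain", params={"gain": 0.5}),
        ReplayRunParams(name="fast_gain", params={"gain": 2.0}),
    ]
)
class Run:
    def generate_launch_description(
        self, replay_run_params: ReplayRunParams
    ) -> LaunchDescription:
        # Configure nodes per run using replay_run_params.params.
        return LaunchDescription(" YOUR LAUNCH DESCRIPTION ")
```

Here, `analyze` runs twice: once against the MCAP recorded for `slow_gain` and once for `fast_gain`.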

### Analyze `@analyze`

The analyze step runs after the MCAP from the `run` step is recorded and written. It is a basic wrapper over `unittest.TestCase`, so any `unittest` assertions are built in.

It also wraps an initialized MCAP reader `self.reader` ([MCAP docs](https://mcap.dev/docs/python/mcap-ros2-apidoc/mcap_ros2.reader)) that you can use to assert against expected message output.

Example:

```python
@analyze
class Analyze:
    def test_cmd_vel(self):
        msgs_it = mcap_ros2.reader.read_ros2_messages(
            self.reader, topics=["/user/cmd_vel"]
        )

        msgs = [msg for msg in msgs_it]
        assert len(msgs) == 1
        assert msgs[0].channel.topic == "/user/cmd_vel"
```
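
You can also assert on decoded message fields. A sketch, assuming the `ros_msg` attribute that the `mcap_ros2` reader exposes on each yielded message, and the Twist published in the full example below:

```python
@analyze
class AnalyzeTwistValues:
    def test_cmd_vel_values(self):
        msgs = list(
            mcap_ros2.reader.read_ros2_messages(
                self.reader, topics=["/user/cmd_vel"]
            )
        )
        assert len(msgs) == 1
        # ros_msg is the decoded geometry_msgs/msg/Twist
        assert msgs[0].ros_msg.linear.x == 1.0
        assert msgs[0].ros_msg.angular.z == 0.5
```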

### Full Example

```python
from replay_testing import (
    fixtures,
    run,
    analyze,
    McapFixture,
)
from launch import LaunchDescription
from launch.actions import ExecuteProcess

import mcap_ros2.reader


@fixtures.parameterize([McapFixture(path="/tmp/mcap/my_data.mcap")])
class Fixtures:
    input_topics = ["/vehicle/cmd_vel"]
    output_topics = ["/user/cmd_vel"]


@run.default()
class Run:
    def generate_launch_description(self) -> LaunchDescription:
        return LaunchDescription(
            [
                ExecuteProcess(
                    cmd=[
                        "ros2",
                        "topic",
                        "pub",
                        "/user/cmd_vel",
                        "geometry_msgs/msg/Twist",
                        "{linear: {x: 1.0}, angular: {z: 0.5}}",
                    ],
                    name="topic_pub",
                    output="screen",
                )
            ]
        )


@analyze
class AnalyzeBasicReplay:
    def test_cmd_vel(self):
        msgs_it = mcap_ros2.reader.read_ros2_messages(
            self.reader, topics=["/user/cmd_vel"]
        )

        msgs = [msg for msg in msgs_it]
        assert len(msgs) == 1
        assert msgs[0].channel.topic == "/user/cmd_vel"
```

## Reviewing MCAP from Replay Tests

If you'd like to view the replay results directly in tools like Foxglove, `replay_testing` will produce and print the result directory under `/tmp/replay_testing`. Example:

```
/tmp/replay_testing/a00a98aa-7f24-45c6-9299-b6232dcd842d/cmd_vel_only/runs/default
```

The GUID here is dynamically generated, and within that directory you can find all of your run results under the `runs` subdirectory.
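
To poke at a run's artifacts from the shell (the path follows the example above; `ros2 bag info` can summarize an MCAP recording when rosbag2's mcap storage plugin is installed):

```
# List everything the run produced
ls /tmp/replay_testing/a00a98aa-7f24-45c6-9299-b6232dcd842d/cmd_vel_only/runs/default

# Summarize a recorded MCAP before opening it in Foxglove
ros2 bag info <path-to-recorded-mcap>
```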

## FAQ

> Why MCAP?

We've built most of our internal tooling around Foxglove, which supports MCAP best. The Foxglove team has published a robust set of libraries for writing and reading MCAP that we've used successfully here.

> Can this package support other forms of recorded data, e.g. `*.db3`?

Certainly open to it!
