Introduce EventsExecutor implementation #1391
base: rolling
Conversation
Draft form of this feature to drive discussion only. I've done some minimal work on this relative to the original code to get things in roughly the right shape; however, the functional test currently seems to explode, and a bunch of the linters are unhappy. Even once this nominally works, there's a fair bit more work to be done on it, starting with the removal of the boost dependency.
Signed-off-by: Brad Martin <[email protected]>
The previous version worked on 22.04+Iron, but fails on 24.04+Jazzy or Rolling. It seems like the behavior of std::bind() may have changed when asked to bind a py::object to a callback taking a py::handle. Signed-off-by: Brad Martin <[email protected]>
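(A minimal sketch of the lambda-capture pattern that avoids leaning on std::bind() conversions, assuming a hypothetical HandleEvent() callee; only the pybind11 types and functions are real here.)

```cpp
#include <functional>

#include <pybind11/pybind11.h>

namespace py = pybind11;

// Hypothetical callee that only borrows the Python object it is given.
void HandleEvent(py::handle obj)
{
  py::print(obj);
}

std::function<void()> MakeCallback(py::object obj)
{
  // Instead of std::bind(&HandleEvent, obj), which relies on the stored
  // py::object converting to the py::handle parameter when invoked, capture
  // the owning py::object by value and pass it explicitly. The lambda keeps a
  // Python reference alive for as long as the callback exists.
  return [obj]() { HandleEvent(obj); };
}
```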
force-pushed from 039907a to 18036c4
Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
force-pushed from 6a7c1e7 to 68ecfb4
Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
clang-format fixes things that uncrustify tends to leave alone, but then uncrustify seems slightly unhappy with that result. Also reflow comments at 100 columns. This uses the .clang-format file from the ament-clang-format package, with the exception of SortIncludes being set to false, because I didn't like what it tried to do with includes without that override. Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
Tests that currently work are enabled, and those that don't are annotated with what needs to be done before they can be enabled. Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
if (next_call_ns <= 0) {
  // This just notifies RCL that we're considering the timer triggered, for the purposes of
  // updating the next trigger time.
  const auto ret = rcl_timer_call(timer_cb_pair.first);
This happens directly before the execution of the timer in the C++ version. I wonder if doing it here introduces some unwanted behavior...
None that I've encountered yet? It just seemed important to get started on the next interval as quickly as possible to keep the timer rate from drifting due to the latency in dispatching the timer. Are there any specific issues you suspect with this?
I remember some test failing around this detail in my implementations; sadly I don't remember the specifics. I guess the behavior around cancel is broken now.
E.g.:
- A callback taking some time is running.
- Timer becomes ready, is added to the ready queue.
- Some other callback cancels the timer.
- Timer is executed anyway.
Is there a check for whether the timer was canceled directly before the execution that I missed?
The interval should not drift if this call is issued later. The reason for this is that the next trigger time is computed as:
while (nextTime < now)
{
    nextTime = nextTime + period;
}
I think there is another bug lurking here. If a timer takes longer than its period, your queue will fill up. You need to block the timer from recomputing the next execution time until it has been executed.
Ah, gotcha. Yeah, it won't drift, but it will potentially miss whole intervals; perhaps that was intended, per your concern about queuing multiple calls.
As far as canceling, I'm not sure even SingleThreadedExecutor is safe from races on that front:
- I think a cancel between the is_timer_ready() check in _wait_for_ready_callbacks() and the call_timer_with_info() call in _take_timer() will cause an uncaught RCLError exception to be thrown:
  File "/home/brad/rclpy_ws/install/rclpy/lib/python3.12/site-packages/rclpy/executors.py", line 380, in _take_timer
    info = tmr.handle.call_timer_with_info()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  rclpy._rclpy_pybind11.RCLError: failed to call timer: timer is canceled, at ./src/rcl/timer.c:268
- A cancel any time after call_timer_with_info() will not prevent the callback already in motion from eventually being delivered.
I agree that spacing out the rcl_timer_call() from the callback dispatch does make this race more likely to hit, though. I'll go ahead and rearrange that per the other concerns above.
I think this should be resolved now. Both the rcl_timer_call() and the user callback now occur from the same asio::post() event.
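(A minimal sketch of what posting both steps as one event might look like, with a canceled-timer guard added to illustrate the earlier concern; this assumes standalone asio, and every name other than asio::post(), rcl_timer_call(), and rcl_timer_is_canceled() is made up, so it is not necessarily how the PR itself is structured.)

```cpp
#include <functional>
#include <utility>

#include <asio/io_context.hpp>
#include <asio/post.hpp>
#include <rcl/timer.h>

// Hypothetical helper: queue a single handler that advances the rcl timer
// state and then runs the user callback, so a cancel handled before this
// event runs can suppress both together.
void PostTimerDispatch(
  asio::io_context & io, rcl_timer_t * timer, std::function<void()> user_callback)
{
  asio::post(io, [timer, cb = std::move(user_callback)]() {
    bool canceled = false;
    if (rcl_timer_is_canceled(timer, &canceled) != RCL_RET_OK || canceled) {
      return;  // Timer was canceled (or the query failed); drop the dispatch.
    }
    (void)rcl_timer_call(timer);  // Update the next trigger time in rcl...
    cb();                         // ...then immediately deliver the user callback.
  });
}
```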
}
it->second->RemoveTimer(timer);
if (it->second->empty()) {
  clock_managers_.erase(it);
Hm, isn't there a race here, where the asio callback can still fire into the now-deleted clock_manager?
I think the timer's own callback is safe, because it calls a user callback directly without capturing the clock manager itself. However, it may be possible for HandleJump() and UpdateTimers() to get called with a deleted this pointer, if you remove the last timer from a clock immediately after a callback was posted, and before the callback is dispatched. I'll make a note to look at this closer later.
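(Not necessarily how the PR ended up addressing it, but a common guard for this kind of lifetime race, sketched here assuming the clock manager instances are held by std::shared_ptr; all names are illustrative except the asio and standard-library calls.)

```cpp
#include <asio/io_context.hpp>
#include <asio/post.hpp>
#include <memory>

class ClockManager : public std::enable_shared_from_this<ClockManager>
{
public:
  explicit ClockManager(asio::io_context & io) : io_(io) {}

  void ScheduleUpdateTimers()
  {
    // Capture a weak_ptr instead of a raw 'this'; if removing the last timer
    // destroys this manager before the handler runs, lock() fails and the
    // handler becomes a no-op instead of touching freed memory.
    asio::post(io_, [weak = weak_from_this()]() {
      if (auto self = weak.lock()) {
        self->UpdateTimers();
      }
    });
  }

private:
  void UpdateTimers() { /* ... */ }

  asio::io_context & io_;
};
```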
I think this is resolved now.
// objects you're working with. We'll try to figure out what kind of stuff is hiding behind the
// abstraction by having the Waitable add itself to a wait set, then take stock of what all ended
// up there. We'll also have to hope that no Waitable implementations ever change their component
// entities over their lifetimes.
Hm, the rclcpp layer explicitly forces the waitables to implement event handling. I don't know much about the rclpy waitable internals, but the approach taken here would break the rclcpp waitables, so we should check this... A good candidate to have a look at would be the action interfaces.
There are test cases in test_events_executor.py specifically for actions, so I know the rclpy Waitable type works, unless you think there's something specific missing from that test coverage.
Can you clarify what it is you're looking for here? Are you wanting me to modify the Waitable interface to better support callbacks directly? That's probably a bit more ambitious and invasive than I was hoping to be, but I can look into it if you think it's necessary.
It was being used only inconsistently, and on reflection it wasn't adding any protection beyond what the GIL already provides. Signed-off-by: Brad Martin <[email protected]>
Needed to release the GIL while blocking on another lock. Signed-off-by: Brad Martin <[email protected]>
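(A minimal illustration of the pattern that commit describes, using pybind11's py::gil_scoped_release; the mutex and function names here are made up.)

```cpp
#include <mutex>

#include <pybind11/pybind11.h>

namespace py = pybind11;

std::mutex callbacks_mutex;  // Hypothetical lock guarding executor state.

void DoBlockingWork()
{
  // Drop the GIL before blocking on the C++ mutex. If another thread holds
  // callbacks_mutex while waiting to acquire the GIL, keeping the GIL here
  // would deadlock.
  py::gil_scoped_release release;
  std::lock_guard<std::mutex> lock(callbacks_mutex);
  // ... work that does not touch Python objects ...
}
```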
Signed-off-by: Brad Martin <[email protected]>
Never bool-test a py::object::attr() return value without an explicit py::cast<bool>. Signed-off-by: Brad Martin <[email protected]>
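(For illustration: py::object's boolean conversion only reports whether the wrapped pointer is non-null, so an object holding Python False still tests true; the attribute name here is hypothetical.)

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

void CheckFlag(py::object obj)
{
  // Hypothetical Python attribute holding a bool value.
  py::object flag = obj.attr("some_flag");

  // Wrong: this only checks that 'flag' wraps a non-null PyObject*, so the
  // branch is taken even when the Python value is False.
  if (flag) {
    // ...
  }

  // Right: explicitly convert the Python value to a C++ bool.
  if (py::cast<bool>(flag)) {
    // ...
  }
}
```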
Signed-off-by: Brad Martin <[email protected]>
force-pushed from 29c5109 to 3502872
EventsExecutor was exploding because the test was leaving destroyed entities in the node by using an incorrect API to destroy them. Signed-off-by: Brad Martin <[email protected]>
…om it
* Test methods need to be prefixed with 'test_' in order to function. This had been entirely dead code, and when I enabled it EventsExecutor deadlocked.
* On reflection, it seems like a deadlock is exactly what should be expected from what this test is doing. Remove EventsExecutor from the test and document the problem.
Signed-off-by: Brad Martin <[email protected]>
force-pushed from 0373a8c to 776a1a3
Co-authored-by: Janosch Machowinski <[email protected]> Signed-off-by: Brad Martin <[email protected]>
This both prevents an issue where user callbacks could potentially queue up if they didn't dispatch fast enough, and ensures that a timer that has passed its expiration without being dispatched yet can still be canceled without the user callback being delivered later. This commit also makes use of the new rcl API for querying the absolute timer expiration time, instead of the relative number of nanoseconds remaining until it expires. This should both make things more accurate, and potentially reduce overhead as we don't have to re-query the current time for each outstanding timer. Signed-off-by: Brad Martin <[email protected]>
…ckManager Signed-off-by: Brad Martin <[email protected]>
Signed-off-by: Brad Martin <[email protected]>
This both reduces duplicate code now and simplifies the asio interface used, which would need replacing. Signed-off-by: Brad Martin <[email protected]>
This method can't be allowed to leave its Future done callback outstanding in case the method is returning for a reason other than the Future being done. Signed-off-by: Brad Martin <[email protected]>
Closes #1389