add @background_with_channel
#3197
base: main
Conversation
I'm not too sold on how some of this is done, but at least now it shouldn't fail CI. edit: I'm also pretty sure the more correct way to fix the race condition would be using […]
I tracked down the race condition to […]. We still want to use […]. The reason it works to move the […]
…add buffer_size tests, remove unused code
Codecov Report — Attention: Patch coverage is 98.09524%.

❌ Your patch check has failed because the patch coverage (98.09524%) is below the target coverage (100.00000%). You can increase the patch coverage or adjust the target coverage.

```diff
@@              Coverage Diff               @@
##                  main       #3197        +/- ##
====================================================
- Coverage   100.00000%   99.98941%   -0.01059%
====================================================
  Files             124          124
  Lines           18787        18889        +102
  Branches         1269         1274          +5
====================================================
+ Hits            18787        18887        +100
- Misses              0            1          +1
- Partials            0            1          +1
```
@Zac-HD if you can show why the inner loop is necessary it'd be great, but I'm kinda suspecting it's a remnant of previous implementations or something - because I can't come up with anything that would hit that code path.
I think this'll help? But haven't run many tests yet...
```python
try:
    # Send the value to the channel
    await send_chan.send(value)
except trio.BrokenResourceError:
    # Closing the corresponding receive channel should cause
    # a clean shutdown of the generator.
    return
```
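To illustrate this shutdown path without depending on trio, here's a runnable stdlib sketch. `ToyChannel`, `producer`, and `numbers` are hypothetical stand-ins for the real memory channel and `_move_elems_to_channel`, and the local `BrokenResourceError` stands in for `trio.BrokenResourceError`:

```python
import asyncio


class BrokenResourceError(Exception):
    """Local stand-in for trio.BrokenResourceError."""


class ToyChannel:
    """Hypothetical stand-in for the send side of a memory channel."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    async def send(self, value):
        # A real channel would also block until a receiver is ready;
        # here we only model the "other side is gone" case.
        if self.closed:
            raise BrokenResourceError


async def producer(send_chan, agen):
    # Same pattern as the suggestion above: a broken channel means the
    # consumer went away, so we stop iterating instead of erroring out.
    async for value in agen:
        try:
            await send_chan.send(value)
        except BrokenResourceError:
            await agen.aclose()
            return


async def numbers():
    yield 1
    yield 2


chan = ToyChannel()
chan.close()  # simulate the receive side being closed
asyncio.run(producer(chan, numbers()))  # returns cleanly, no traceback
```

The point of the sketch is just that the producer treats a broken channel as a normal stop signal rather than an error.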
The loop in my prototype evolved from this one by @oremanj, via @belm0 here.
If we could trigger it, it'd have to be a non-`Cancelled`, non-`BrokenResourceError` exception which occurred while waiting in `await send_chan.send(value)`. I think we could in principle get a few things here (e.g. `KeyboardInterrupt`, GC-related errors, etc), but in each case calling `.aclose()` on the generator and raising from this function without ever throwing the error into the generator seems like a reasonable response to me.
So... I might be wrong, but this simpler code looks good to me.
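The "close it without throwing the error in" behaviour is easy to check with a plain stdlib async generator; a minimal sketch (no trio involved) showing that `aclose()` still runs `finally` cleanup:

```python
import asyncio

cleanup_ran = []


async def gen():
    try:
        yield 1
        yield 2
    finally:
        # aclose() resumes the generator with GeneratorExit at the
        # suspended yield, so this cleanup runs even though we never
        # threw the original error into the generator.
        cleanup_ran.append(True)


async def main():
    agen = gen()
    first = await agen.__anext__()
    assert first == 1
    # Pretend some unrelated error is propagating out here: rather than
    # agen.athrow(err), just close the generator and re-raise upstream.
    await agen.aclose()


asyncio.run(main())
assert cleanup_ran == [True]
```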
yeah, since your version had `except`s for `BrokenResourceError` and `Cancelled`, it's only niche stuff that could get sent. And I don't see any reason why the generator should generate another value after the `send` gets cancelled or broken, since it'll just fail again.

Given that `send` has KI protection I don't even think it can raise `KeyboardInterrupt` (unless that gets raised just as the decorator exits? idk details how that works)
src/trio/_channel.py
Outdated
```python
await nursery.start(_move_elems_to_channel, ait, send_chan)
# async with recv_chan could eat exceptions, so use sync cm
with recv_chan:
    yield recv_chan
```
Hmm, looking at this again - we want to be sure that we close the channels even if we get an error while starting the generator, which suggests pulling the send channel to the outside:
```diff
-await nursery.start(_move_elems_to_channel, ait, send_chan)
-# async with recv_chan could eat exceptions, so use sync cm
-with recv_chan:
-    yield recv_chan
+with send_chan, recv_chan:
+    nursery.start_soon(_move_elems_to_channel, ait, send_chan)
+    try:
+        yield recv_chan
+    except BaseException:
+        with trio.CancelScope(shield=True) as scope:
+            scope.cancel()
+            await ait.aclose()
+        raise
+    else:
+        await ait.aclose()
```
I also spent a while looking at the spec for `await agen.aclose()` too, but in my experiments I couldn't construct a version which misbehaved with our decorator but was OK without it. (although it's easy enough to get errors both ways!)

The tricky part is that I run into basically the same questions as #1559 (comment). I think here my solution gets around that: we `aclose_forcefully()` if there's any error propagating (and if there's an error during that, it should propagate as usual I think?), while if there's no exception yet we do a bare `await ait.aclose()` and allow whatever cleanup logic we might have to run.
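As a rough stdlib sketch of that error-path/clean-path split (hypothetical names throughout, and a plain `aclose()` standing in for `aclose_forcefully()`, which needs trio's cancellation machinery):

```python
import asyncio

events = []


async def agen():
    try:
        yield "item"
    finally:
        events.append("generator cleanup")


async def consume(gen, *, fail):
    # Sketch of the split discussed above: on error we still close the
    # generator (shielded/forceful in the real code) and then re-raise;
    # with no error propagating, we close it normally so its cleanup
    # logic can run unhurried.
    try:
        value = await gen.__anext__()
        events.append(value)
        if fail:
            raise RuntimeError("consumer blew up")
    except BaseException:
        await gen.aclose()  # the real code uses aclose_forcefully() here
        raise
    else:
        await gen.aclose()


asyncio.run(consume(agen(), fail=False))
try:
    asyncio.run(consume(agen(), fail=True))
except RuntimeError:
    events.append("error propagated")
```

In both runs the generator's cleanup fires exactly once, and in the failing run the consumer's exception still propagates after the close.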
I don't think there's any problem with not closing the receive channel when starting the generator fails - if the `nursery.start` call errors then we'll never yield the receive channel to the user, so there's nobody that can get stuck waiting on it
> Channel objects can be closed by calling `~trio.abc.AsyncResource.aclose` or using `async with`. They are not automatically closed when garbage collected. Closing memory channels isn't mandatory, but it is generally a good idea, because it helps avoid situations where tasks get stuck waiting on a channel when there's no-one on the other side. See :ref:`channel-shutdown` for details.
oh and also the send channel will be closed in the current implementation, so there's really no problem with leaving the receive channel unclosed.
add buffer_size note to docstring Co-authored-by: Zac Hatfield-Dodds <[email protected]>
for more information, see https://pre-commit.ci
…syncContextManager, I have no clue why
I have no clue why sphinx fails to link the […]
Mostly just shepherding @Zac-HD's implementation from #638 (comment)
I don't understand all the details of it, especially some of the code paths I'm completely failing to figure out how to cover, so I'm gonna need some help. And I'm getting an exception that sometimes disappears… which seems bad?
Feel free to add code suggestions and/or commit directly to the branch.
I don't know if this fully resolves #638, or if there's docs and/or other stuff that should be added.
I will make https://flake8-async.readthedocs.io/en/latest/rules.html#async900 suggest using this, and perhaps make it enabled by default, once released.