
Graceful shutdown of a stream for a single subscription #1201

Open · wants to merge 109 commits into master
Conversation

@svroonland svroonland (Collaborator) commented Mar 24, 2024

Implements functionality for gracefully stopping a stream for a single subscription: stop fetching records for the assigned topic-partitions while staying subscribed, so that offsets can still be committed. Intended to replace stopConsumption, which does not support multiple-subscription use cases.

A new command, EndStreamsBySubscription, is introduced; it calls the end method on the PartitionStreamControl of streams matching a subscription. In the method Consumer#runWithGracefulShutdown we then wait for the user's stream to complete before removing the subscription.

This is experimental functionality, intended to replace stopConsumption at some point. Methods with this new functionality are offered alongside existing methods to maintain compatibility.

All the fiber and scope trickery proved very hard to get right (the lifetime of this PR is a testament to that), and there may still be subtle issues here. The root cause has now been traced back to zio/zio#9288.

Implements some of #941.
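
To make the intended usage concrete, here is a hedged sketch based on the API shape discussed later in this thread; the method names (`runWithGracefulShutdown`, `partitionedStreamWithControl`) come from earlier iterations of this PR and may not match the final API, and the commit pattern is the usual zio-kafka one:

```scala
import zio._
import zio.kafka.consumer.{ Consumer, Subscription }
import zio.kafka.serde.Serde

object GracefulConsumeSketch {
  // Hedged sketch: run a partitioned stream under graceful shutdown.
  // On interruption, record fetching stops and the partition streams end,
  // but the consumer stays subscribed so that offsets of in-flight records
  // can still be committed before the subscription is removed.
  val run: ZIO[Consumer, Throwable, Unit] =
    Consumer.runWithGracefulShutdown(
      Consumer.partitionedStreamWithControl(Subscription.topics("topic150"), Serde.string, Serde.string)
    ) { stream =>
      stream
        .flatMapPar(Int.MaxValue) { case (_, partitionStream) =>
          partitionStream.map(_.offset)
        }
        .aggregateAsync(Consumer.offsetBatches)
        .mapZIO(_.commit)
        .runDrain
    }
}
```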

@svroonland svroonland changed the title from "Subscription stream control" to "Graceful shutdown of a single subscription" Mar 30, 2024
@svroonland svroonland marked this pull request as ready for review March 30, 2024 11:07
@erikvanoosten erikvanoosten (Collaborator) left a comment:

I didn't look at the implementation yet, only docs and tests.

@erikvanoosten erikvanoosten (Collaborator) left a comment:

Still need more time to digest this.

@svroonland svroonland (Collaborator, Author) commented Apr 3, 2024

Hmm, instead of this:

```scala
Consumer.runWithGracefulShutdown(
  Consumer.partitionedStreamWithControl(Subscription.topics("topic150"), Serde.string, Serde.string)
) { stream => ... }
```

should we offer this:

```scala
Consumer.partitionedStreamWithGracefulShutdown(Subscription.topics("topic150"), Serde.string, Serde.string) {
  (stream, _) => stream.flatMapPar(...)
}
```

The second parameter would be the SubscriptionStreamControl, on which you could always manually call stop. Or would that prevent certain use cases? 🤔

@erikvanoosten (Collaborator)

> Hmm, instead of this:

If I understand it correctly, the proposal allows for more use cases; with it you can also call stop for any condition you want. Is it true that after stopping, you can start consuming again?

@svroonland (Collaborator, Author)

Well, I mean compared to just the partitionedStreamWithControl method. In both cases you would need to do something with the stream that ultimately reduces to a ZIO of Any, so I don't think the partitionedStreamWithGracefulShutdown is limiting in that regard.

stop currently doesn't support that, since the stream would then be finished. We could probably build pause and resume like in #941.

@erikvanoosten (Collaborator)

If resume after stop is not supported (and never will be), then I like the first proposal better where you don't need to call stop. What would you do after calling stop?

@svroonland (Collaborator, Author)

Well, in both proposals you can call stop.

I don't think you want to do anything after stop, but it would give you more explicit control over when to stop, instead of stopping when the scope ends.

We probably need to decide if we want to add pause/resume in the future. If we do, we should add the control parameter like in the partitionedStreamWithGracefulShutdown example for future compatibility. If we don't, we can drop it altogether and make SubscriptionStreamControl a purely internal concept (if we keep it at all).

@guizmaii (Member) commented Apr 5, 2024

Hey :)

Thanks for the great work!

Here's some initial feedback:

I'm not a big fan of the SubscriptionStreamControl implementation.

To me, functions/methods returning it should return a tuple `(stream, control)`:

- It avoids adding one more concept for our users to understand and learn (Kafka already has a lot of concepts).
- It simplifies the interface of the control type; the current one, with its `[S <: ZStream[_, _, _]]` type parameter, is complex.
- It simplifies the return type of our functions/methods, avoiding this kind of type:

```scala
SubscriptionStreamControl[Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]]]
```

in favor of:

```scala
(Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]], SubscriptionStreamControl)
```

Made the change in a PR to show/study how, to me, it simplifies things: https://github.com/zio/zio-kafka/pull/1207/files
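
For illustration, a hedged sketch of the tuple-returning shape proposed above; the method name, effect type, and the simplified control trait are assumptions, not the PR's actual code:

```scala
import org.apache.kafka.common.TopicPartition
import zio._
import zio.stream.{ Stream, ZStream }
import zio.kafka.consumer.{ CommittableRecord, Subscription }
import zio.kafka.serde.Deserializer

// Simplified, unparameterized control handle, as proposed above
// (the PR's actual type is a case class with a type parameter).
trait SubscriptionStreamControl {
  def stop: UIO[Unit]
}

// Illustration of the proposal: return the stream and its control handle
// as a tuple instead of wrapping the stream inside the control type.
trait TupleShapeSketch {
  def partitionedStreamWithControl[R, K, V](
    subscription: Subscription,
    keyDeserializer: Deserializer[R, K],
    valueDeserializer: Deserializer[R, V]
  ): ZIO[
    Scope,
    Throwable,
    (Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]], SubscriptionStreamControl)
  ]
}
```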

@guizmaii (Member) commented Apr 5, 2024

Didn't finish my review yet. I still have some parts of the code to explore/understand, but I have to go. I'll finish it later 🙂

@svroonland (Collaborator, Author)

Thanks for the feedback, Jules. Agreed that the extra concept would be unwanted. Check out my latest interface proposal, where there is only a plainStreamWithGracefulShutdown method and SubscriptionStreamControl remains hidden.

@erikvanoosten erikvanoosten (Collaborator) left a comment:

Still reading the code...

@erikvanoosten (Collaborator) commented Apr 7, 2024

I understand now that when graceful shutdown starts, we end the subscribed streams. That should work nicely. Let's work out what happens next in the runloop. The runloop would still be happily fetching records for that stream. When those are offered to the stream, PartitionStreamControl.offerRecords will probably append them to the queue (even though it now also contains an 'end' token). Because of the 'end' token that is already in that queue, these new records will never be taken out. Back pressure will kick in (depending on the fetch strategy) and the partitions will be paused. Once we're unsubscribed, 15 seconds later, the queue will be garbage collected. So far so good.

We can do slightly better though. We're fetching and storing all these records in the queue for nothing, potentially even causing an OOM on systems that are tuned for the case where processing happens almost immediately.

My proposal is to:

  1. stop accepting more records in PartitionStreamControl.offerRecords when the queue was ended
  2. in Runloop.handlePoll only pass running streams to fetchStrategy.selectPartitionsToFetch so that partitions for ended streams are immediately paused

If you want, I can extend this PR with that proposal (or create a separate PR).
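
A minimal sketch of step 1, assuming a queue-plus-flag structure; the field names and types here are illustrative assumptions, not PartitionStreamControl's actual internals:

```scala
import zio._
import zio.stream.Take

// Hedged sketch of proposal step 1: once `end` has been called, drop newly
// fetched records instead of queueing them behind the Take.end token.
final class PartitionQueueSketch[A](
  queue: Queue[Take[Nothing, A]],
  endedRef: Ref[Boolean]
) {
  // Signal graceful end: records offered after this are never consumed,
  // so there is no point in buffering them.
  val end: UIO[Unit] =
    endedRef.set(true) *> queue.offer(Take.end).unit

  // Refuse records once ended, so they are not fetched and stored for nothing.
  def offerRecords(records: Chunk[A]): UIO[Unit] =
    ZIO.unlessZIO(endedRef.get)(queue.offer(Take.chunk(records))).unit
}
```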

@svroonland (Collaborator, Author)

@erikvanoosten If you have some time to implement those two things, by all means.

@erikvanoosten (Collaborator) commented Apr 13, 2024

> @erikvanoosten If you have some time to implement those two things, by all means.

@svroonland Done in commit 1218204.

Now I am wondering, how can we test this?

@svroonland (Collaborator, Author) commented Apr 14, 2024

Change looks good. Totally forgot to implement this part.

@svroonland (Collaborator, Author)

Was able to create a minimized reproducer of the issue: zio/zio#9288

@svroonland (Collaborator, Author)

The above-mentioned issue has been fixed and will probably be in the next ZIO release. For compatibility with older versions, let's keep the `.fork.flatMap(_.join)` workaround.

@svroonland svroonland added this to the 3.0.0 milestone Nov 16, 2024
@erikvanoosten (Collaborator)

> The above-mentioned issue has been fixed and will probably be in the next ZIO release. For compatibility with older versions, let's keep the `.fork.flatMap(_.join)` workaround.

Great work!

Shall we slap a comment on it? E.g. something like:

```scala
// Workaround for a bug in ZIO up to v2.1.12. See https://github.com/zio/zio/issues/9288
```

@erikvanoosten erikvanoosten (Collaborator) left a comment:

So many little comments. I'll push this now so you have something to work with. Meanwhile I'll continue reviewing.


zio-kafka also supports a _graceful shutdown_, in which the fetching of records for the subscribed topics/partitions is stopped, the streams are ended, and all downstream stages are completed, allowing in-flight records to be fully processed.

Use the `with*Stream` variants of `plainStream`, `partitionedStream` and `partitionedAssignmentStream` for this purpose. These methods accept a function that describes the processing of the stream; the stream is gracefully ended when the method is interrupted.
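
For example, a hedged usage sketch of such a variant; the method name `withPlainStream` is assumed from the `with*Stream` naming scheme above, and `processRecord` is a hypothetical handler:

```scala
import zio._
import zio.kafka.consumer.{ CommittableRecord, Consumer, Subscription }
import zio.kafka.serde.Serde

object WithStreamSketch {
  // Hypothetical record handler, for illustration only.
  def processRecord(record: CommittableRecord[String, String]): Task[Unit] =
    ZIO.logInfo(s"Processing ${record.key}")

  // When the surrounding effect is interrupted, the stream ends gracefully:
  // in-flight records are still processed and their offsets committed.
  val consume: ZIO[Consumer, Throwable, Unit] =
    Consumer.withPlainStream(Subscription.topics("topic150"), Serde.string, Serde.string) { stream =>
      stream
        .mapZIO(record => processRecord(record).as(record.offset))
        .aggregateAsync(Consumer.offsetBatches)
        .mapZIO(_.commit)
        .runDrain
    }
}
```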
Collaborator:

We can extend this a bit more before we merge this PR.

```scala
case RunloopCommand.EndStreamsBySubscription(subscription, cont) =>
  ZIO.foreachDiscard(
    state.assignedStreams.filter(stream => Subscription.subscriptionMatches(subscription, stream.tp))
  )(_.end) *> cont
```
Collaborator:

The formatter makes some weird choices here. Would this case be more readable when written as a for comprehension?
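
For comparison, a hedged sketch of the same case written as a for comprehension (a fragment matching the excerpt above, untested, just to judge readability):

```scala
// Fragment, as in the excerpt above: `state`, `cont` and RunloopCommand
// come from the surrounding Runloop code.
case RunloopCommand.EndStreamsBySubscription(subscription, cont) =>
  for {
    matching <- ZIO.succeed(
                  state.assignedStreams.filter(s => Subscription.subscriptionMatches(subscription, s.tp))
                )
    _        <- ZIO.foreachDiscard(matching)(_.end)
    _        <- cont
  } yield ()
```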

```diff
@@ -203,17 +213,20 @@ private[consumer] final class Runloop private (
 private def handlePoll(state: State): Task[State] = {
   for {
     partitionsToFetch <- settings.fetchStrategy.selectPartitionsToFetch(state.assignedStreams)
     pendingCommitCount <- committer.pendingCommitCount
     runningStreamsBeforePoll <- ZIO.filter(state.assignedStreams)(_.isRunning)
```
Collaborator:

The `isRunning` check is based on whether the stream control has ended or not. However, we could do better by not providing data for ended subscriptions at all. WDYT?

Comment on lines 5 to 15
```scala
/**
 * Allows graceful shutdown of a stream, where no more records are being fetched but the in-flight records can continue
 * to be processed and their offsets committed.
 *
 * @param stream
 *   The stream of partitions / records for this subscription
 * @param stop
 *   Stop fetching records for the subscribed topic-partitions and end the associated streams, while allowing commits
 *   to proceed (consumer remains subscribed)
 */
final private[consumer] case class SubscriptionStreamControl[S <: ZStream[_, _, _]](stream: S, stop: UIO[Unit])
```
Collaborator:

This case class models a stream that can be stopped from outside (with a stop method). It is not tied to subscriptions or anything else from zio-kafka. Therefore, I propose we rename this to something like StoppableStream, or even simpler: StreamControl.

Suggested change:

```scala
/**
 * Models a stream with a graceful shutdown.
 *
 * In a graceful shutdown, the stream stops pulling elements from its source, but completes processing of already
 * pulled stream elements.
 *
 * @param stream
 *   A stream that supports graceful shutdown
 * @param stop
 *   Initiate a graceful shutdown of the stream
 */
final private[consumer] case class StreamControl[S <: ZStream[_, _, _]](stream: S, stop: UIO[Unit])
```

Collaborator:

I am wondering why it is not like this:

```scala
case class StreamControl[R, E, A](stream: ZStream[R, E, A], stop: UIO[Unit])
```

Collaborator:

There is an opportunity to clean up the code that is using this with methods like `map`:

```scala
case class StreamControl[R, E, A](stream: ZStream[R, E, A], stop: UIO[Unit]) {
  def map[R1 <: R, E1 >: E, B](f: ZStream[R, E, A] => ZStream[R1, E1, B]): StreamControl[R1, E1, B] =
    StreamControl(f(stream), stop)
}
```
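
A quick usage example of that `map`, hypothetical and assuming the `StreamControl` shape sketched above:

```scala
import zio._
import zio.stream.ZStream

// Assumes the StreamControl sketched in the suggestion above.
case class StreamControl[R, E, A](stream: ZStream[R, E, A], stop: UIO[Unit]) {
  def map[R1 <: R, E1 >: E, B](f: ZStream[R, E, A] => ZStream[R1, E1, B]): StreamControl[R1, E1, B] =
    StreamControl(f(stream), stop)
}

object StreamControlMapExample {
  // Transform the wrapped stream while carrying the same `stop` handle along.
  val control: StreamControl[Any, Throwable, Int] =
    StreamControl(ZStream.fromIterable(1 to 10), ZIO.unit)

  val doubled: StreamControl[Any, Throwable, Int] =
    control.map(_.map(_ * 2))
}
```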

Collaborator (Author):

Nice, I like it

Comment on lines +73 to +79
```scala
 * Like [[partitionedAssignmentStream]] but wraps the stream in a construct that ensures graceful shutdown.
 *
 * When this effect is interrupted, all partition streams are closed upstream, allowing the stream created by
 * `withStream` to complete gracefully all stream stages, thereby fully processing all buffered and/or in-flight
 * messages.
 *
 * EXPERIMENTAL API
```
Collaborator:

Suggested change:

```scala
 * Like [[partitionedAssignmentStream]] but wraps the stream in an effect that allows graceful shutdown.
 *
 * When this effect is interrupted, the stream of assigned partitions ends, allowing the streams created by
 * `withStream` to complete gracefully, thereby fully processing all buffered and/or in-flight stream elements.
 *
 * WARNING: this is an EXPERIMENTAL API and may disappear or change in an incompatible way without notice in any
 * zio-kafka version.
```

```diff
 override def plainStream[R, K, V](
   subscription: Subscription,
   keyDeserializer: Deserializer[R, K],
   valueDeserializer: Deserializer[R, V],
-  bufferSize: Int
+  bufferSize: Int = 4
```
Collaborator:

Shall we extract this change to its own PR? This looks like something that is useful on its own.

Note to self: it may also affect documentation.

```scala
.timeout(shutdownTimeout)
.someOrElseZIO(
  ZIO.logError(
    "Timeout joining withStream fiber in runWithGracefulShutdown. Not all pending commits may have been processed."
```
Collaborator:

Here is my attempt at rewriting the message more towards the user's view. Not sure if this is entirely correct though.

Suggested change:

```scala
"Timeout waiting for `withStream` to gracefully shut down. Not all in-flight records may have been processed."
```

```scala
  )
)
.tapErrorCause(cause =>
  ZIO.logErrorCause("Error joining withStream fiber in runWithGracefulShutdown", cause)
```
Collaborator:

If I understand this correctly, the problem is not that the join fails, but more likely that the stream itself failed.

Suggested change:

```scala
ZIO.logErrorCause("Stream failed while awaiting its graceful shutdown", cause)
```

```scala
// The fork and join is a workaround for https://github.com/zio/zio/issues/9288 for ZIO <= 2.1.12
.forkDaemon
.flatMap(_.join)
.tapErrorCause(cause =>
```
Collaborator:

Question: when we remove this workaround (I am actually in favor of that; we can document that we require ZIO 2.1.13+), do we still need this second tapErrorCause?

```scala
diagnostics.emit(Finalization.SubscriptionFinalized)
}
} yield stream
} yield SubscriptionStreamControl(
  stream = stream.merge(ZStream.fromZIO(end.await).as(Take.end)),
```
Collaborator:

When I read this (line 72), some alarm bells went off in my head.

PartitionStreamControl registers which records (offsets) were pulled. This information is later used by the rebalance listener when rebalanceSafeCommits is enabled.

When the 'end' gets merged into the stream provided by PartitionStreamControl, it could be that the latter has just pulled a chunk of records, but before the chunk is passed downstream, the end is merged in. Downstream then never sees those records, even though PartitionStreamControl thinks they have been pulled. When this happens, the ending rebalance will wait for maxRebalanceDuration.

Collaborator:

A solution could be to stop offering data from the runloop (see also https://github.com/zio/zio-kafka/pull/1201/files#r1986290475).
