As it stands, the Redis Streams PubSub implementation provides a `concurrency` control setting via component metadata.
This `concurrency` control is not scoped to each individual subscription.
For example, if you have a Redis PubSub component with 2 topics:

- Topic A
- Topic B

and an App which subscribes to both of those topics, the concurrency will be distributed across both subscriptions.
So if `concurrency` is set to 10, there are 10 workers assigned, and they share the load across both subscriptions. The split could be 5:5, 9:1, or 7:3, and it can change at random - which isn't helpful when trying to control the concurrency of each subscription.
I propose adding a new per-subscription concurrency control, such that each subscription gets exactly that many workers. This would give a consistent 10 workers for subscription A and a separate 10 workers for subscription B.
Ideally, the best solution here would be to specify a different level of concurrency per subscription, but that's possibly a step too far for now.
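To make the difference concrete, here is a minimal, illustrative Go sketch (not the actual Redis Streams component code) contrasting today's single shared worker pool with the proposed per-subscription pools; all names and channels are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// Today (simplified): a single pool of n workers drains messages from all
// subscriptions, so the split between topics is arbitrary and can shift.
func sharedPool(n int, merged <-chan string, handle func(string)) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for msg := range merged {
				handle(msg)
			}
		}()
	}
	wg.Wait()
}

// Proposed (simplified): each subscription gets its own pool of n workers,
// so Topic A and Topic B can each process up to n messages at once.
func perSubscriptionPools(n int, subs map[string]<-chan string, handle func(topic, msg string)) {
	var wg sync.WaitGroup
	for topic, ch := range subs {
		for i := 0; i < n; i++ {
			wg.Add(1)
			go func(topic string, ch <-chan string) {
				defer wg.Done()
				for msg := range ch {
					handle(topic, msg)
				}
			}(topic, ch)
		}
	}
	wg.Wait()
}

func main() {
	a := make(chan string, 2)
	b := make(chan string, 2)
	a <- "a-1"
	a <- "a-2"
	close(a)
	b <- "b-1"
	b <- "b-2"
	close(b)

	subs := map[string]<-chan string{"topic-a": a, "topic-b": b}
	perSubscriptionPools(2, subs, func(topic, msg string) { fmt.Println(topic, msg) })
}
```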
Release Note
RELEASE NOTE:
@olitomlinson I'd like to think about this from a user perspective before deciding whether this can be done (or should be done):
A per-subscription concurrency could be possible via subscription metadata (not component metadata), where either the declarative PubSub subscription or the dynamic programmatic subscription provides metadata indicating the desired concurrency (similar to the way a dead-letter queue is defined).
Perhaps the existing `concurrency` setting could be redefined to mean per-topic concurrency, and this would be the default used if you did not override the concurrency in the subscription metadata.
This would probably be my preference.
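As a rough sketch of what the programmatic (Go SDK) side of the subscription-metadata route could look like: the `concurrency` metadata key below is the proposed, not-yet-existing part, and the component, topic, and route names are made up.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/dapr/go-sdk/service/common"
	daprd "github.com/dapr/go-sdk/service/http"
)

func main() {
	s := daprd.NewService(":8080")

	// Hypothetical: the "concurrency" key is the proposed per-subscription
	// override; today no such subscription metadata is honored.
	sub := &common.Subscription{
		PubsubName: "redis-pubsub",
		Topic:      "topic-a",
		Route:      "/topic-a",
		Metadata:   map[string]string{"concurrency": "10"},
	}

	handler := func(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
		log.Printf("topic-a event: %s", e.ID)
		return false, nil
	}

	if err := s.AddTopicEventHandler(sub, handler); err != nil {
		log.Fatalf("failed to subscribe: %v", err)
	}
	if err := s.Start(); err != nil && err != http.ErrServerClosed {
		log.Fatalf("server error: %v", err)
	}
}
```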
If we retained the existing `concurrency` concept, we would otherwise have to enforce that the per-subscription concurrency values do not add up to more than the original `concurrency` value. This is not ideal and could be confusing.
OR maybe something like the following:
- `concurrency` (shared): unchanged behavior.
- `perTopicConcurrency`: new; cannot be used in parallel with `concurrency`. If this is set it will take precedence, and it will give every topic this many concurrent workers by default unless overridden below.
- subscription metadata `concurrency`: if this is set, the given topic subscription will use the specified concurrency. If it is not set, the subscription uses whatever is set in `perTopicConcurrency`. And if `perTopicConcurrency` is not set, then the existing shared global worker count is used.
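A rough sketch of how that precedence could resolve for a given topic subscription (illustrative only; these names are not an existing API):

```go
package pubsub

// resolveConcurrency is an illustrative sketch of the precedence described
// above: subscription metadata wins, then perTopicConcurrency, then the
// existing shared concurrency value.
func resolveConcurrency(shared int, perTopic, subscription *int) int {
	if subscription != nil {
		return *subscription // per-subscription override from subscription metadata
	}
	if perTopic != nil {
		return *perTopic // per-topic default from component metadata
	}
	return shared // fall back to today's shared worker count
}
```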
It might be overcomplicating things though... the implementation of this could be rather complex.
So I'd like us to figure out how to simplify this.