Achieving "Fair" Event Consumption #4461
Unanswered
gremerritt asked this question in Q&A
I'm attempting to achieve "fair" event consumption across all partitions for a consumer. Let's say we have the following:
- Two topics, `test-topic-0` and `test-topic-1`, each with two partitions
- A single consumer subscribed to both (via the regex `^test-topic-*`)

We'll say we've already published 2 events to each partition. We don't want the consumer to process more than 1 event from a partition before moving on to the others. So we'd want something like:
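(Illustrative ordering; the exact interleaving across partitions doesn't matter, only that no partition gets a second event while others are still waiting on their first:)

```
test-topic-0 [0] event 1
test-topic-0 [1] event 1
test-topic-1 [0] event 1
test-topic-1 [1] event 1
test-topic-0 [0] event 2
test-topic-0 [1] event 2
test-topic-1 [0] event 2
test-topic-1 [1] event 2
```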
Based on the config docs it seemed like I'd be able to use `queued.min.messages` (in this case `= 1`) to get this behavior. However, I still see the consumer process all messages for a partition before moving to the next. E.g. something like:
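(Again illustrative of the shape of the problem:)

```
test-topic-0 [0] event 1
test-topic-0 [0] event 2
test-topic-0 [1] event 1
test-topic-0 [1] event 2
test-topic-1 [0] event 1
test-topic-1 [0] event 2
test-topic-1 [1] event 1
test-topic-1 [1] event 2
```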
I've created a simple driver for this over at https://github.com/gremerritt/kafka_fairness_test (using the Ruby wrapper over `librdkafka`).

Also just noting I've had some success using `fetch.max.bytes`. Our issue is that for our use case message sizes can vary pretty dramatically, but they don't correlate well with how long processing takes, which is much more strongly correlated with the number of messages processed. Any suggestions?
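For concreteness, the test loop boils down to something like this (a minimal sketch assuming the rdkafka-ruby gem, not the exact driver code; the broker address, group id, and topic names are placeholders):

```ruby
require "rdkafka"

# Consumer configured with queued.min.messages = 1 (placeholder broker/group).
config = Rdkafka::Config.new(
  "bootstrap.servers"   => "localhost:9092",
  "group.id"            => "fairness-test",
  "auto.offset.reset"   => "earliest",
  "queued.min.messages" => "1"
)

consumer = config.consumer
consumer.subscribe("test-topic-0", "test-topic-1")

# Print topic/partition/offset for each message to observe the delivery order.
consumer.each do |message|
  puts "#{message.topic} [#{message.partition}] offset #{message.offset}"
end
```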
Edit
To put this another way: why doesn't this library implement `max.poll.records` as described in KIP-41? The answer I've found so far is that, since this library returns a single record per poll, it's not needed. But since that setting also affects the consumer's distribution of events across partitions, wouldn't it still be useful?
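For illustration, here's roughly what approximating that fairness in application code could look like: pause a partition after taking one message from it, and resume everything once each assigned partition has had a turn. A rough sketch (assuming rdkafka-ruby's `pause`/`resume` and `assignment` APIs; `handle` is a placeholder for the real processing):

```ruby
require "rdkafka"

consumer = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",  # placeholder
  "group.id"          => "fairness-test"    # placeholder
).consumer
consumer.subscribe("test-topic-0", "test-topic-1")

paused = Hash.new { |h, k| h[k] = [] }  # topic => partition ids paused this round

consumer.each do |message|
  handle(message)  # placeholder for application-specific processing

  # Pause the partition we just consumed from so the next poll favors others.
  one = Rdkafka::Consumer::TopicPartitionList.new
  one.add_topic(message.topic, [message.partition])
  consumer.pause(one)
  paused[message.topic] << message.partition

  # Once every assigned partition is paused, resume them all for a new round.
  assigned = consumer.assignment.to_h  # { topic => [Partition, ...] }
  if paused.values.sum(&:size) >= assigned.values.sum(&:size)
    all = Rdkafka::Consumer::TopicPartitionList.new
    assigned.each { |topic, parts| all.add_topic(topic, parts.map(&:partition)) }
    consumer.resume(all)
    paused.clear
  end
end
```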
1 comment · 2 replies

Hey @gremerritt, did you find any solution on how to achieve partition consumption fairness with this Kafka client?