
Produce failed: Local: Queue full #4772

Open
indsak opened this issue Jun 28, 2024 · 4 comments
indsak commented Jun 28, 2024

I am trying to receive data from a socket at 100 MB/s, with each message around 7000-7500 bytes, and publish these messages to a Kafka topic on partition 0.

When I run the program, after about 3 minutes I get the error "Failed to produce to topic: Local: Queue full".
How can I overcome this? Below are the settings I have set in the conf. What other settings should I include?

I may be receiving data at an even higher rate. Does librdkafka support this?

Below are the conf settings I have applied in librdkafka:
rd_kafka_conf_set(conf, "bootstrap.servers", KAFKA_BROKER, errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "queue.buffering.max.messages", "100000000", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.ms", "40", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.kbytes", "1000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "message.max.bytes", "100000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "max.request.size", "100000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "compression.codec", "snappy", errstr, sizeof(errstr));

I am using librdkafka 1.9.0 for the producer.
Apache Kafka version:
Operating system: RHEL 7.9

The broker's server.properties is as below:

broker.id=0
message.max.bytes=41943552
port:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
offsets.retention.minutes=360
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.minutes=3
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

I have seen many posts on similar subjects and tried whatever I could, but I still get this error.
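
(Editor's note: the rd_kafka_conf_set() calls quoted above discard the return value and errstr, so a rejected value or a property name this librdkafka version does not recognize would go unnoticed. A minimal sketch of checking each call, using a hypothetical set_or_fail() helper; this is not part of the original report.)

#include <stdio.h>
#include <stdlib.h>
#include <librdkafka/rdkafka.h>

/* Hypothetical helper: abort with a message if a property value is rejected
 * or the property name is not recognized by this librdkafka version. */
static void set_or_fail(rd_kafka_conf_t *conf, const char *name, const char *value)
{
        char errstr[512];

        if (rd_kafka_conf_set(conf, name, value, errstr, sizeof(errstr)) !=
            RD_KAFKA_CONF_OK) {
                fprintf(stderr, "conf_set(%s=%s) failed: %s\n", name, value, errstr);
                exit(1);
        }
}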

emasab (Collaborator) commented Jun 28, 2024

It may be that in 40 ms you're producing more than 1 MB of data; try increasing queue.buffering.max.kbytes to double the size of the messages produced in 40 ms.
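
(Editor's note, based on the librdkafka configuration documentation rather than this thread: the shared producer queue is bounded by both queue.buffering.max.messages and queue.buffering.max.kbytes, so it is worth raising and checking both limits rather than only one. A sketch with illustrative values, not a recommendation for this specific workload:)

rd_kafka_conf_set(conf, "queue.buffering.max.messages", "1000000", errstr, sizeof(errstr)); /* message-count limit */
rd_kafka_conf_set(conf, "queue.buffering.max.kbytes", "4194304", errstr, sizeof(errstr));   /* size limit, ~4 GiB */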

indsak commented Jun 28, 2024

OK, thank you. I will try this and update.

indsak commented Jun 28, 2024

If I calculate with 7000 bytes per message, about 4.2 MB of data will be queued for publishing in 40 ms.

I modified the conf as follows
rd_kafka_conf_set(conf, "queue.buffering.max.messages", "100000", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.ms", "5", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.kbytes", "2147483647", errstr, sizeof(errstr)); //maximum value

But I still get the same error.
Where am I going wrong?
Any help?
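
(Editor's note: independent of the queue sizing, the pattern used in the librdkafka examples is to treat RD_KAFKA_RESP_ERR__QUEUE_FULL as back-pressure: serve delivery reports with rd_kafka_poll() and retry the produce call instead of failing. A minimal sketch is below; rk, topic and the payload arguments are placeholders, and this is not a confirmed fix for this particular setup.)

#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: produce one message, retrying while the local queue is full. */
static void produce_with_backpressure(rd_kafka_t *rk, const char *topic,
                                      const void *payload, size_t len)
{
        rd_kafka_resp_err_t err;

retry:
        err = rd_kafka_producev(rk,
                                RD_KAFKA_V_TOPIC(topic),
                                RD_KAFKA_V_PARTITION(0),
                                RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                                RD_KAFKA_V_VALUE((void *)payload, len),
                                RD_KAFKA_V_END);

        if (err == RD_KAFKA_RESP_ERR__QUEUE_FULL) {
                /* Queue is full: block briefly to serve delivery reports,
                 * which frees queue space, then retry. */
                rd_kafka_poll(rk, 100 /* ms */);
                goto retry;
        } else if (err) {
                fprintf(stderr, "Produce failed: %s\n", rd_kafka_err2str(err));
        }

        /* Serve delivery report callbacks on the normal path as well. */
        rd_kafka_poll(rk, 0);
}

(With RD_KAFKA_MSG_F_COPY the payload is copied into the queue, so the caller's buffer can be reused immediately; at 7000-7500 bytes per message that copy cost is usually acceptable.)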

indsak commented Jul 1, 2024

Any help regarding my query, @edenhill?
