Produce failed: Local: Queue full #4772
Comments
It may be that in 40 ms you're producing more than 1 MB of data, try increasing
OK. Thank you. I will try this and will update.
If I calculate for 7000-byte messages, in 40 ms about 4.2 MB of data will be queued for publishing. I modified the conf as follows, but I still get the same error.
Any help regarding my query, @edenhill?
I am trying to receive data from a socket at 100 MBps, with each message nearly 7000-7500 bytes, and publish these messages to a Kafka topic on partition 0.
When I execute the program, after about 3 minutes I get the error "Failed to produce to topic: Local: Queue full".
How can I overcome this? I am giving below the settings which I wrote in conf. What other settings should I include?
I may be receiving data at an even higher rate. Does librdkafka support this?
Below are the conf settings I have made in librdkafka.
rd_kafka_conf_set(conf, "bootstrap.servers", KAFKA_BROKER, errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "queue.buffering.max.messages", "100000000", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.ms", "40", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.kbytes", "1000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "message.max.bytes", "100000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "max.request.size", "100000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "compression.codec", "snappy", errstr, sizeof(errstr));
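Independent of the queue limits above, the usual way to deal with `RD_KAFKA_RESP_ERR__QUEUE_FULL` is to treat it as backpressure: serve delivery reports with `rd_kafka_poll()` so the queue can drain, then retry the produce call. A minimal sketch, not a drop-in fix; it assumes `rk` (`rd_kafka_t *`) and `rkt` (`rd_kafka_topic_t *`) were created elsewhere, and `produce_with_backpressure` is a hypothetical helper name:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: retry on queue-full by serving delivery reports (backpressure). */
void produce_with_backpressure(rd_kafka_t *rk, rd_kafka_topic_t *rkt,
                               void *payload, size_t len) {
    while (rd_kafka_produce(rkt, 0 /* partition */, RD_KAFKA_MSG_F_COPY,
                            payload, len, NULL, 0, NULL) == -1) {
        if (rd_kafka_last_error() == RD_KAFKA_RESP_ERR__QUEUE_FULL) {
            /* Queue is full: block briefly so delivery reports can
             * drain the internal queue, then retry. */
            rd_kafka_poll(rk, 100);
        } else {
            fprintf(stderr, "produce failed: %s\n",
                    rd_kafka_err2str(rd_kafka_last_error()));
            break;
        }
    }
    /* Serve delivery callbacks on the fast path too, without blocking. */
    rd_kafka_poll(rk, 0);
}
```

If the incoming socket rate is sustained above what the broker link can absorb, this loop will throttle the reader rather than grow the queue without bound, which is usually the intended behavior.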
I am using librdkafka 1.9.0 for the producer.
Apache Kafka version:
Operating system: RHEL 7.9
server.properties is as below:
broker.id=0
message.max.bytes=41943552
port=9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
offsets.retention.minutes=360
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.minutes=3
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
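One mismatch worth noting between the two configs in this issue: the producer sets `message.max.bytes=100000000`, while the broker's `message.max.bytes` is 41943552, so the broker will reject any request larger than its own limit. Whether or not that contributes to the queue build-up here, aligning the two removes one variable. An illustrative adjustment (not a recommendation; the other direction, lowering the client's `message.max.bytes` to 41943552, works too):

```properties
# broker side (server.properties): raise to match the producer's limit
message.max.bytes=100000000
```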
I have seen many posts on similar subjects and tried whatever I could, but I still get this error.