kafka: Failed to produce message to topic myRsp: write tcp [ip1]:[port1]->[ip2]:[port2]: write: broken pipe #2900
I'm not using Sarama much right now. Have you examined the logs from the broker side?
Thanks @k-wall for replying. Unfortunately, there are no broker logs from when the issue occurred. Generally, if I set the configuration like this:
the AsyncProducer can send messages successfully. However, if I configure them like this, the fault-detection time is extended, and the business unit cannot accept that delay.
@dnwe @k-wall, I reproduced the issue on 27 May. The connection between 192.168.2.148 (server) and 100.100.134.251 (client) was closed (FIN) by the Kafka server at 15:54:54.104013 (frame No. 16770976), and the client acknowledged it at 15:54:54.144786 (frame No. 16771072). Please see the following screenshot. From that moment the client should not use the closed connection anymore, yet it tried to send a message over the stale connection again at 16:01:20. Please see the following screenshot. To my surprise, the producer sometimes continues to use the old connection. Why? Which parameter determines this?
I don’t think there’s any way to know that the underlying socket has been closed by the remote side.
Thank you, @puellanivis, for your response. As you can see, frame No. 16771072 acknowledged the server's FIN. Alternatively, is there a way to quickly establish a new connection upon receiving a broken pipe, without having to set cfg.Producer.Retry.Max to 2 or a larger value? Setting cfg.Producer.Retry.Max to a higher value will result in business timeouts.
If the problem is that the client is closing down the connection, then yes, it should be silently handling this disconnect scenario. But if it’s a result of the client calling a function to explicitly shut down the connection to Kafka, then the correct action is not to attempt a reconnect, but rather to avoid sending on the same client object that you’ve told to close down. If it’s the server side closing the connection, there’s not really a way for the client to know that the connection has already been lost (I think). I’m also unsure whether a retry would attempt a reconnect before sending. 🤷♀️ I’m starting to reach outside my scope of knowledge here.
Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. |
Description
Hi there, I encountered an old issue similar to #1565 and #2004 when using the AsyncProducer.
Versions
Configuration
Logs
Additional Context
The issue occurred at 17:06:18.