Error: 4 DEADLINE_EXCEEDED: Deadline exceeded #1515
Getting the same error today.
Not 100% sure if it is related, but at least the error code matches, and the error message seems similar to a deadline exceeded error 🤔 The error started happening recently for the first time and is now triggered here and there.
Environment details:
Might be related to #1442 though 🤔
I got this error when I started to use Node 16. grpc-js recommends Node.js 12. :/
Also seeing this error and using node. In summary, this has been occurring intermittently for us with:
Anyone know of a work-around?
@mr-pascal We are getting the same error too with nearly the same environment. Did you find a solution? Environment details:
@gautier-gdx
This is absolutely fine for us since it's actually the preferred setup due to its very high traffic. At least I haven't experienced any issues anymore since then 🤔 Might be a coincidence or due to this change.
@mr-pascal Thanks a lot!
I'm using a pretty old version of node and google-cloud/pubsub and encountered the same thing.
I am using Node.js v18, deployed in a Cloud Run service, and almost every time the instance is shutting down I receive this error. I can't set the CPU allocation to always allocated because it generates additional and unnecessary costs. It would be nice for this to be fixed, or for the error to be handled in a different way.
@oscarojeda Also, have you tried awaiting the Promise returned by the publish method? Mind sharing a code snippet of what exactly you're doing? And do you listen to the SIGTERM signal triggered by Cloud Run to do some logic in there?
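To make the two suggestions above concrete, here is a minimal sketch of awaiting publishes and flushing them on SIGTERM. The topic name and message contents are placeholders, and publishMessage assumes a reasonably recent @google-cloud/pubsub; this is not the thread author's exact code.

```javascript
const {PubSub} = require('@google-cloud/pubsub');

const pubsub = new PubSub();
const topic = pubsub.topic('my-topic'); // placeholder topic name

// Track in-flight publishes so they can be flushed on shutdown.
const pending = new Set();

function publishTracked(data) {
  const p = topic.publishMessage({data: Buffer.from(data)});
  pending.add(p);
  p.finally(() => pending.delete(p));
  return p;
}

// Cloud Run sends SIGTERM before stopping an instance; wait for in-flight
// publishes so they are not cut off mid-flight when the CPU is throttled.
process.on('SIGTERM', async () => {
  await Promise.allSettled([...pending]);
  process.exit(0);
});
```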
Hi @mr-pascal, I have 3.1.0. Did this issue get resolved in the latest version?
Do you have any idea?
I keep having this issue, despite using HTTP requests rather than pub/sub. You may want to check this if you use Tasks too: stackoverflow issue
Any update on this?
We are also seeing this error on a NestJS project with
@anthony-langford what's the ack deadline time for your subscriptions?
1 minute on one subscription and 5 minutes on the other.
Same here:
Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
at Object.callErrorFromStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/client.js:409:49)
at Object.onReceiveStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
at /usr/src/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
at processTicksAndRejections (internal/process/task_queues.js:77:11)
Docker: ubuntu20.04
Anyone found a solution for this issue?
Started getting the same error trying to assert 20+ topics and subscriptions from code. I'm checking topic/subscription existence with
Hi, I'm also getting this error. Environment details:
This happens intermittently when making unary pulls from a subscription.
Hi, I'm also getting this error. Environment details: Cloud Run. Please let us know if there is any update on this.
Hi @clipboardbolaji, are you still facing this issue? If so, could you provide steps to reproduce it?
Same problem here. const [subExists] = await subscription.exists() fails with: Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
At least regarding unary pull, deadline exceeded errors are going to happen from time to time, and are probably even more likely when there are no messages to pull. If there are no messages, it means the request is going to wait the longest to get back a response. Could you try making
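As a sketch of what handling that expected DEADLINE_EXCEEDED looks like in a unary pull, assuming the v1 SubscriberClient (project and subscription names below are placeholders, and gRPC status code 4 is DEADLINE_EXCEEDED):

```javascript
const {v1} = require('@google-cloud/pubsub');

const client = new v1.SubscriberClient();

async function pullOnce() {
  // Placeholder project and subscription names.
  const subscription = client.subscriptionPath('my-project', 'my-sub');
  try {
    const [response] = await client.pull({subscription, maxMessages: 10});
    return response.receivedMessages || [];
  } catch (err) {
    // gRPC status 4 = DEADLINE_EXCEEDED. With no messages waiting, the
    // request can run out the full deadline, so treat it as an empty pull.
    if (err.code === 4) return [];
    throw err;
  }
}
```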
There are, actually. The admin plane calls are throttled, though I don't remember the exact limit. The way we've been recommending to deal with this, if you need to do it regularly, is to try to just open the topic or subscription for use and see if it gives you an error. Which is honestly sort of clunky and can result in lost messages if you were queueing them up on a subscription. So I might bring this up again Thursday.
Separately: this error is really too generic to be able to do much debugging on it. The best way forward, if anyone is still having it regularly and has a support contract, is to put in a support case so we can ask the service engineers to look deeper for what's going on in your specific case.
Okay, the answer in regards to checking if a subscription exists is that you should try to open a subscriber first (e.g.
There is a quota for admin operations; the "read" operations like
This should get less clunky in v2, because we're planning to separate the admin and data planes. So create will be a separate operation from opening a subscriber.
I'm going to close this, but a reminder to anyone else: if you are having the deadline exceeded errors and have a support contract, please file a support case so we can look at server logs. Thanks!
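As a rough sketch of that "open it and see" approach, rather than calling exists() against the throttled admin plane (all names below are placeholders; gRPC status 5 is NOT_FOUND; this is an illustration, not the maintainer's exact code):

```javascript
const {PubSub} = require('@google-cloud/pubsub');

const pubsub = new PubSub();

function listen(topicName, subName) {
  const subscription = pubsub.subscription(subName);

  subscription.on('message', message => {
    // ... handle message.data ...
    message.ack();
  });

  subscription.on('error', async err => {
    if (err.code === 5) {
      // NOT_FOUND: create the subscription, then start listening again.
      await pubsub.createSubscription(topicName, subName);
      listen(topicName, subName);
    } else {
      console.error('Subscriber error:', err);
    }
  });
}

listen('my-topic', 'my-sub'); // placeholder names
```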
Thanks for the discussion above! @feywind we have a support contract and have opened a ticket for further investigation, it's #45302224 if you want to follow along. The commonality we see with others on this thread is that we create a client for each subscription and listen to about a dozen subscriptions per Cloud Run instance. Across our 20 instances, we get about 30 errors that repeat on a regular pattern every 15 minutes; these are captured via datadog-agent. So it's possible this is intentional error throwing and handling based on the message-stream.js code.
Hi, we're still experiencing this issue. Will it help if we open another support ticket? Is there a preferred workaround we could document somewhere perhaps?
If anyone hasn't tried turning on grpc keepalive support here, please do try that to see if it helps anything. e.g.: For JavaScript:
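(The original code block didn't survive the thread export; the sketch below shows roughly what enabling grpc keepalives looks like. The interval values are illustrative, and it assumes gax passes options whose keys start with grpc. through as channel arguments.)

```javascript
const {PubSub} = require('@google-cloud/pubsub');

// Illustrative values: ping the server every 5 minutes, and give it
// 20 seconds to answer a ping before the channel is considered dead.
const pubsub = new PubSub({
  'grpc.keepalive_time_ms': 300000,
  'grpc.keepalive_timeout_ms': 20000,
});
```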
Or for TypeScript:
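(This code block was also lost in export; a sketch of the TypeScript equivalent, with the same illustrative keepalive values as above.)

```typescript
import {PubSub} from '@google-cloud/pubsub';

// Illustrative keepalive settings; tune for your environment.
const pubsub = new PubSub({
  'grpc.keepalive_time_ms': 300000,
  'grpc.keepalive_timeout_ms': 20000,
});
```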
Might also want to force-upgrade grpc-js in your project:
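(The exact command was stripped from the export; one way to do the force-upgrade is sketched below. The pinned version shown in the comment is a placeholder, and the "overrides" field requires npm 8+.)

```shell
# Install a newer grpc-js directly in your project...
npm install @grpc/grpc-js@latest

# ...or pin it for all transitive dependents via package.json, e.g.:
#   "overrides": { "@grpc/grpc-js": "^1.10.4" }
npm install
```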
These are all just workarounds, but some people are having success. We're still trying to get a root cause. You'll probably want to remove that grpc-js package dependency later so you get updates again, and we're talking about just making keepalives an always-on thing.
@feywind I just tried both of the suggested workarounds above. We were already using [email protected], so I added the grpc config keys as you suggested. Unfortunately, this does not solve the issue. We still see a constant 4 DEADLINE_EXCEEDED error every 20 minutes from our subscriber.
I'm getting this error at the moment using the same packages. Did you find a fix for this?
Same error, did you find a solution?
@feywind, I tried your suggestions but they are not working for me. Create sub error: Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
Please let us know if anyone has any other solution to this issue. Thanks!
For anyone experiencing the issue of DEADLINE_EXCEEDED error messages, is your application performing message acks?
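For context on that question: with the streaming API, every received message has to be acked (or nacked) explicitly, roughly as in this sketch (the subscription name is a placeholder):

```javascript
const {PubSub} = require('@google-cloud/pubsub');

const subscription = new PubSub().subscription('my-sub'); // placeholder

subscription.on('message', message => {
  try {
    // ... process message.data ...
    message.ack();  // tell the server this message is done
  } catch (err) {
    // Redeliver on failure; a message that is never acked or nacked just
    // keeps having its deadline extended by the client library.
    message.nack();
  }
});
```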
grpc/grpc-node#2690 might be the cause of your recent DEADLINE_EXCEEDED. Try updating @grpc/grpc-js to 1.10.4.
Hi @milo-, I have tried updating @grpc/grpc-js (1.10.1 to 1.10.4) but am still facing the same issue. Environment details: Thanks
There's a good repro here: #1885
Any news here? Did you find a solution? Does @milo-'s solution solve the problem?
We ended up migrating our pub/sub needs to Redis, since this error was intermittent, not reliably reproducible, and NOT fixed.
Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
at Object.callErrorFromStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/client.js:391:49)
at Object.onReceiveStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
at /usr/src/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
at processTicksAndRejections (internal/process/task_queues.js:77:11)
at runNextTicks (internal/process/task_queues.js:64:3)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
We're experiencing the issue on our production server at intervals.
Environment details:
@google-cloud/pubsub version: ^2.18.5
Thanks!