Environment: Express app using pino 9.5.0, pino-http 10.3.0 running in a container in Kubernetes
Issue: After several hours of running, the app, which emits roughly 20k log lines per hour, stops outputting logs. Over the next few hours the pod's memory usage climbs and levels off (~1.3 GB), after which it outputs a flood of errors until the pod is manually restarted.
The issue sometimes crops up multiple times in a day and sometimes only after several days. We run multiple replicas of the app, and only one replica exhibits the issue at a time.
I haven't found much information on flushSync or memory-related errors similar to these.
In the meantime we'll test whether the issue still occurs with synchronous logging (a sketch of what that change looks like is below).
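For reference, switching pino to synchronous logging means writing to stdout on the main thread via pino.destination with sync: true, which bypasses the thread-stream worker that the errors below originate from. A minimal sketch, not our exact config (the level and log message are illustrative):

```js
const pino = require('pino')

// Synchronous destination: every log line is written to stdout (fd 1)
// on the main thread, so no thread-stream worker is involved.
const logger = pino(
  { level: 'info' },
  pino.destination({ dest: 1, sync: true })
)

logger.info('written synchronously')
```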
Any help or suggestions would be appreciated. Thanks.
Initial error, after memory usage levels off
Error in pino transport Error: _flushSync took too long (10s)
at flushSync (/usr/src/app/node_modules/thread-stream/index.js:531:13)
at writeSync (/usr/src/app/node_modules/thread-stream/index.js:468:7)
at ThreadStream.write (/usr/src/app/node_modules/thread-stream/index.js:249:9)
at Pino.write (/usr/src/app/node_modules/pino/lib/proto.js:217:10)
at Pino.LOG [as info] (/usr/src/app/node_modules/pino/lib/tools.js:62:21)
at onResFinished (/usr/src/app/node_modules/pino-http/logger.js:129:15)
at ServerResponse.onResponseComplete (/usr/src/app/node_modules/pino-http/logger.js:178:14)
at ServerResponse.emit (node:events:526:35)
at onFinish (node:_http_outgoing:1005:10)
at callback (node:internal/streams/writable:608:21)
Subsequent errors - these spam the logs (about 2k every 5 minutes)
Error in pino transport Error: the worker has exited
at ThreadStream.write (/usr/src/app/node_modules/thread-stream/index.js:238:19)
at Pino.write (/usr/src/app/node_modules/pino/lib/proto.js:217:10)
at Pino.LOG [as info] (/usr/src/app/node_modules/pino/lib/tools.js:71:21)
[The rest of the stack trace varies depending on which method called the logger]
@mcollina
I may have edited the code right after you saw it, but we have tried both the async and sync settings; the issue occurs with both. I'll include the additional init code below. The only notable modifications we have are setting hostname to undefined (to prevent logging the hostname) and a custom logLevel.
Transport definition
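The original snippet was not captured here; what follows is a minimal sketch of a setup matching the description above (a pino/file transport to stdout, hostname suppressed via the base bindings, and a custom per-response log level in pino-http). The option values and the customLogLevel thresholds are assumptions, not the exact production config:

```js
const pino = require('pino')
const pinoHttp = require('pino-http')

// Logger writing to stdout through pino's worker-thread transport.
// hostname: undefined drops the hostname binding from every log line,
// as described above (pino omits undefined values when serializing).
const logger = pino({
  level: 'info',
  base: { pid: process.pid, hostname: undefined },
  transport: {
    target: 'pino/file',        // built-in transport target
    options: { destination: 1 } // fd 1 = stdout
  }
})

// pino-http middleware with a custom logLevel; these thresholds are
// illustrative, not the app's actual rules.
const httpLogger = pinoHttp({
  logger,
  customLogLevel (req, res, err) {
    if (err || res.statusCode >= 500) return 'error'
    if (res.statusCode >= 400) return 'warn'
    return 'info'
  }
})

module.exports = { logger, httpLogger }
```

Wiring this into Express with app.use(httpLogger) is what produces the onResFinished frames in the stack traces above: pino-http logs each response as it completes.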