Many CLOSE_WAIT connections #2676
Have you tried restarting the container rather than the whole machine?
No, I plan to try that next time.
@aantron: Any views on this? How could OS changes affect Dream's networking layer and cause this?
@cuihtlauac Which OS changes are you referring to? Definitely need to look into this regardless.
Inside its container, the server runs on Alpine 3.20.2. The last update to this came in PR #2609.
That looks like a Docker image change. At least in terms of the image names, I see that it was…
If not, a practical solution for now might be to move back to the 4.14 images. That PR doesn't require OCaml 5.2 for OCaml.org, it only allows it, and we eagerly switched to it, which might have been too aggressive if there is some flaw in OCaml 5.2 itself, in how Dream or its dependencies interact with the 5.2 runtime, or in something else in the OCaml 5.2 image.
It's also possible that some of the other changes in that PR are causing this somehow: opam-repository is pinned to a newer commit (there might be a fresh bug in an upstream library from opam), or the branch of River that I proposed is somehow responsible (which seems highly unlikely, but I've done nothing yet to empirically rule it out). I would start by running the current…
This morning, some CLOSE_WAIT connections appeared. They are gone now. Looking at the logs, I found this:
[log excerpt: errors, including one involving Dream_encoding]
It looks like one of the issues might be with Dream_encoding, cc @tmattio. However, even in the presence of any other issues, Dream shouldn't be leaking connections, so that's a separate question. Do we know whether any of these errors are related to the fds in the CLOSE_WAIT state? Perhaps, at a minimum, Dream should print the OS fd number to its logs to help with analyzing this kind of situation.
No, we don't.
I've added fd number logging to Dream in aantron/dream#345, now merged in. Would you be able to try running with it?
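For readers following along, here is a minimal sketch of what fd-number logging can look like in OCaml. This is not Dream's actual implementation (see aantron/dream#345 for that); `fd_to_int` and `log_with_fd` are hypothetical names, and the cast relies on the fact that on Unix systems `Unix.file_descr` is represented by the OS file descriptor integer, a well-known but implementation-dependent trick.

```ocaml
(* Hedged sketch, not Dream's code: expose the OS-level fd number so log
   lines can be correlated with the output of tools like ss or lsof. *)

(* On Unix, Unix.file_descr is an int under the hood, so Obj.magic
   recovers it. Implementation-dependent; does not hold on Windows. *)
let fd_to_int (fd : Unix.file_descr) : int = Obj.magic fd

(* Prefix a log message with the fd it concerns. *)
let log_with_fd (fd : Unix.file_descr) (msg : string) : unit =
  Printf.eprintf "[fd %d] %s\n%!" (fd_to_int fd) msg
```

With something like this in place, a log line such as `[fd 57] closing connection` can be matched against an fd reported as stuck in CLOSE_WAIT by `lsof -p <pid>`.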
Recently, the server seems to become slower and slower until it is unresponsive. Only package documentation pages are affected.
Connecting to the server and the Docker container shows an apparently ever-increasing number of TCP connections in CLOSE_WAIT status.
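For context: CLOSE_WAIT means the remote peer has closed its end of the connection but the local process has not yet called close() on its socket, so a steadily growing count points at file descriptors being leaked by the application. On Linux, `ss state close-wait` lists these connections; the hypothetical standalone OCaml sketch below counts them by scanning /proc/net/tcp (IPv6 connections would need /proc/net/tcp6 as well).

```ocaml
(* Rough diagnostic sketch, not part of the server: count sockets in
   CLOSE_WAIT by scanning /proc/net/tcp. Linux-specific; the fourth
   column of each row is the TCP state in hex, and CLOSE_WAIT is "08". *)
let count_close_wait path =
  let ic = open_in path in
  let count = ref 0 in
  (try
     ignore (input_line ic);  (* skip the header line *)
     while true do
       let fields =
         input_line ic
         |> String.split_on_char ' '
         |> List.filter (fun s -> s <> "")
       in
       match fields with
       | _slot :: _local :: _remote :: "08" :: _ -> incr count
       | _ -> ()
     done
   with End_of_file -> ());
  close_in ic;
  !count

let () =
  Printf.printf "CLOSE_WAIT sockets: %d\n"
    (count_close_wait "/proc/net/tcp")
```

Running this (or the equivalent ss command) inside the container on a schedule would confirm whether the count grows monotonically between reboots.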
As a workaround, rebooting the machine puts the server back in a working state, but daily reboots are needed.