isDaemon: memory usage from client connections persists after connection is lost #75

I am observing this behavior on yaqd-lightcon-topas4-motor, but I believe it should occur in all daemons, so I am posting it here. When I make a client connection to a lightcon-topas4-motor daemon (c = yaqc.Client(...)), I can see the memory usage of the daemon go up by ~100 kB. When I close the client connection, that memory remains in use by the daemon; each additional connection adds another ~100 kB. This may not seem like a big issue, but it can snowball quickly. For example, when yaqd-attune runs as a background process and fails to initialize its dependents (which can happen when the lightcon server is down), it retries the connection to every lightcon daemon every ~2 seconds, so after hours these connections take up GBs of memory in the daemon.

Comments
My best guess from a quick look at the code is that we never actually kill off the task created in yaq-python/yaqd-core/yaqd_core/_protocol.py (lines 30 to 31 at 2735e4e), which is preventing things from being garbage collected, etc. I think it may be as simple as keeping a reference to that task and then calling cancel() on it when the connection is lost.
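For illustration, a minimal sketch of that idea, assuming the per-connection handler is an asyncio.Protocol subclass that spawns a task in connection_made. The names _task, _consume, and daemon.tasks here are illustrative placeholders, not the actual yaqd-core attributes:

```python
import asyncio


class Protocol(asyncio.Protocol):
    """Illustrative sketch, not the real yaqd-core Protocol."""

    def __init__(self, daemon):
        self._daemon = daemon
        self._task = None  # handle to this connection's task

    def connection_made(self, transport):
        self.transport = transport
        # keep a reference to the task instead of fire-and-forget
        self._task = asyncio.get_running_loop().create_task(self._consume())
        self._daemon.tasks.append(self._task)  # assumes the daemon keeps a task list

    def connection_lost(self, exc):
        # cancel the per-connection task and drop the daemon's reference to it,
        # so this Protocol instance (and its buffers) can be garbage collected
        if self._task is not None:
            self._task.cancel()
            if self._task in self._daemon.tasks:
                self._daemon.tasks.remove(self._task)
            self._task = None

    async def _consume(self):
        # stand-in for the real read/dispatch loop
        while True:
            await asyncio.sleep(1)
```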
I can confirm this behavior is reproduced by repeatedly opening client connections.

@ksunden, I added your task cancellation to Protocol and also removed the task from the daemon's list like you suggested. Tasks do get cancelled, but I don't see significant changes in memory usage behavior.
My interpretation of the code modification was incorrect; I didn't account for how memory usage can fluctuate. If I execute a lot of connections (~1000s), I do see the memory usage contract eventually.
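For reference, a hypothetical way to watch this from the outside is to repeatedly connect with yaqc while reading the daemon's resident memory via psutil. The PID, port, counts, and the assumption that yaqc.Client(port) connects on construction are placeholders, not taken from the issue:

```python
import time

import psutil  # third-party, used here only to read the daemon's RSS
import yaqc

DAEMON_PID = 12345  # placeholder: PID of the running daemon
PORT = 39000        # placeholder: the daemon's port

daemon = psutil.Process(DAEMON_PID)

for i in range(2000):
    c = yaqc.Client(PORT)  # open a connection to the daemon
    del c                  # drop our reference; the daemon should clean up its side
    if i % 100 == 0:
        print(f"{i:5d} connections, daemon RSS = {daemon.memory_info().rss / 1e6:.1f} MB")
    time.sleep(0.01)
```

Since the comment above notes that memory only contracts after many connections, a coarse print interval like this should still show the overall trend.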
Closed via #77.