
Make QueuedTracking more stable when running out of memory #28

Open · tsteur opened this issue Dec 15, 2015 · 2 comments

tsteur commented Dec 15, 2015

Currently, we assume that enough memory is always available. Usually, processing the data from Redis into the database should be fairly fast, and the queue should not take up much space. However, if there is e.g. a problem with tracking, requests might just accumulate in Redis and never be removed, for example when it is not possible to acquire a lock (see #22 and #24). In such cases it is possible to run out of memory over time.

We should think about ways to make the queue handle such problems better.

  • Maybe we can detect an out-of-memory condition and print or log a clear error message.
  • Also, when enabling the queue and when testing the queue via "queuedtracking::test", we should check whether e.g. the noeviction or allkeys-lru eviction policy is activated (these are OK). Other policies might cause problems when memory runs low and will most likely always evict our lock; see the sketch after this list.
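
A minimal sketch of that policy check, assuming a connected phpredis \Redis instance; the function name and the exact wiring into "queuedtracking::test" are made up:

```php
// Sketch: warn if the configured eviction policy could evict the queue lock.
// Assumes an already-connected phpredis \Redis instance is passed in.
function checkEvictionPolicy(\Redis $redis)
{
    // CONFIG GET returns e.g. array('maxmemory-policy' => 'noeviction')
    $config = $redis->config('GET', 'maxmemory-policy');
    $policy = isset($config['maxmemory-policy']) ? $config['maxmemory-policy'] : '';

    $safePolicies = array('noeviction', 'allkeys-lru');

    if (!in_array($policy, $safePolicies, true)) {
        // Any volatile-* policy evicts keys with a TTL first, which is
        // most likely our lock key.
        echo sprintf(
            "Warning: maxmemory-policy is '%s'. Use 'noeviction' or 'allkeys-lru' so the queue lock is not evicted.\n",
            $policy
        );
    }
}
```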

Background:
If no more memory is available and e.g. volatile-lru is set, Redis would always evict our lock key first, as it is probably the only key with an expire set. The same goes for volatile-random etc.

From http://redis.io/topics/lru-cache:

  • volatile-lru: evict keys trying to remove the less recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
  • allkeys-lru: evict keys trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
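
For context on why the lock is the first candidate for eviction: the lock is set with an expiry so that a crashed worker cannot hold it forever. A hypothetical acquisition (key name, value, and TTL are made up) looks roughly like this with phpredis:

```php
// Sketch only: the lock carries a TTL (via the 'ex' option), while the
// queued tracking requests do not, so any volatile-* policy targets it first.
$lockValue = uniqid('', true);
$acquired  = $redis->set('QueuedTrackingLock', $lockValue, array('nx', 'ex' => 60));

if ($acquired) {
    // ... process queued tracking requests, then release the lock.
    $redis->del('QueuedTrackingLock');
}
```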

I was also thinking about using two databases, one for the lock etc. and one for the actual tracking requests, but this wouldn't solve much. We could have a small database just for the lock, which wouldn't need much space, maybe 1MB. This way we would make sure to never evict the lock key, but I think it is not really needed, as it would make configuration more difficult.

Maybe there are other things we can do too?


tsteur commented Dec 15, 2015

There could be a daily scheduled task checking whether e.g. 10% or 20% of memory is still free and, if not, sending a warning to super users.
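
A sketch of what such a task could check, again assuming a phpredis connection; the threshold, the function name, and the notification hook are assumptions, and the check only works when a maxmemory limit is configured:

```php
// Sketch: returns true when Redis uses more than $maxUsedRatio of its
// configured maxmemory. Assumes a connected phpredis \Redis instance.
function isRedisMemoryLow(\Redis $redis, $maxUsedRatio = 0.8)
{
    $info      = $redis->info('memory');
    $maxMemory = isset($info['maxmemory']) ? (int) $info['maxmemory'] : 0;

    if ($maxMemory === 0) {
        // No maxmemory configured; Redis is only bounded by system memory,
        // which this simple check cannot see.
        return false;
    }

    $usedMemory = isset($info['used_memory']) ? (int) $info['used_memory'] : 0;

    return ($usedMemory / $maxMemory) > $maxUsedRatio;
}

// In a daily scheduled task, roughly:
// if (isRedisMemoryLow($redis, 0.8)) { /* warn super users, e.g. by email */ }
```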

toredash (Contributor) commented

It would be nice to have a feature, when enabled, that would insert tracking requests directly into the database if Redis is nearing out of memory. I would rather have increased DB load than miss analytics data.
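
A rough sketch of that fallback, reusing the isRedisMemoryLow() check from the earlier comment; both insert helpers are hypothetical placeholders for the plugin's real code paths:

```php
// Sketch: route a tracking request directly to the DB when Redis is nearly
// full, instead of risking an eviction or a failed write.
function queueOrInsertDirectly(\Redis $redis, array $request)
{
    if (isRedisMemoryLow($redis)) {
        // Accept higher DB load rather than losing the request.
        insertTrackingRequestIntoDatabase($request);        // hypothetical helper
    } else {
        pushTrackingRequestToRedisQueue($redis, $request);  // hypothetical helper
    }
}
```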
