
Make sure that the logs won't make a huge failure worse #5

Open
ghost opened this issue Dec 7, 2015 · 3 comments

Comments

@ghost

ghost commented Dec 7, 2015

As described by @rachedbenmustapha in PR #1, we need to design some kind of fallback behavior for when enough errors occur simultaneously (but slowly) to fill memory with log entries.

We need to make sure that the logging system won't itself become an aggravating factor, and that it won't hog all the memory due to bad design decisions.

The first (and current) version will be simple, but we need to think about a long-term solution.

ghost added the enhancement label Dec 7, 2015
@GiorgioRegni
Contributor

Hey @DavidPineauScality, is this ticket still needed? It's 6 months old now, so either we need to work on it or we can close it.

@ghost
Author

ghost commented Sep 6, 2016

It's actually not something we've thought about yet. That being said, it's probably one of our lesser worries at the moment.

@rahulreddy
Collaborator

We could introduce a dumpLevelLimit (configurable, with a sensible default). Once the number of entries on a request logger reaches that limit, we would evict entries FIFO-style, as sketched below.
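A minimal TypeScript sketch of that idea. Only the `dumpLevelLimit` name and the FIFO behavior come from the suggestion above; the `RequestLogger` class, the `LogEntry` shape, and the default value are illustrative assumptions, not werelogs' actual API.

```typescript
// Illustrative sketch only: RequestLogger, LogEntry, and the default
// limit are assumptions, not werelogs' real API.
interface LogEntry {
    level: string;
    msg: string;
    timestamp: number;
}

class RequestLogger {
    private entries: LogEntry[] = [];

    // dumpLevelLimit caps how many entries a single request logger may
    // buffer; configurable, with an arbitrary default chosen here.
    constructor(private dumpLevelLimit: number = 1000) {}

    log(level: string, msg: string): void {
        this.entries.push({ level, msg, timestamp: Date.now() });
        // FIFO eviction: once the cap is hit, drop the oldest entry
        // so the buffer's memory use stays bounded.
        while (this.entries.length > this.dumpLevelLimit) {
            this.entries.shift();
        }
    }

    // Returns the (bounded) buffer, e.g. for dumping on error.
    dump(): readonly LogEntry[] {
        return this.entries;
    }
}
```

FIFO eviction keeps the newest entries, which are usually the most useful when dumping after a failure; the trade-off is that the earliest context of a long-running request is lost once the cap is hit.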
