📜 Description
Orchestrator memory usage has been observed to increase gradually day over day, and after a few days the pod gets OOMKilled.
👟 Reproduction steps
1.) Use the Cluster Terminal and leave it in a dangling state
2.) Fetch the Ingress URLs
👍 Expected behavior
Memory should be garbage-collected once the objects are freed.
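For context on why the GC may not reclaim this memory: Go frees an object only once nothing references it, so a dangling terminal session that stays registered in some process-global structure is retained forever. The sketch below illustrates that pattern with hypothetical names (`session`, `registry`, `openSession`); it is not Devtron's actual code, just a minimal model of the suspected leak shape.

```go
package main

import (
	"fmt"
	"runtime"
)

// session is a hypothetical stand-in for a cluster-terminal session
// (illustrative only, not Devtron's real type).
type session struct {
	buf []byte // per-session buffer, ~1 MiB
}

// registry models a process-global map holding a live reference to every
// open session. Entries that are never deleted can never be reclaimed,
// so memory grows monotonically.
var registry = map[int]*session{}

func openSession(id int)  { registry[id] = &session{buf: make([]byte, 1<<20)} }
func closeSession(id int) { delete(registry, id) } // drop the reference so the GC can reclaim it

// liveHeapMiB forces a collection and reports live heap in MiB.
func liveHeapMiB() int64 {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return int64(m.HeapAlloc) >> 20
}

func main() {
	base := liveHeapMiB()

	// Simulate 50 terminals left dangling: opened but never closed.
	for i := 0; i < 50; i++ {
		openSession(i)
	}
	fmt.Printf("dangling: ~%d MiB retained\n", liveHeapMiB()-base)

	// Once the references are dropped, the same GC reclaims the memory.
	for i := 0; i < 50; i++ {
		closeSession(i)
	}
	fmt.Printf("cleaned:  ~%d MiB retained\n", liveHeapMiB()-base)
}
```

Running this shows the live heap staying elevated while the sessions dangle and dropping back once the references are removed; a heap profile (`pprof`) of the real Orchestrator would confirm whether terminal sessions or Ingress listing objects are what is being retained.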
👎 Actual Behavior
After 10-15 days, the Orchestrator gets OOMKilled.
☸ Kubernetes version
EKS 1.23
Cloud provider
🌍 Browser
Chrome
🧱 Your Environment
No response
✅ Proposed Solution
No response
👀 Have you spent some time to check if this issue has been raised before?
🏢 Have you read the Code of Conduct?