Adding karpenter.sh/nodeclaim-name:<nodeclaim-name> as label to a node #1668
Comments
This issue is currently awaiting triage. If Karpenter contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
We previously attempted to do this here: #996, but it got put on the back burner in favor of other PRs. Let us know if you're interested in implementing!
I appreciate the update and understand that other priorities may have taken precedence. I am indeed interested in contributing to Karpenter and would like to explore this further. However, I would appreciate some guidance on the internal source code and architecture so I can contribute to this effort effectively. If there are any resources or documentation you could share, I would be grateful.
@Balraj06 I think the best guidance would be the previous PR I linked. Be aware that this might be deceptively hard to implement: exposing this as a node label means pods could potentially schedule against it, so we'd likely want to exclude it as an allowable scheduling constraint in our scheduling simulations. Feel free to reach out in karpenter-dev for further advice.
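To make that scheduling concern concrete, here is a minimal Go sketch using the upstream corev1 types. The label and the NodeClaim name `default-abc12` are hypothetical (the label does not exist yet); the point is that a NodeClaim name is only known after provisioning, so a selector like this could never be satisfied by a node the provisioning simulation has yet to create.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod that node-selects on the proposed label. Because
	// NodeClaim names are generated at provisioning time, this requirement
	// can only match one already-existing node, which is why the label would
	// likely need to be excluded from allowable scheduling constraints.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pinned-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				// Proposed label; the value below is a made-up NodeClaim name.
				"karpenter.sh/nodeclaim-name": "default-abc12",
			},
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	fmt.Printf("pod %q would only ever schedule onto the node backed by NodeClaim %q\n",
		pod.Name, pod.Spec.NodeSelector["karpenter.sh/nodeclaim-name"])
}
```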
Description
What problem are you trying to solve?
This label is required to identify which bin-packed set of resource requests triggered the creation of an instance.
How important is this feature to you?
It is needed to identify the pod bin-packing overhead in our Kubernetes clusters.
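As a rough illustration of how the requested label could be consumed once it exists, the sketch below lists nodes and prints the hypothetical `karpenter.sh/nodeclaim-name` label next to each node's allocatable capacity, which is one way to start measuring bin-packing overhead per NodeClaim. The label, the kubeconfig bootstrap, and the printed fields are assumptions for illustration, not an existing Karpenter API.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (assumption: run
	// locally against the cluster, not in-cluster).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// "karpenter.sh/nodeclaim-name" is the proposed label; it is empty today.
		claim := node.Labels["karpenter.sh/nodeclaim-name"]
		fmt.Printf("node=%s nodeclaim=%s allocatable-cpu=%s allocatable-memory=%s\n",
			node.Name, claim,
			node.Status.Allocatable.Cpu().String(),
			node.Status.Allocatable.Memory().String())
	}
}
```

Comparing each node's allocatable capacity against the pod resource requests that landed on it (or against the originating NodeClaim's requests) would then surface the bin-packing overhead the description refers to.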