
Ability to define threshold usage percentage for Karpenter to consider nodes for consolidation when using WhenEmptyOrUnderutilized #1686

Closed
andrewhibbert opened this issue Sep 18, 2024 · 3 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

andrewhibbert commented Sep 18, 2024

Description

What problem are you trying to solve?

Since v1 I have been seeing more-utilized nodes consolidated ahead of less-utilized ones. I'd like to be able to set a threshold percentage so that well-utilized nodes are ignored for consolidation.

Example: this node is underutilized:

Name:               ip-10-138-104-15.eu-west-1.compute.internal
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=r7a.4xlarge
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eu-west-1
                    failure-domain.beta.kubernetes.io/zone=eu-west-1b
                    k8s.io/cloud-provider-aws=e86087ba30a6f3944c59fd13e20eff23
                    karpenter.k8s.aws/instance-category=r
                    karpenter.k8s.aws/instance-cpu=16
                    karpenter.k8s.aws/instance-cpu-manufacturer=amd
                    karpenter.k8s.aws/instance-ebs-bandwidth=10000
                    karpenter.k8s.aws/instance-encryption-in-transit-supported=true
                    karpenter.k8s.aws/instance-family=r7a
                    karpenter.k8s.aws/instance-generation=7
                    karpenter.k8s.aws/instance-hypervisor=nitro
                    karpenter.k8s.aws/instance-memory=131072
                    karpenter.k8s.aws/instance-network-bandwidth=6250
                    karpenter.k8s.aws/instance-size=4xlarge
                    karpenter.sh/capacity-type=spot
                    karpenter.sh/initialized=true
                    karpenter.sh/nodepool=enrichment-service
                    karpenter.sh/registered=true
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-10-138-104-15.eu-west-1.compute.internal
                    kubernetes.io/os=linux
                    lifecycle=Ec2Spot
                    node.kubernetes.io/instance-type=r7a.4xlarge
                    nodegroup=EnrichmentService
                    topology.ebs.csi.aws.com/zone=eu-west-1b
                    topology.k8s.aws/zone-id=euw1-az1
                    topology.kubernetes.io/region=eu-west-1
                    topology.kubernetes.io/zone=eu-west-1b
Annotations:        alpha.kubernetes.io/provided-node-ip: 10.138.104.15
                    argocd.argoproj.io/compare-options: IgnoreExtraneous
                    argocd.argoproj.io/sync-options: Prune=false
                    compatibility.karpenter.k8s.aws/kubelet-drift-hash: 15379597991425564585
                    csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-003d6b57c7066e375","efs.csi.aws.com":"i-003d6b57c7066e375"}
                    karpenter.k8s.aws/ec2nodeclass-hash: 1109691076529490063
                    karpenter.k8s.aws/ec2nodeclass-hash-version: v3
                    karpenter.sh/nodepool-hash: 3415431166760717406
                    karpenter.sh/nodepool-hash-version: v3
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 18 Sep 2024 09:53:53 +0100
Taints:             enrichment_service_spot:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ip-10-138-104-15.eu-west-1.compute.internal
  AcquireTime:     <unset>
  RenewTime:       Wed, 18 Sep 2024 10:23:21 +0100
Conditions:
  Type                    Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                    ------  -----------------                 ------------------                ------                       -------
  KernelDeadlock          False   Wed, 18 Sep 2024 10:19:20 +0100   Wed, 18 Sep 2024 09:54:16 +0100   KernelHasNoDeadlock          kernel has no deadlock
  ReadonlyFilesystem      False   Wed, 18 Sep 2024 10:19:20 +0100   Wed, 18 Sep 2024 09:54:16 +0100   FilesystemIsNotReadOnly      Filesystem is not read-only
  CorruptDockerOverlay2   False   Wed, 18 Sep 2024 10:19:20 +0100   Wed, 18 Sep 2024 09:54:16 +0100   NoCorruptDockerOverlay2      docker overlay2 is functioning properly
  MemoryPressure          False   Wed, 18 Sep 2024 10:23:19 +0100   Wed, 18 Sep 2024 09:53:52 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure            False   Wed, 18 Sep 2024 10:23:19 +0100   Wed, 18 Sep 2024 09:53:52 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure             False   Wed, 18 Sep 2024 10:23:19 +0100   Wed, 18 Sep 2024 09:53:52 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                   True    Wed, 18 Sep 2024 10:23:19 +0100   Wed, 18 Sep 2024 09:54:07 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:   10.138.104.15
  InternalDNS:  ip-10-138-104-15.eu-west-1.compute.internal
  Hostname:     ip-10-138-104-15.eu-west-1.compute.internal
Capacity:
  cpu:                16
  ephemeral-storage:  78630892Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             129113520Ki
  pods:               234
Allocatable:
  cpu:                15890m
  ephemeral-storage:  71392488124
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             126114224Ki
  pods:               234
System Info:
  Machine ID:                 ec2884672117060126b62e9ac7de629c
  System UUID:                ec2feba9-af45-676f-3511-296737e66ea3
  Boot ID:                    152f0f8b-0fa4-4f14-a5bd-147d547c722b
  Kernel Version:             5.10.223-212.873.amzn2.x86_64
  OS Image:                   Amazon Linux 2
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.11
  Kubelet Version:            v1.28.11-eks-1552ad0
  Kube-Proxy Version:         v1.28.11-eks-1552ad0
ProviderID:                   aws:///eu-west-1b/i-003d6b57c7066e375
Non-terminated Pods:          (11 in total)
  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
  enrichment-services-np-99   enrichment-services-np-99-elasticsearch-put-service-async-ch5jl    10m (0%)      0 (0%)      1100Mi (0%)      1536Mi (1%)    74s
  enrichment-services-np-99   enrichment-services-np-99-stage-coupler-service-async-79f48fqlc    2 (12%)       3 (18%)     512Mi (0%)       768Mi (0%)     29s
  infra                       infra-fluent-bit-kq7hp                                             10m (0%)      200m (1%)   8Mi (0%)         500Mi (0%)     29m
  infra                       infra-node-problem-detector-s4dhm                                  100m (0%)     200m (1%)   10Mi (0%)        40Mi (0%)      29m
  kube-system                 aws-node-ml7f8                                                     50m (0%)      0 (0%)      40Mi (0%)        0 (0%)         29m
  kube-system                 calico-node-c8mxb                                                  10m (0%)      0 (0%)      20Mi (0%)        0 (0%)         29m
  kube-system                 ebs-csi-node-wh4d4                                                 30m (0%)      0 (0%)      120Mi (0%)       768Mi (0%)     29m
  kube-system                 efs-csi-node-6pkt6                                                 30m (0%)      0 (0%)      60Mi (0%)        0 (0%)         29m
  kube-system                 kube-proxy-9g6j2                                                   100m (0%)     0 (0%)      20Mi (0%)        0 (0%)         29m
  kubecost                    kubecost-network-costs-fg9zc                                       50m (0%)      500m (3%)   20Mi (0%)        0 (0%)         29m
  newrelic                    newrelic-nrk8s-kubelet-9gh92                                       200m (1%)     0 (0%)      300M (0%)        600M (0%)      29m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                2590m (16%)      3900m (24%)
  memory             2302780160 (1%)  4387456512 (3%)
  ephemeral-storage  0 (0%)           0 (0%)
  hugepages-1Gi      0 (0%)           0 (0%)
  hugepages-2Mi      0 (0%)           0 (0%)
Events:
  Type     Reason                   Age                From                   Message
  ----     ------                   ----               ----                   -------
  Normal   Starting                 29m                kube-proxy
  Normal   Starting                 29m                kubelet                Starting kubelet.
  Warning  InvalidDiskCapacity      29m                kubelet                invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  29m (x2 over 29m)  kubelet                Node ip-10-138-104-15.eu-west-1.compute.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    29m (x2 over 29m)  kubelet                Node ip-10-138-104-15.eu-west-1.compute.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     29m (x2 over 29m)  kubelet                Node ip-10-138-104-15.eu-west-1.compute.internal status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  29m                kubelet                Updated Node Allocatable limit across pods
  Normal   Synced                   29m                cloud-node-controller  Node synced successfully
  Normal   RegisteredNode           29m                node-controller        Node ip-10-138-104-15.eu-west-1.compute.internal event: Registered Node ip-10-138-104-15.eu-west-1.compute.internal in Controller
  Normal   DisruptionBlocked        29m                karpenter              Cannot disrupt Node: state node isn't initialized
  Normal   NodeReady                29m                kubelet                Node ip-10-138-104-15.eu-west-1.compute.internal status is now: NodeReady

This is more utilized:

Name:               ip-10-138-109-39.eu-west-1.compute.internal
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/instance-type=m6g.2xlarge
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eu-west-1
                    failure-domain.beta.kubernetes.io/zone=eu-west-1c
                    k8s.io/cloud-provider-aws=e86087ba30a6f3944c59fd13e20eff23
                    karpenter.k8s.aws/instance-category=m
                    karpenter.k8s.aws/instance-cpu=8
                    karpenter.k8s.aws/instance-cpu-manufacturer=aws
                    karpenter.k8s.aws/instance-ebs-bandwidth=4750
                    karpenter.k8s.aws/instance-encryption-in-transit-supported=false
                    karpenter.k8s.aws/instance-family=m6g
                    karpenter.k8s.aws/instance-generation=6
                    karpenter.k8s.aws/instance-hypervisor=nitro
                    karpenter.k8s.aws/instance-memory=32768
                    karpenter.k8s.aws/instance-network-bandwidth=2500
                    karpenter.k8s.aws/instance-size=2xlarge
                    karpenter.sh/capacity-type=spot
                    karpenter.sh/initialized=true
                    karpenter.sh/nodepool=enrichment-service
                    karpenter.sh/registered=true
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=ip-10-138-109-39.eu-west-1.compute.internal
                    kubernetes.io/os=linux
                    lifecycle=Ec2Spot
                    node.kubernetes.io/instance-type=m6g.2xlarge
                    nodegroup=EnrichmentService
                    topology.ebs.csi.aws.com/zone=eu-west-1c
                    topology.k8s.aws/zone-id=euw1-az2
                    topology.kubernetes.io/region=eu-west-1
                    topology.kubernetes.io/zone=eu-west-1c
Annotations:        alpha.kubernetes.io/provided-node-ip: 10.138.109.39
                    argocd.argoproj.io/compare-options: IgnoreExtraneous
                    argocd.argoproj.io/sync-options: Prune=false
                    compatibility.karpenter.k8s.aws/kubelet-drift-hash: 15379597991425564585
                    csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-02a3c1cf6e820667f","efs.csi.aws.com":"i-02a3c1cf6e820667f"}
                    karpenter.k8s.aws/ec2nodeclass-hash: 1109691076529490063
                    karpenter.k8s.aws/ec2nodeclass-hash-version: v3
                    karpenter.sh/nodepool-hash: 3415431166760717406
                    karpenter.sh/nodepool-hash-version: v3
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 18 Sep 2024 08:41:01 +0100
Taints:             enrichment_service_spot:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ip-10-138-109-39.eu-west-1.compute.internal
  AcquireTime:     <unset>
  RenewTime:       Wed, 18 Sep 2024 10:23:23 +0100
Conditions:
  Type                    Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                    ------  -----------------                 ------------------                ------                       -------
  KernelDeadlock          False   Wed, 18 Sep 2024 10:22:01 +0100   Wed, 18 Sep 2024 08:41:48 +0100   KernelHasNoDeadlock          kernel has no deadlock
  ReadonlyFilesystem      False   Wed, 18 Sep 2024 10:22:01 +0100   Wed, 18 Sep 2024 08:41:48 +0100   FilesystemIsNotReadOnly      Filesystem is not read-only
  CorruptDockerOverlay2   False   Wed, 18 Sep 2024 10:22:01 +0100   Wed, 18 Sep 2024 08:41:49 +0100   NoCorruptDockerOverlay2      docker overlay2 is functioning properly
  MemoryPressure          False   Wed, 18 Sep 2024 10:18:42 +0100   Wed, 18 Sep 2024 08:41:01 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure            False   Wed, 18 Sep 2024 10:18:42 +0100   Wed, 18 Sep 2024 08:41:01 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure             False   Wed, 18 Sep 2024 10:18:42 +0100   Wed, 18 Sep 2024 08:41:01 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                   True    Wed, 18 Sep 2024 10:18:42 +0100   Wed, 18 Sep 2024 08:41:19 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:   10.138.109.39
  InternalDNS:  ip-10-138-109-39.eu-west-1.compute.internal
  Hostname:     ip-10-138-109-39.eu-west-1.compute.internal
Capacity:
  cpu:                8
  ephemeral-storage:  78621676Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             32119604Ki
  pods:               58
Allocatable:
  cpu:                7910m
  ephemeral-storage:  71383994658
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             31102772Ki
  pods:               58
System Info:
  Machine ID:                 ec2baaafef43824447c0c3c5a0083ce9
  System UUID:                ec216dd9-32c6-97b9-e7e8-14a151e259f9
  Boot ID:                    ccee4912-f4b0-4af1-a85a-354c845373de
  Kernel Version:             5.10.223-212.873.amzn2.aarch64
  OS Image:                   Amazon Linux 2
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.7.11
  Kubelet Version:            v1.28.11-eks-1552ad0
  Kube-Proxy Version:         v1.28.11-eks-1552ad0
ProviderID:                   aws:///eu-west-1c/i-02a3c1cf6e820667f
Non-terminated Pods:          (20 in total)
  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
  enrichment-services-np-01   enrichment-services-np-01-datalake-read-service-sync-6d9c94lbzf    500m (6%)     0 (0%)      2Gi (6%)         3Gi (10%)      34m
  enrichment-services-np-01   enrichment-services-np-01-datalake-read-service-sync-6d9c9wj5q6    500m (6%)     0 (0%)      2Gi (6%)         3Gi (10%)      61m
  enrichment-services-np-01   enrichment-services-np-01-elasticsearch-put-service-async-2b85d    10m (0%)      0 (0%)      1100Mi (3%)      1536Mi (5%)    56m
  enrichment-services-np-01   enrichment-services-np-01-es-doc-index-service-async-54864hnkt6    10m (0%)      0 (0%)      1100Mi (3%)      20Gi (67%)     56m
  enrichment-services-np-50   enrichment-services-np-50-datalake-read-service-sync-7bc4cjjtj4    10m (0%)      0 (0%)      2Gi (6%)         3Gi (10%)      61m
  enrichment-services-np-50   enrichment-services-np-50-datalake-read-service-sync-7bc4clg5cc    10m (0%)      0 (0%)      2Gi (6%)         3Gi (10%)      34m
  enrichment-services-np-50   enrichment-services-np-50-datalake-read-service-sync-7bc4clmt22    10m (0%)      0 (0%)      2Gi (6%)         3Gi (10%)      61m
  enrichment-services-np-50   enrichment-services-np-50-sentence-service-async-6c7fb9866ssm6b    10m (0%)      0 (0%)      1100Mi (3%)      3Gi (10%)      56m
  enrichment-services-np-99   enrichment-services-np-99-datalake-read-service-sync-7b8bfflz54    10m (0%)      0 (0%)      2Gi (6%)         3Gi (10%)      61m
  enrichment-services-np-99   enrichment-services-np-99-datalake-read-service-sync-7b8bfs6879    10m (0%)      0 (0%)      2Gi (6%)         3Gi (10%)      34m
  enrichment-services-np-99   enrichment-services-np-99-elasticsearch-put-service-async-jfflv    10m (0%)      0 (0%)      1100Mi (3%)      1536Mi (5%)    66m
  infra                       infra-fluent-bit-29p9w                                             10m (0%)      200m (2%)   8Mi (0%)         500Mi (1%)     102m
  infra                       infra-node-problem-detector-nmplv                                  100m (1%)     200m (2%)   10Mi (0%)        40Mi (0%)      102m
  kube-system                 aws-node-9v92q                                                     50m (0%)      0 (0%)      40Mi (0%)        0 (0%)         102m
  kube-system                 calico-node-kf89d                                                  10m (0%)      0 (0%)      20Mi (0%)        0 (0%)         102m
  kube-system                 ebs-csi-node-b554b                                                 30m (0%)      0 (0%)      120Mi (0%)       768Mi (2%)     102m
  kube-system                 efs-csi-node-bgjvt                                                 30m (0%)      0 (0%)      60Mi (0%)        0 (0%)         102m
  kube-system                 kube-proxy-4844b                                                   100m (1%)     0 (0%)      20Mi (0%)        0 (0%)         102m
  kubecost                    kubecost-network-costs-s6znw                                       50m (0%)      500m (6%)   20Mi (0%)        0 (0%)         102m
  newrelic                    newrelic-nrk8s-kubelet-wqkk5                                       200m (2%)     0 (0%)      300M (0%)        600M (1%)      102m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests           Limits
  --------           --------           ------
  cpu                1670m (21%)        900m (11%)
  memory             20258595584 (63%)  52437403136 (164%)
  ephemeral-storage  0 (0%)             0 (0%)
  hugepages-1Gi      0 (0%)             0 (0%)
  hugepages-2Mi      0 (0%)             0 (0%)
  hugepages-32Mi     0 (0%)             0 (0%)
  hugepages-64Ki     0 (0%)             0 (0%)
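As a quick sanity check of the figures above (this is just the requests-vs-allocatable arithmetic from the two node descriptions, not Karpenter's internal calculation):

```python
def utilization(requested, allocatable):
    """Fraction of allocatable capacity consumed by pod requests."""
    return requested / allocatable

# ip-10-138-104-15 (r7a.4xlarge): 2590m CPU requested of 15890m allocatable
print(f"{utilization(2590, 15890):.0%}")  # prints 16%

# ip-10-138-109-39 (m6g.2xlarge): 1670m CPU requested of 7910m allocatable
print(f"{utilization(1670, 7910):.0%}")  # prints 21%

# Memory on the m6g.2xlarge: 20258595584 bytes of 31102772Ki allocatable
print(f"{utilization(20258595584, 31102772 * 1024):.0%}")  # prints 64%
```

So the node Karpenter chose to consolidate is at roughly 21% CPU and 64% memory utilization, while the survivor sits at 16% CPU and under 2% memory.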

The latter node is then consolidated at 10:26:

karpenter-86d4bc57b9-w6649 controller {"level":"INFO","time":"2024-09-18T09:26:03.494Z","logger":"controller","message":"disrupting nodeclaim(s) via delete, terminating 1 nodes (11 pods) ip-10-138-109-39.eu-west-1.compute.internal/m6g.2xlarge/spot","commit":"b897114","controller":"disruption","namespace":"","name":"","reconcileID":"49d4d848-f319-4592-8687-3b96738286ba","command-id":"c6d1b299-efb3-494f-a1ef-a6959b435103","reason":"underutilized"}
karpenter-86d4bc57b9-w6649 controller {"level":"INFO","time":"2024-09-18T09:26:04.664Z","logger":"controller","message":"annotated nodeclaim","commit":"b897114","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"enrichment-service-rvzhc"},"namespace":"","name":"enrichment-service-rvzhc","reconcileID":"2e90ee7f-297c-4468-af9d-5323025841b4","Node":{"name":"ip-10-138-109-39.eu-west-1.compute.internal"},"provider-id":"aws:///eu-west-1c/i-02a3c1cf6e820667f","karpenter.sh/nodeclaim-termination-timestamp":"2024-09-18T15:26:04Z"}
karpenter-86d4bc57b9-w6649 controller {"level":"INFO","time":"2024-09-18T09:26:04.828Z","logger":"controller","message":"tainted node","commit":"b897114","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-138-109-39.eu-west-1.compute.internal"},"namespace":"","name":"ip-10-138-109-39.eu-west-1.compute.internal","reconcileID":"f4ffa39c-9056-4be5-9d1f-97022795fc09","taint.Key":"karpenter.sh/disrupted","taint.Value":"","taint.Effect":"NoSchedule"}
karpenter-86d4bc57b9-w6649 controller {"level":"INFO","time":"2024-09-18T09:27:51.954Z","logger":"controller","message":"deleted node","commit":"b897114","controller":"node.termination","controllerGroup":"","controllerKind":"Node","Node":{"name":"ip-10-138-109-39.eu-west-1.compute.internal"},"namespace":"","name":"ip-10-138-109-39.eu-west-1.compute.internal","reconcileID":"d6b34ab8-010c-4a7b-9895-92eb49f38938"}
karpenter-86d4bc57b9-w6649 controller {"level":"INFO","time":"2024-09-18T09:27:52.271Z","logger":"controller","message":"deleted nodeclaim","commit":"b897114","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"enrichment-service-rvzhc"},"namespace":"","name":"enrichment-service-rvzhc","reconcileID":"3b90a2a4-e57b-4260-aefd-7743f4fe2d0a","Node":{"name":"ip-10-138-109-39.eu-west-1.compute.internal"},"provider-id":"aws:///eu-west-1c/i-02a3c1cf6e820667f"}

I appreciate that this is largely due to consolidateAfter, which we have set to 20 minutes. But it is also a combination of that, kube-scheduler placing pods on least-allocated nodes (as per the comment in #735 (comment)), and Karpenter lacking controls to reduce this disruption. In the above case I'd rather leave the node alone until its memory and CPU requests fall below 50%. This is similar to how Cluster Autoscaler works (https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work), though not as a blended metric, since Karpenter can provision different node sizes.
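To make the request concrete, here is a sketch of what such a knob could look like on our NodePool. Note that `utilizationThreshold` is purely hypothetical — it is not a real field in the Karpenter v1 API; everything else matches our current disruption settings:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: enrichment-service
spec:
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 20m
    # HYPOTHETICAL field, illustrating this feature request only:
    # skip consolidation for nodes whose CPU or memory requests
    # exceed this fraction of allocatable capacity.
    utilizationThreshold: 50%
```

With something like this, the m6g.2xlarge above (64% memory requested) would have been left alone, while genuinely empty or lightly loaded nodes would still be consolidated.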

Issue #1440 is somewhat related, but it only covers the replace case.

How important is this feature to you?

Important.

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@andrewhibbert andrewhibbert added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 18, 2024
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 18, 2024
leoryu commented Sep 19, 2024

Duplicate of #1117

njtran (Contributor) commented Sep 24, 2024

This is an intentional design decision in Karpenter that differs from CAS. We deliberately chose not to use a utilization threshold, as it can lead you into unexpected edge cases. As an alternative, we're discussing a price-improvement threshold in #1433. Closing this in favor of that.

@njtran njtran closed this as completed Sep 24, 2024